
brModel™ Methodology

A causal operating system for AI memory.

Instead of starting with “Which LLM?”, we start with memory and constraints — the parts that survive model churn. The goal is decision-grade behavior: traceable, governable, and able to abstain.

Provenance-first · Constraint gates · Causal traversal · Audit-ready traces

Mental model

brModel™ treats knowledge as a causal graph, not a pile of text chunks.

Facts become nodes with provenance; relationships encode mechanisms and allowed transformations; rules become enforceable constraints.

flowchart LR;
  Q["User question"] --> R["Retrieve facts"];
  R --> C["Causal graph traversal"];
  C --> T["Trace + citations"];
  T --> A["Answer or abstain"];
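
To make "facts become nodes with provenance" concrete, here is a minimal, purely illustrative Python sketch of that data model; `Fact`, `Edge`, and `CausalGraph` are hypothetical names, not brModel™ APIs:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Fact:
    """A node: an atomic claim plus where it came from."""
    id: str
    claim: str
    source: str          # provenance: document, sensor, curator, ...

@dataclass(frozen=True)
class Edge:
    """A directed relationship encoding a mechanism, not mere co-occurrence."""
    src: str             # Fact.id
    dst: str             # Fact.id
    mechanism: str       # e.g. "inhibits", "causes", "implies"

@dataclass
class CausalGraph:
    facts: dict[str, Fact] = field(default_factory=dict)
    edges: list[Edge] = field(default_factory=list)

    def add_fact(self, fact: Fact) -> None:
        self.facts[fact.id] = fact

    def add_edge(self, edge: Edge) -> None:
        # Relationships may only connect provenanced nodes:
        # an edge whose endpoints are unknown facts is rejected outright.
        if edge.src not in self.facts or edge.dst not in self.facts:
            raise ValueError("edge endpoints must be existing facts")
        self.edges.append(edge)
```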

The cognitive stack (high level)

We separate immutable reality from decision-making layers:

  • Facts & provenance (what happened, where it came from)
  • Domain models (what concepts mean)
  • Constraints (what is allowed)
  • Plans & predictions (what to do next, and what might happen)
flowchart TB;
  subgraph Objective["Objective layer"];
    F["Facts + sources"];
    M["Domain model"];
    G["Governance constraints"];
  end;
  subgraph Decision["Decision layer"];
    P["Plan / prescription"];
    S["Simulation / prediction"];
  end;
  F --> M --> G --> P --> S;
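
As a hedged sketch of that separation (all names hypothetical): the objective layer is immutable, and the decision layer is derived from it rather than merged back into it, so "what happened" stays distinct from "what we chose to do":

```python
from dataclasses import dataclass

# Objective layer: immutable inputs the decision layer may read but not change.
@dataclass(frozen=True)
class ObjectiveLayer:
    facts: tuple[str, ...]        # provenanced facts
    domain_model: dict            # what concepts mean
    constraints: tuple[str, ...]  # what is allowed

# Decision layer: derived, revisable outputs.
@dataclass
class DecisionLayer:
    plan: list[str]
    predictions: list[str]

def decide(objective: ObjectiveLayer) -> DecisionLayer:
    # Plans are computed *from* the objective layer; nothing writes back.
    plan = [f"act-on:{fact}" for fact in objective.facts]
    return DecisionLayer(plan=plan, predictions=[])
```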

Why this reduces hallucinations

Edges constrain reasoning

A model can’t “invent a relationship” if it must traverse an existing graph edge.
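
For instance, a breadth-first search that can only follow edges already present in the graph; when no path exists, the only honest output is abstention (the entity names below are made up for illustration):

```python
from collections import deque

def reachable_via_edges(adjacency: dict[str, set[str]],
                        start: str, goal: str) -> bool:
    """BFS that only follows edges already in the graph.

    The system may claim "start relates to goal" only if such a path
    exists; there is no mechanism for hallucinating a missing link.
    """
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in adjacency.get(node, set()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

adjacency = {"drugA": {"proteinX"}, "proteinX": {"pathwayY"}}
assert reachable_via_edges(adjacency, "drugA", "pathwayY")       # supported
assert not reachable_via_edges(adjacency, "drugA", "pathwayZ")   # must abstain
```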

Constraints enforce policy

A policy can’t be bypassed if it’s encoded as an enforcement gate.
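
A toy illustration of such a gate, assuming constraints are expressed as predicate functions that every candidate answer must pass before release:

```python
from typing import Callable

def gate(answer: str, checks: list[Callable[[str], bool]]) -> str:
    """Run every policy check; any failure routes to abstention, not the user."""
    for check in checks:
        if not check(answer):
            return "ABSTAIN: policy violation"
    return answer

def no_dosage_advice(a: str) -> bool:
    return "dose" not in a.lower()

def must_cite(a: str) -> bool:
    return "[source:" in a

print(gate("Take a 50 mg dose.", [no_dosage_advice, must_cite]))
# -> ABSTAIN: policy violation
print(gate("DrugA inhibits ProteinX [source: trial-42].",
           [no_dosage_advice, must_cite]))
# -> DrugA inhibits ProteinX [source: trial-42].
```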

Debugging becomes concrete

You can localize failures to data, model behavior, or missing rules.
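
A hedged sketch of that triage, assuming a query has failed and we want to know which layer owns the failure (all names illustrative):

```python
def triage(known_facts: set, question_entities: list, path_found: bool) -> str:
    """Map a failed answer to the layer that owns the failure."""
    missing = [e for e in question_entities if e not in known_facts]
    if missing:
        return f"data problem: facts never ingested: {missing}"
    if not path_found:
        return "missing rule: no edge or constraint covers this relation"
    return "model behavior: the trace exists, so audit the generation step"

print(triage({"drugA", "proteinX"}, ["drugA", "geneZ"], path_found=False))
# -> data problem: facts never ingested: ['geneZ']
```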

Concept map (how vs why)

Methodology is the how. Philosophy is the why.

Model diagrams

The diagrams below contrast baseline agent and retrieval patterns with the causal, constraint-gated approach, building up to brCausalGraphRAG.


AI Agent vs Agentic AI

flowchart TB;
  subgraph ToolUse["AI Agent (tool-using)"];
    U["User"] --> Q["Question"];
    Q --> L["LLM"];
    L --> T["Tools"];
    T --> L;
    L --> A["Answer"];
  end;
  subgraph Agentic["Agentic AI (system property)"];
    G["Goal"] --> P["Plan"];
    P --> X["Act"];
    X --> O["Observe"];
    O --> M["Memory"];
    M --> P;
    O --> V["Validate constraints"];
    V -->|"Fail"| S["Stop / abstain / escalate"];
    V -->|"Pass"| P;
  end;
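
One way to read the right-hand loop as code, hedged as a sketch (all callables are assumed, not a real API): the validator sits inside the loop, so a failed check stops the run instead of leaking an answer:

```python
def agentic_loop(goal, planner, actor, validator, max_steps=5):
    """Goal -> plan -> act -> observe; memory and validation close the loop."""
    memory: list[str] = []
    for _ in range(max_steps):
        step = planner(goal, memory)       # plan from goal + accumulated memory
        observation = actor(step)          # act on the world, observe the result
        memory.append(observation)
        if not validator(observation):     # constraint check inside the loop
            return "stop: constraint violated, escalate"
        if goal in observation:            # toy success criterion
            return observation
    return "stop: step budget exhausted"
```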

Correlation vs Causality (confounding)

graph LR;
  C["Confounder C"] --> X["X"];
  C --> Y["Y"];
  X --> Y;
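
A quick simulation of exactly this graph shows why the distinction matters: with a true direct effect of 0.5, the naive X-on-Y slope comes out near 1.0 because the back-door path through C leaks in, while adjusting for C recovers 0.5 (numbers apply to this illustrative setup only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
c = rng.normal(size=n)                 # confounder C
x = c + rng.normal(size=n)             # C -> X
y = 0.5 * x + c + rng.normal(size=n)   # X -> Y (true direct effect 0.5), C -> Y

# Naive slope of Y on X mixes the direct effect with the back-door path via C.
naive = np.polyfit(x, y, 1)[0]

# Adjusting for C (regress Y on both X and C) recovers the direct effect.
design = np.column_stack([x, c, np.ones(n)])
adjusted = np.linalg.lstsq(design, y, rcond=None)[0][0]

print(f"naive slope    ~ {naive:.2f}")     # ~ 1.00: correlation overstates X -> Y
print(f"adjusted slope ~ {adjusted:.2f}")  # ~ 0.50: the actual causal effect
```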

Property Graphs vs Knowledge Graphs

flowchart LR;
  PG["Property Graph (nodes/edges + properties)"] --> KG["Knowledge Graph (ontology + constraints + meaning)"];
  KG --> Q["Queries with validity guarantees"];
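
As an illustrative shape check (loosely SHACL-flavored; names hypothetical): a knowledge graph adds meaning on top of a property graph by enforcing what a valid node must look like, which is what lets queries carry validity guarantees:

```python
# Required properties per node type: a stand-in for a real ontology/shape layer.
shapes = {"Drug": {"name", "source"}}

def conforms(node_type: str, properties: dict) -> bool:
    """A node is valid only if it carries every property its type requires."""
    return shapes.get(node_type, set()) <= properties.keys()

print(conforms("Drug", {"name": "aspirin", "source": "drugbank"}))  # True
print(conforms("Drug", {"name": "aspirin"}))  # False: provenance is mandatory
```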

LLM + Tool + RAG (baseline)

flowchart LR;
  U["User"] --> L["LLM"];
  L -->|"Search / retrieve"| R["RAG"];
  R --> L;
  L -->|"Call tools"| T["Tools / APIs"];
  T --> L;
  L --> A["Answer"];

CausalGraphRAG (paths, not paragraphs)

flowchart LR;
  Q["Question"] --> S["Start node(s)"];
  S --> P["Path search with constraints"];
  P --> T["Trace + evidence"];
  T --> A["Answer or abstain"];
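
A minimal sketch of path search over typed edges, assuming each edge carries a mechanism label and only whitelisted mechanisms may be traversed; a missing path maps directly to abstention (entity and mechanism names are made up):

```python
def constrained_path(edges, start, goal, allowed, path=()):
    """Depth-first search over typed edges; only `allowed` mechanisms count.

    Returns the evidence path (the trace) or None, which means abstain.
    """
    if start == goal:
        return list(path)
    for src, mech, dst in edges:
        if src == start and mech in allowed and (src, mech, dst) not in path:
            found = constrained_path(edges, dst, goal, allowed,
                                     path + ((src, mech, dst),))
            if found is not None:
                return found
    return None

edges = [("geneA", "upregulates", "proteinB"),
         ("proteinB", "inhibits", "pathwayC")]
trace = constrained_path(edges, "geneA", "pathwayC",
                         allowed={"upregulates", "inhibits"})
print(trace or "abstain")   # path found -> evidence; no path -> abstain
```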

brCausalGraphRAG (decision-grade)

flowchart TB;
  Q["Question"] --> S["Select start nodes"];
  S --> P["Constrained path search"];
  P --> V["Validate shapes / constraints"];
  V -->|"Pass"| T["Generate trace object"];
  T --> A["Answer with evidence"];
  V -->|"Fail"| X["Abstain / escalate"];
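
A sketch of what a trace object could contain, assuming (hypothetically) that each constraint or shape check reports a named pass/fail result before any answer is released:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trace:
    """Audit-ready record emitted alongside every answer."""
    question: str
    path: tuple        # graph edges actually traversed
    citations: tuple   # provenance for each fact on the path
    checks: tuple      # names of the constraint/shape validations that ran

def respond(question, path, citations, check_results):
    """Release an answer only if every named check passed; else abstain."""
    failed = [name for name, ok in check_results if not ok]
    if failed:
        return None, f"abstain/escalate: failed {failed}"
    trace = Trace(question, tuple(path), tuple(citations),
                  tuple(name for name, _ in check_results))
    return trace, "answer with evidence attached"
```
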
## Next pages (skeleton)

- Engagement patterns: [Services](../services/)
- Applied outcomes: [Case Studies](../case-studies/)
- Real example: [SK Biomedicine](../case-studies/biomedicine/)