brModel™ Methodology¶
A causal operating system for AI memory.
Instead of starting with “Which LLM?”, we start with memory and constraints — the parts that survive model churn. The goal is decision-grade behavior: traceable, governable, and able to abstain.
Methodology map (pages and how they connect)¶
flowchart TB
%% Styles (brModel Standard)
classDef i fill:#D3D3D3,stroke-width:0px,color:#000;
classDef p fill:#B3D9FF,stroke-width:0px,color:#000;
classDef r fill:#FFFFB3,stroke-width:0px,color:#000;
classDef o fill:#C1F0C1,stroke-width:0px,color:#000;
classDef s fill:#FFB3B3,stroke-width:0px,color:#000;
I_Start(["🧭 Start here: what must be decision-grade?"]):::i
P_Prim("🧩 Core primitives (entities, relations, provenance)"):::p
P_Graphs("🕸️ Property + knowledge graphs (meaning, ontology)"):::p
P_Constraints("🔒 Constraints & SHACL (what is allowed)"):::p
P_Baseline("🧱 LLM + Tool + RAG (baseline pipeline)"):::p
P_Causal("🧠 CausalGraphRAG (paths, not paragraphs)"):::p
P_brCausal("✅ brCausalGraphRAG (validation + refusal + audit)"):::p
R_Trace(["🧾 Trace object (evidence + provenance + rationale)"]):::r
O_Behavior(["✅ Decision-grade behavior (governable + testable)"]):::o
P_Gov("🏛️ Governance approach"):::p
I_Start --> P_Prim --> P_Graphs --> P_Constraints
P_Constraints --> P_Baseline
P_Baseline --> P_Causal --> P_brCausal --> R_Trace --> O_Behavior
P_Constraints --> P_Gov --> O_Behavior
%% Clickable nodes
click P_Prim "core-primitives/" "Core primitives"
click P_Graphs "property-and-knowledge-graphs/" "Property & Knowledge Graphs"
click P_Constraints "constraints/" "Constraints & SHACL"
click P_Baseline "llm-tool-rag/" "LLM + Tool + RAG"
click P_Causal "causalgraphrag/" "CausalGraphRAG"
click P_brCausal "brcausalgraphrag/" "brCausalGraphRAG"
click P_Gov "/reasoners/governance/" "Governance"
How to read this: start at 🧭 decision-grade stakes, then move through 🧩 primitives and 🕸️ graph semantics into 🔒 constraints. From there you can follow the baseline 🧱 LLM + Tool + RAG path, upgrade to 🧠 CausalGraphRAG, and land in ✅ brCausalGraphRAG where outputs become 🧾 trace objects and ✅ governable behavior.
Mental model¶
brModel™ treats knowledge as a causal graph, not a pile of text chunks.
Facts become nodes with provenance; relationships encode mechanisms and allowed transformations; rules become enforceable constraints.
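The mental model above can be sketched in a few lines of code. This is an illustrative sketch only; `FactNode` and `CausalEdge` are hypothetical names, not part of any published brModel API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FactNode:
    claim: str
    source: str          # provenance: where the fact came from
    version: int = 1     # updates become new versions, never silent overwrites

@dataclass(frozen=True)
class CausalEdge:
    cause: FactNode
    effect: FactNode
    mechanism: str       # why the relationship holds, not just that it does

# A fact is never a bare string: it carries its source and version,
# and a relationship is never implicit: it names its mechanism.
dose = FactNode("drug X dosage raised", source="trial-2024.pdf")
risk = FactNode("adverse-event risk increases", source="trial-2024.pdf")
edge = CausalEdge(dose, risk, mechanism="dose-dependent toxicity")
```

Because nodes are frozen, "changing a fact" means committing a new version rather than mutating history, which is what makes provenance auditable later.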
flowchart TB
%% Styles (brModel Standard)
classDef i fill:#D3D3D3,stroke-width:0px,color:#000;
classDef p fill:#B3D9FF,stroke-width:0px,color:#000;
classDef r fill:#FFFFB3,stroke-width:0px,color:#000;
classDef o fill:#C1F0C1,stroke-width:0px,color:#000;
classDef s fill:#FFB3B3,stroke-width:0px,color:#000;
I_Q(["❓ Question / task"]):::i
R_Query(["📌 Query spec (intent + constraints)"]):::r
P_Retrieve("🔎 Retrieve candidates"):::p
R_Facts(["📎 Fact pack (typed claims + provenance)"]):::r
P_Traverse("🕸️ Search graph paths"):::p
R_Paths(["🧭 Path candidates (mechanisms + evidence)"]):::r
D_Gate{"✅ Valid under constraints?"}:::s
R_Trace(["🧾 Trace object (evidence + provenance)"]):::r
O_Out(["✅ Answer or abstain (audit-ready)"]):::o
S_Stop(["🛑 Abstain or escalate (why it failed)"]):::i
I_Q --> R_Query --> P_Retrieve --> R_Facts --> P_Traverse --> R_Paths --> D_Gate
D_Gate -->|"Yes"| R_Trace --> O_Out
D_Gate -->|"No"| S_Stop
%% Clickable nodes
click P_Traverse "causalgraphrag/" "CausalGraphRAG"
click R_Trace "brcausalgraphrag/" "brCausalGraphRAG"
Mechanism: a ❓ question becomes a 📌 query spec, flows through 🔎 retrieval and 🕸️ path search, then hits a ✅ validity gate. Passing yields a 🧾 trace object and an ✅ audit-ready answer; failing yields 🛑 abstain/escalate with a reason.
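The validity gate above can be sketched as follows, assuming a path is a list of typed edges and validity means every constraint passes. The `Constraint` class and field names are hypothetical, shown only to make the answer-or-abstain contract concrete:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Constraint:
    name: str
    check: Callable[[list], bool]

def gate(paths: List[list], constraints: List[Constraint]) -> dict:
    """Answer with a trace when paths survive all constraints; abstain with reasons otherwise."""
    valid, reasons = [], []
    for path in paths:
        failed = [c.name for c in constraints if not c.check(path)]
        (reasons if failed else valid).append((path, failed) if failed else path)
    if not valid:
        # Failing the gate is not an error state: it is a first-class output.
        return {"decision": "abstain", "why": reasons}
    return {"decision": "answer", "paths": valid,
            "trace": [(p, "passed all constraints") for p in valid]}

no_empty = Constraint("non-empty path", lambda p: len(p) > 0)
```

The key design point is that abstention carries the failing constraint names, so a downstream reviewer sees *why* the system refused rather than a generic failure.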
The cognitive stack (high level)¶
We separate immutable reality from decision-making layers:
- Facts & provenance (what happened, where it came from)
- Domain models (what concepts mean)
- Constraints (what is allowed)
- Plans & predictions (what to do next, and what might happen)
flowchart TB
%% Styles (brModel Standard)
classDef i fill:#D3D3D3,stroke-width:0px,color:#000;
classDef p fill:#B3D9FF,stroke-width:0px,color:#000;
classDef r fill:#FFFFB3,stroke-width:0px,color:#000;
classDef o fill:#C1F0C1,stroke-width:0px,color:#000;
classDef s fill:#FFB3B3,stroke-width:0px,color:#000;
I_F(["📌 Facts + sources"]):::i
R_Feed(["📦 Fact ledger (versioned evidence)"]):::r
P_M("📚 Model meaning"):::p
R_Model(["🗺️ Domain model artifact (concepts + relations)"]):::r
P_G("🔒 Compile constraints"):::p
R_Rules(["📐 Constraint set (enforceable rules)"]):::r
D_Allow{"✅ Allowed under rules?"}:::s
P_P("🧭 Synthesize prescription"):::p
R_Plan(["🧾 Plan artifact (steps + guards)"]):::r
P_S("🧪 Run prediction"):::p
R_Sim(["📊 Scenario outcomes (assumptions exposed)"]):::r
D_Stable{"✅ Stable under counterfactual check?"}:::s
O_D(["✅ Decision + trace (or refusal)"]):::o
S_Block(["🛑 Refuse or escalate (rule or stability failure)"]):::i
I_F --> R_Feed --> P_M --> R_Model --> P_G --> R_Rules --> D_Allow
D_Allow -->|"Yes"| P_P --> R_Plan --> P_S --> R_Sim --> D_Stable
D_Stable -->|"Yes"| O_D
D_Allow -->|"No"| S_Block
D_Stable -->|"No"| S_Block
%% Clickable nodes
click P_M "property-and-knowledge-graphs/" "Graphs"
click P_G "constraints/" "Constraints & SHACL"
Stack logic: facts become a 📦 versioned ledger, meaning becomes a 🗺️ model artifact, governance compiles into 📐 enforceable rules, and decisions only proceed when ✅ allowed and ✅ stable under counterfactual checks. Otherwise you get 🛑 refusal/escalation instead of fluent drift.
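The two decision points in the stack can be sketched as sequential gates. The predicates `allowed_by_rules` and `stable_under_counterfactuals` are hypothetical placeholders for whatever rule engine and counterfactual check a deployment supplies:

```python
def decide(plan, allowed_by_rules, stable_under_counterfactuals):
    """A decision proceeds only when both gates pass; otherwise refuse with the failing gate named."""
    if not allowed_by_rules(plan):
        return {"outcome": "refuse", "reason": "rule failure"}
    if not stable_under_counterfactuals(plan):
        return {"outcome": "refuse", "reason": "stability failure"}
    return {"outcome": "decision", "plan": plan,
            "trace": "allowed under rules and stable under counterfactuals"}
```

Ordering matters: the rule gate runs first, so a plan is never simulated if policy already forbids it.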
Why this reduces hallucinations¶
Edges constrain reasoning
A model can’t “invent a relationship” if it must traverse an existing graph edge.
Constraints enforce policy
A policy can’t be bypassed if it’s encoded as an enforcement gate.
Debugging becomes concrete
You can localize failures to data, model behavior, or missing rules.
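The three points above can be made concrete with a toy traversal. The graph and relation names are invented for illustration; the point is only that lookup can return what was asserted or nothing, never an invented edge:

```python
# Only asserted edges exist; there is no generative fallback.
graph = {
    ("aspirin", "inhibits"): "cox-1",
    ("cox-1", "mediates"): "platelet aggregation",
}

def follow(node: str, relation: str):
    """Return the target of an asserted edge, or None; a relationship cannot be invented."""
    return graph.get((node, relation))
```

A `None` result is also what makes debugging concrete: it tells you the failure is missing data (no such edge), not model behavior.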
Concept map (how vs why)¶
Methodology is the how. Philosophy is the why.
- Philosophy: AI Agent vs Agentic AI
- Philosophy: Correlation vs Causality
- Philosophy: AI Consciousness (operational view)
- Methodology: Property Graphs & Knowledge Graphs
- Methodology: LLM + Tool + RAG
- Methodology: CausalGraphRAG
- Methodology: brCausalGraphRAG
Model diagrams¶
These diagrams are native to Methodology. They summarize the layer model, the memory schema, and the end-to-end decision-grade pipeline.
flowchart TB
%% Styles (brModel Standard)
classDef i fill:#D3D3D3,stroke-width:0px,color:#000;
classDef p fill:#B3D9FF,stroke-width:0px,color:#000;
classDef r fill:#FFFFB3,stroke-width:0px,color:#000;
classDef o fill:#C1F0C1,stroke-width:0px,color:#000;
classDef s fill:#FFB3B3,stroke-width:0px,color:#000;
I_Prob(["Input problem, context, stakes"]):::i
P_L0("L0 Build causal model"):::p
R_L0(["L0 brCD artifact (executable causal claims)"]):::r
P_L1("L1 Type schema"):::p
R_L1(["L1 Schema artifact (Node Element Metric Cause Transfer)"]):::r
D_Shape{"✅ Schema consistent?"}:::s
P_L2("L2 Encode DSL"):::p
R_L2(["L2 DSL artifact (Source Subject Process Relation Object)"]):::r
P_L3("L3 Compile knowledge"):::p
R_L3(["L3 Knowledge artifact (patterns + semantics)"]):::r
P_L4("L4 Record experience"):::p
R_L4(["L4 Experience artifact (observations + instances)"]):::r
P_L5("L5 Synthesize prescription"):::p
O_L5(["L5 Prescription artifact (plans + workflows)"]):::o
P_L6("L6 Simulate prediction"):::p
O_L6(["L6 Prediction artifact (scenarios + counterfactuals)"]):::o
D_Ready{"✅ Decision-ready?"}:::s
O_Pack(["✅ Decision package (trace + plan + limits)"]):::o
S_Revise(["🛑 Revise model or request more data"]):::i
I_Prob --> P_L0 --> R_L0 --> P_L1 --> R_L1 --> D_Shape
D_Shape -->|"Yes"| P_L2 --> R_L2 --> P_L3 --> R_L3 --> P_L4 --> R_L4 --> P_L5 --> O_L5 --> P_L6 --> O_L6 --> D_Ready
D_Shape -->|"No"| S_Revise
D_Ready -->|"Yes"| O_Pack
D_Ready -->|"No"| S_Revise
S_Revise -. "model update" .-> P_L0
click R_L1 "core-primitives/" "Core primitives"
click R_L2 "property-and-knowledge-graphs/" "Graphs"
click R_L3 "property-and-knowledge-graphs/" "Knowledge graph semantics"
Layer model: each blue step produces a concrete artifact. L0 yields 🧾 brCD causal claims, L1 a typed schema, L2 a domain DSL, L3 knowledge patterns, and higher layers plans and scenarios. Two decision points (✅ schema consistent, ✅ decision-ready) force revision instead of silent incoherence.
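The layer sequence and its two gates can be sketched as data plus a runner. Layer names follow the diagram; the runner and its boolean gate inputs are illustrative stand-ins for the real consistency and readiness checks:

```python
LAYERS = [
    ("L0", "causal model"), ("L1", "type schema"), ("L2", "DSL"),
    ("L3", "knowledge"), ("L4", "experience"),
    ("L5", "prescription"), ("L6", "prediction"),
]

def run(schema_consistent: bool, decision_ready: bool) -> dict:
    """Walk the layers, stopping at the schema gate (after L1) or the readiness gate (after L6)."""
    artifacts = []
    for layer, artifact in LAYERS:
        artifacts.append((layer, artifact))
        if layer == "L1" and not schema_consistent:
            return {"status": "revise", "stopped_at": "L1", "artifacts": artifacts}
    if not decision_ready:
        return {"status": "revise", "stopped_at": "L6", "artifacts": artifacts}
    return {"status": "decision-package", "artifacts": artifacts}
```

Note that a revision result still returns the artifacts produced so far, mirroring the diagram's dashed "model update" loop back to L0.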
flowchart TB
%% Styles (brModel Standard)
classDef i fill:#D3D3D3,stroke-width:0px,color:#000;
classDef p fill:#B3D9FF,stroke-width:0px,color:#000;
classDef r fill:#FFFFB3,stroke-width:0px,color:#000;
classDef o fill:#C1F0C1,stroke-width:0px,color:#000;
classDef s fill:#FFB3B3,stroke-width:0px,color:#000;
I_S(["Sources, documents, systems"]):::i
P_Extract("Extract and type"):::p
R_Typed(["Typed statements (claims + provenance)"]):::r
P_Graph("Assemble graph snapshot"):::p
R_Snap(["Graph snapshot (typed nodes and edges)"]):::r
P_Con("Enforce constraints"):::p
R_Valid(["Validated graph (or violations)"]):::r
D_Conform{"✅ Conforms?"}:::s
P_Path("Search under constraints"):::p
R_Path(["Path result (mechanism + evidence)"]):::r
D_Evidence{"✅ Enough evidence?"}:::s
R_Trace(["Trace object (evidence + rules)"]):::r
O_Out(["Decision-grade output or refusal"]):::o
S_Viol(["🛑 Refuse or return violations"]):::i
S_More(["🛑 Ask for more data or narrower question"]):::i
I_S --> P_Extract --> R_Typed --> P_Graph --> R_Snap --> P_Con --> R_Valid --> D_Conform
D_Conform -->|"Yes"| P_Path --> R_Path --> D_Evidence
D_Conform -->|"No"| S_Viol
D_Evidence -->|"Yes"| R_Trace --> O_Out
D_Evidence -->|"No"| S_More
click R_Typed "core-primitives/" "Core primitives"
click P_Graph "property-and-knowledge-graphs/" "Property and knowledge graphs"
click P_Con "constraints/" "Constraints and SHACL"
click P_Path "causalgraphrag/" "CausalGraphRAG"
click R_Trace "brcausalgraphrag/" "brCausalGraphRAG"
Decision-grade pipeline: sources become typed statements, then a graph snapshot, then a validated graph. Only if it ✅ conforms do we search paths; only if there is ✅ enough evidence do we emit a 🧾 trace object and a ✅ decision-grade output. Otherwise we return 🛑 violations or 🛑 request missing data.
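The evidence gate can be sketched as a simple sufficiency check. The threshold, field names, and return shapes are hypothetical; the real criterion would come from the constraint set rather than a hard-coded minimum:

```python
def evidence_gate(path_sources: list, minimum: int = 2) -> dict:
    """Emit a trace only when a path carries enough supporting sources; otherwise ask for more data."""
    if len(path_sources) < minimum:
        return {"decision": "ask-for-more-data",
                "missing": minimum - len(path_sources)}
    return {"decision": "emit-trace", "evidence": list(path_sources)}
```

As with the conformance gate, the negative branch is informative: it says how much evidence is missing, which turns "refusal" into an actionable request.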
flowchart TB
%% Styles (brModel Standard)
classDef i fill:#D3D3D3,stroke-width:0px,color:#000;
classDef p fill:#B3D9FF,stroke-width:0px,color:#000;
classDef r fill:#FFFFB3,stroke-width:0px,color:#000;
classDef o fill:#C1F0C1,stroke-width:0px,color:#000;
classDef s fill:#FFB3B3,stroke-width:0px,color:#000;
I_Raw(["Raw observations and claims"]):::i
P_Prim("Normalize primitives"):::p
R_Prims(["Primitives (Node Element Metric Cause Transfer)"]):::r
P_DSL("Map to DSL roles"):::p
R_DSL(["DSL roles (Source Subject Process Relation Object)"]):::r
P_Edges("Type edges"):::p
R_Edge(["Edge families (Influence Inheritance)"]):::r
D_Types{"✅ Types consistent?"}:::s
D_Policy{"✅ Allowed by policy?"}:::s
P_Write("Write versioned memory"):::p
O_Mem(["Graph commit (auditable state)"]):::o
S_Reject(["🛑 Reject update (type or policy failure)"]):::i
I_Raw --> P_Prim --> R_Prims --> P_DSL --> R_DSL --> P_Edges --> R_Edge --> D_Types
D_Types -->|"Yes"| D_Policy
D_Types -->|"No"| S_Reject
D_Policy -->|"Yes"| P_Write --> O_Mem
D_Policy -->|"No"| S_Reject
click R_Prims "core-primitives/" "Core primitives"
click R_DSL "property-and-knowledge-graphs/" "Graphs"
click R_Edge "causalgraphrag/" "CausalGraphRAG"
click P_Write "brcausalgraphrag/" "Trace objects and memory writes"
Memory write discipline: raw claims are normalized into primitives, mapped to DSL roles, and typed into edge families. Two gates (✅ types consistent, ✅ allowed by policy) prevent invalid commits; passing produces an ✅ auditable graph commit, failing produces a 🛑 rejected update.
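The two-gate write discipline can be sketched as follows. The function and parameter names are hypothetical; the append-only list stands in for whatever versioned store a deployment uses:

```python
def write_memory(update, type_check, policy_check, log: list) -> dict:
    """Commit an update only if both gates pass; name the failing gate otherwise."""
    if not type_check(update):
        return {"status": "rejected", "gate": "types"}
    if not policy_check(update):
        return {"status": "rejected", "gate": "policy"}
    log.append(update)                      # append-only: each commit is a new version
    return {"status": "committed", "version": len(log)}
```

Because rejected updates never touch the log, the committed state stays auditable: every version in it passed both gates at write time.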
Next pages (skeleton)¶
- Engagement patterns: Services
- Applied outcomes: Case Studies
- Real example: Biomedicine