The Three Laws
Operating principles: three laws for decision-grade AI.
These are not slogans. They translate directly into architecture: evidence gates, constraint enforcement, and accountable decision ownership.
Law 1: No answer without evidence
If the system can’t point to a source, it should say “I don’t know”.
Evidence is a gate: it prevents plausible-but-wrong claims from entering high-stakes workflows.
Implementation requirements
- Outputs carry citations/provenance (document, section, timestamp, version).
- Claims are separated into facts vs hypotheses vs assumptions.
- Missing evidence triggers abstention or escalation.
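A minimal sketch of how such a gate could look in code, assuming a Python service; the names `Claim`, `Citation`, `evidence_gate` and the field layout are illustrative, not an existing API. The point is that provenance is a typed, required field, claims are typed as fact, hypothesis, or assumption, and missing evidence forces abstention rather than a confident answer.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional


class ClaimType(Enum):
    FACT = "fact"              # directly supported by cited evidence
    HYPOTHESIS = "hypothesis"  # inferred; needs verification before use
    ASSUMPTION = "assumption"  # taken as given; must be flagged as such


@dataclass
class Citation:
    document: str
    section: str
    version: str
    timestamp: datetime


@dataclass
class Claim:
    text: str
    claim_type: ClaimType
    citations: list[Citation] = field(default_factory=list)


@dataclass
class GatedAnswer:
    claims: list[Claim]
    abstained: bool = False
    escalation_note: Optional[str] = None


def evidence_gate(claims: list[Claim]) -> GatedAnswer:
    """Block any answer whose factual claims lack provenance."""
    unsupported = [c for c in claims
                   if c.claim_type is ClaimType.FACT and not c.citations]
    if unsupported:
        # Missing evidence triggers abstention or escalation, never a guess.
        return GatedAnswer(
            claims=[],
            abstained=True,
            escalation_note=f"{len(unsupported)} factual claim(s) lack citations",
        )
    return GatedAnswer(claims=claims)
```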
Law 2: Order before speed
Structure the domain before automating decisions.
The fastest way to ship unreliable AI is to automate first and model the domain later.
Implementation requirements
- Define core concepts and relations (what exists, how it connects).
- Encode constraints (what must never happen; what is allowed only under conditions).
- Version the knowledge layer; treat changes as operational risk.
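A minimal sketch of a versioned knowledge layer with explicit constraints, again in Python and with illustrative names (`KnowledgeLayer`, `Constraint`, the discount rule): concepts and relations describe what exists and how it connects, constraints encode what must never happen, and the version string makes every change to the layer a reviewable, operational event.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass(frozen=True)
class Constraint:
    name: str
    description: str
    violates: Callable[[dict], bool]  # True if a proposed action breaks the rule
    hard: bool = True  # hard: must never happen; soft: allowed only under conditions


@dataclass
class KnowledgeLayer:
    version: str  # the knowledge layer is versioned; changes are operational risk
    concepts: dict[str, str] = field(default_factory=dict)               # what exists
    relations: list[tuple[str, str, str]] = field(default_factory=list)  # how it connects
    constraints: list[Constraint] = field(default_factory=list)

    def check(self, proposed_action: dict) -> list[str]:
        """Return the names of violated constraints; empty means the action passes."""
        return [c.name for c in self.constraints if c.violates(proposed_action)]


# Hypothetical example: discounts above 20% must never be applied automatically.
kl = KnowledgeLayer(
    version="2024.06.1",
    concepts={"Order": "a confirmed customer purchase",
              "Discount": "a price reduction applied to an order"},
    relations=[("Discount", "applies_to", "Order")],
    constraints=[
        Constraint(
            name="max_discount",
            description="Automatic discounts above 20% are never allowed",
            violates=lambda action: action.get("discount", 0) > 0.20,
        )
    ],
)

print(kl.check({"discount": 0.35}))  # ['max_discount'] -> block or escalate
```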
Law 3: Humans remain accountable
AI assists, simulates, and recommends. Humans own responsibility.
Accountability can be supported by AI; it cannot be outsourced to it.
Implementation requirements
- Explicit decision owner per workflow (role, escalation path).
- Audit trail: what was proposed, why, what evidence, what constraints, who approved.
- Clear separation between “advisor mode” and “action mode”.
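A minimal sketch of the accountability side, with illustrative names (`DecisionRecord`, `AuditLog`, `Mode`): every proposal carries an explicit owner, its evidence references, and the constraints that were checked; nothing executes outside action mode with a human approval, and rejections are logged just like approvals.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class Mode(Enum):
    ADVISOR = "advisor"  # the system may only propose
    ACTION = "action"    # execution is allowed, but only after human approval


@dataclass
class DecisionRecord:
    workflow: str
    owner_role: str                 # explicit decision owner for this workflow
    escalation_path: str
    proposal: str                   # what was proposed
    rationale: str                  # why
    evidence_refs: list[str]        # what evidence
    constraints_checked: list[str]  # what constraints were evaluated
    decision: Optional[str] = None        # "approved" or "rejected"
    decided_by: Optional[str] = None      # who approved or rejected
    decided_at: Optional[datetime] = None


@dataclass
class AuditLog:
    records: list[DecisionRecord] = field(default_factory=list)

    def approve(self, record: DecisionRecord, approver: str, mode: Mode) -> DecisionRecord:
        if mode is not Mode.ACTION:
            raise PermissionError("advisor mode: proposals cannot be executed")
        return self._log(record, "approved", approver)

    def reject(self, record: DecisionRecord, reviewer: str) -> DecisionRecord:
        # Rejections are logged too; the trail shows what was refused and by whom.
        return self._log(record, "rejected", reviewer)

    def _log(self, record: DecisionRecord, decision: str, who: str) -> DecisionRecord:
        record.decision = decision
        record.decided_by = who
        record.decided_at = datetime.now(timezone.utc)
        self.records.append(record)
        return record
```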
Diagram: evidence gate (non-negotiable)
```mermaid
flowchart LR
    Q["Question / decision"] --> E["Evidence available?"]
    E -->|"No"| A["Abstain / escalate"]
    E -->|"Yes"| V["Verify + trace"]
    V --> O["Output + provenance"]
```
Diagram: human accountability in the loop
```mermaid
flowchart TB
    S["System proposes"] --> J["Human judgment"]
    J -->|"Approve"| X["Execute / publish"]
    J -->|"Reject"| R["Revise / request more evidence"]
    X --> L["Log decision + rationale"]
    R --> L
```