
SK Biomedicine: Mechanism Discovery


Mechanism discovery: from “relevant papers” to testable causal chains.

The question is not whether two concepts co-occur in text. The question is whether there is a mechanistic chain you can inspect, challenge, and experimentally validate.

The causal question

How do we uncover mechanistic chains (not just correlations) around targets like CA IX in tumor microenvironments?

Why probabilistic search fails (even when it is “honest”)

Retrieval returns relevance

“Here are papers about CA IX” does not equal “here is a chain that explains the outcome.”

Text summaries blur mechanisms

Models can produce cautious language (“evidence is mixed”) without specifying what would falsify which link.

No trace = no lab plan

Without a structured path and citations per edge, you can’t design targeted experiments.

What changes with causal traversal

We encode entities, interactions, and provenance into a causal graph and run directed pathfinding.

The output is a candidate mechanism with evidence per edge — or an abstention with missing data requirements.

Diagram: candidate mechanistic chain

flowchart LR;
  CA["CA IX"] --> PH["Extracellular pH"];
  PH --> PROT["Proteases"];
  PROT --> INV["Invasiveness"];
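
A minimal sketch of this step, assuming a networkx digraph, the node names from the diagram, and placeholder source identifiers (none of these come from the production system):

# Minimal sketch: encode the diagram's chain as a directed graph with
# sources attached per edge, then enumerate directed paths and abstain
# on missing support. Node names and sources are placeholders.
import networkx as nx

G = nx.DiGraph()
G.add_edge("CA IX", "Extracellular pH", sources=["paper-A"])
G.add_edge("Extracellular pH", "Proteases", sources=["paper-B"])
G.add_edge("Proteases", "Invasiveness", sources=[])  # no support yet

def mechanistic_paths(graph, cause, effect):
    """Yield candidate chains, or abstentions listing unsupported links."""
    for path in nx.all_simple_paths(graph, cause, effect):
        edges = list(zip(path, path[1:]))
        unsupported = [e for e in edges if not graph.edges[e]["sources"]]
        if unsupported:
            yield {"status": "abstain", "path": path,
                   "missing_evidence_for": unsupported}
        else:
            yield {"status": "candidate", "path": path,
                   "evidence": {e: graph.edges[e]["sources"] for e in edges}}

for result in mechanistic_paths(G, "CA IX", "Invasiveness"):
    print(result)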

Diagram: evidence and provenance per edge

flowchart TB;
  S["Source (paper / dataset)"] --> C["Claim"];
  C --> E["Edge assertion"];
  E --> P["Path candidate"];
  P --> T["Trace object"];
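
Read as data, the chain above might look like the following Python dataclasses; the field names and types are assumptions for illustration, not the actual schema:

# Sketch of the provenance chain from the diagram above.
# Field names and types are illustrative, not the real schema.
from dataclasses import dataclass, field

@dataclass
class Source:
    """A paper or dataset, pinned to a version for auditability."""
    identifier: str   # e.g. a DOI or dataset accession (placeholder)
    version: str

@dataclass
class Claim:
    """A statement extracted from a source."""
    text: str
    source: Source

@dataclass
class EdgeAssertion:
    """A directed cause -> effect link backed by one or more claims."""
    cause: str
    effect: str
    claims: list[Claim] = field(default_factory=list)

@dataclass
class PathCandidate:
    """An ordered chain of edge assertions from cause to outcome."""
    edges: list[EdgeAssertion]

@dataclass
class Trace:
    """The auditable object returned with any answer or abstention."""
    path: PathCandidate
    missing: list[EdgeAssertion]   # links that still lack support

# Example edge, using the same placeholder source as the sketch above.
edge = EdgeAssertion(
    cause="CA IX", effect="Extracellular pH",
    claims=[Claim(text="CA IX activity acidifies the tumor milieu",
                  source=Source(identifier="paper-A", version="v1"))])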

Outputs

Traceable paths

Causal chains with supporting sources and versioned evidence.

Hypotheses

Candidates ranked by mechanistic plausibility, not by rhetorical fluency (a naive scoring sketch follows this list).

Falsification plan

A clear statement of which evidence is missing and which single link, if overturned, would change the conclusion (see the knockout sketch after this list).

Iterability

A model that improves as new studies arrive without losing auditability.
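
For the ranking above, one naive stand-in, continuing the networkx sketch from earlier; the real plausibility metric is not specified here:

# Naive illustration only: rank candidate chains by the evidence count
# of their weakest link. A stand-in, not the actual plausibility metric.
def rank_candidates(graph, paths):
    def weakest_link_support(path):
        return min((len(graph.edges[e]["sources"])
                    for e in zip(path, path[1:])), default=0)
    return sorted(paths, key=weakest_link_support, reverse=True)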
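
And a hedged sketch of the falsification idea on the same illustrative graph: knock out each link in turn and check whether the cause still reaches the effect. Links whose removal breaks the path are the experiments worth running first.

import networkx as nx

def load_bearing_links(graph, cause, effect):
    """Return edges whose removal disconnects cause from effect."""
    critical = []
    for u, v in list(graph.edges):
        trimmed = graph.copy()
        trimmed.remove_edge(u, v)
        if not nx.has_path(trimmed, cause, effect):
            critical.append((u, v))
    return critical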