Systems of record store the final state. Context graphs store the reasoning, exceptions, and approvals that made that state legitimate.
Source: Foundation Capital – Jaya Gupta and Ashu Garg | Date: December 22, 2025
The Problem
Enterprise systems record what changed. They rarely record why.
Salesforce stores the discount that closed the deal. Zendesk stores that the ticket was escalated. PagerDuty stores that an incident fired. What usually disappears is the cross-system reasoning that connected customer value, policy constraints, exceptions, and approvals into a single decision.
Humans papered over that gap with meetings, Slack threads, and experienced operators. Agents cannot. If the organization never preserved the reasoning behind past exceptions, the agent inherits the same ambiguity with less institutional intuition to compensate.

What a Context Graph Actually Is
Foundation’s useful distinction is between rules and decision traces. Rules say what should usually happen. Decision traces capture what happened in this case, under which inputs, policy version, exception path, and approval.
A context graph is the accumulated structure formed by those traces across entities and time. It is not the model’s hidden chain of thought. It is a durable graph of business objects, decision events, evidence, approvals, and outcomes.
| Node Type | Example |
|---|---|
| Business entity | Account, renewal, contract, ticket, incident |
| Decision event | “Approved 20% renewal discount” |
| Evidence | Three SEV-1 incidents, open churn-risk escalation |
| Policy | Renewal discount cap, SLA exception rule |
| Human actor | Finance approver, support lead, VP |
| Agent run | The orchestration pass that gathered context and acted |
| Edge | Meaning |
|---|---|
| AFFECTED | Which entity this decision changed |
| JUSTIFIED_BY | Which evidence supported it |
| EVALUATED_AGAINST | Which policy or rule set applied |
| APPROVED_BY | Who authorized the exception |
| SIMILAR_TO | Which earlier cases serve as precedent |
| EXECUTED_IN | Which agent run or workflow created the trace |
That structure turns “why did we do that?” into a query rather than an archaeological exercise.
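To make that concrete, here is a minimal in-memory sketch of such a graph. The node ids, edge names, and the `why()` helper are illustrative assumptions, not a published schema; the point is that "why did we do that?" becomes a lookup over typed edges.

```python
from collections import defaultdict

class ContextGraph:
    """Toy context graph: nodes with attributes, typed edges between node ids."""

    def __init__(self):
        self.nodes = {}                 # node_id -> attributes
        self.edges = defaultdict(list)  # (src_id, edge_type) -> [dst_id, ...]

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, src, edge_type, dst):
        self.edges[(src, edge_type)].append(dst)

    def why(self, decision_id):
        """Answer 'why did we do that?' for one decision event."""
        return {
            "evidence":  self.edges[(decision_id, "JUSTIFIED_BY")],
            "policy":    self.edges[(decision_id, "EVALUATED_AGAINST")],
            "approver":  self.edges[(decision_id, "APPROVED_BY")],
            "precedent": self.edges[(decision_id, "SIMILAR_TO")],
        }

g = ContextGraph()
g.add_node("decision:renewal-discount-20", type="decision_event")
g.add_edge("decision:renewal-discount-20", "JUSTIFIED_BY", "pagerduty:sev1-batch")
g.add_edge("decision:renewal-discount-20", "EVALUATED_AGAINST", "policy:renewal-cap-v3")
g.add_edge("decision:renewal-discount-20", "APPROVED_BY", "user:finance-approver")

print(g.why("decision:renewal-discount-20"))
```

In a production system the same shape would live in a graph or relational store; the in-memory version just shows how little structure is needed before the query becomes trivial.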
Why Agents Make This More Valuable
A CRM sees the opportunity. A support system sees the ticket. A warehouse may see historical snapshots after ETL. The orchestration layer sees the decision moment itself: which systems were queried, what evidence was retrieved, which rule was evaluated, whether an exception route was opened, and what final action was committed.
If that moment is captured, the organization gets three things at once:
- Auditability. You can replay the context behind an automated or human-approved decision.
- Precedent reuse. Similar cases can inherit prior reasoning instead of starting from zero.
- Autonomy ratcheting. Human-in-the-loop decisions gradually become partially automatable because the system has a library of approved exception patterns.

This is bigger than a retrieval trick. It is a new data asset. Every reviewed decision adds another durable precedent edge.
Why Existing Systems Miss the Decision Moment
This is also why the layer is harder for incumbents to own than it first appears.
Traditional systems of record optimize for current state. They are excellent at telling you what an object looks like now. Warehouses optimize for historical analysis after ingestion. They can tell you how a number changed over time. Neither is naturally positioned to capture the exact moment when context from five systems was weighed, a rule was interpreted, an exception path was opened, and a human approved the deviation.
That is not a governance gap. It is an architectural position gap.
If a platform is downstream from the decision event, it can often reconstruct inputs after the fact, but it usually cannot prove which inputs were actually considered, in what policy context, and with which approval chain. The orchestration layer can. It is the only layer sitting in the path where context becomes action.
That is why the strategic value is so high. The system that captures decision lineage at commit time does not just add automation on top of existing software. It starts owning a category of truth the older stack never really stored.
Context Graphs Are Not Just GraphRAG
GraphRAG is a retrieval strategy. It organizes knowledge so queries can expand along relationships instead of relying only on similarity search. A context graph is a data model and system-of-record candidate that captures decision lineage in the execution path.
In practice, the two fit together:
- The context graph is the structured memory layer.
- GraphRAG is one way to retrieve from that layer.
- Progressive Disclosure determines how much of the graph the agent loads for the current decision.
- Skill Graphs show the same graph idea applied to agent knowledge rather than operational history.
That distinction matters because many teams hear “graph” and think the opportunity is better retrieval. Retrieval matters, but the deeper moat is owning the decision lineage that nobody else stored.
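One hedged sketch of how GraphRAG-style retrieval might sit on top of a context graph: start from a seed entity and expand breadth-first along a chosen set of edge types, rather than running similarity search alone. The ids and edge names are invented for illustration.

```python
from collections import defaultdict, deque

EDGES = defaultdict(list)  # src_id -> [(edge_type, dst_id), ...]

def link(src, etype, dst):
    EDGES[src].append((etype, dst))

link("account:acme", "AFFECTED_BY", "decision:disc-2024-q1")
link("decision:disc-2024-q1", "SIMILAR_TO", "decision:disc-2023-q3")
link("decision:disc-2023-q3", "APPROVED_BY", "user:vp-sales")

def expand(seed, follow, max_hops=2):
    """Breadth-first expansion from seed along the given edge types only."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for etype, dst in EDGES[node]:
            if etype in follow and dst not in seen:
                seen.add(dst)
                frontier.append((dst, depth + 1))
    return seen

context = expand("account:acme", follow={"AFFECTED_BY", "SIMILAR_TO"})
# The precedent decision is reached via SIMILAR_TO; the approver node is not,
# because APPROVED_BY was excluded from the follow set.
```

The `follow` set is where something like Progressive Disclosure would plug in: the agent widens or narrows the edge types and hop count depending on how much context the current decision warrants.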
A Concrete Example
Imagine a renewal agent recommends a 20% discount.
The written policy says renewals cap at 10% unless service-impact exceptions are approved. To make the recommendation, the agent gathers:
- three SEV-1 incidents from PagerDuty in the last quarter
- a Zendesk escalation marked “cancel unless fixed”
- the account’s ARR and renewal risk from the CRM
- a prior exception where a VP approved a similar discount for a similar outage pattern
Finance reviews the packet and approves. The CRM ends up with one fact: discount = 20%.
Without a context graph, the reasoning disappears. Six months later the company sees another noisy renewal and has to re-litigate the edge case in Slack.
With a context graph, the organization stores a decision record that links the account, supporting incidents, policy version, approver, comparable precedent, and final writeback. The next agent retrieves the structure of a prior judgment, not just similar text.
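A decision record for this example might look something like the following. Every field name here is a hypothetical illustration, not a real schema; what matters is that the single CRM fact (`discount = 20%`) stays linked to the evidence, policy version, approver, and precedent that made it legitimate.

```python
import json

# Hypothetical decision record for the renewal-discount example.
decision_record = {
    "decision_id": "dec-2025-0142",
    "entity": "crm:account/acme",
    "proposed_action": {"discount_pct": 20},
    "approved_action": {"discount_pct": 20, "approver": "finance:lead"},
    "policy": {"id": "renewal-discount-cap", "version": "v3",
               "cap_pct": 10, "exception_path": "service_impact"},
    "evidence": [
        {"system": "pagerduty", "ref": "SEV1-881"},
        {"system": "pagerduty", "ref": "SEV1-902"},
        {"system": "pagerduty", "ref": "SEV1-917"},
        {"system": "zendesk", "ref": "ESC-4410", "note": "cancel unless fixed"},
        {"system": "crm", "ref": "acct:acme/arr-and-risk"},
    ],
    "precedent": ["dec-2024-0387"],          # the prior VP-approved exception
    "writeback": {"system": "crm", "field": "discount", "value": 0.20},
}

print(json.dumps(decision_record, indent=2))
```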
How To Build One Without Waiting for Full Autonomy
- Instrument the orchestration layer. Every agent run that can change state should emit a structured decision record.
- Capture evidence references, not just summaries. Store the object IDs, URLs, policy versions, and approver identities that support replay.
- Separate proposed action from approved action. This is essential for evaluation and blame-free debugging.
- Model exceptions explicitly. The value is usually in the “why we deviated” path, not the happy path.
- Start human-in-the-loop. You do not need full autonomy for the graph to begin compounding.
- Turn repeated approvals into policy candidates. When the same exception recurs, promote it into an explicit rule.
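The last step above can be sketched mechanically: count how often the same (policy, exception path) pair receives human approval, and flag recurring pairs as policy candidates. The threshold and keying are assumptions for illustration.

```python
from collections import Counter

# Approved exceptions emitted by past agent runs, keyed by
# (policy_id, exception_path). Values are invented for the sketch.
approved_exceptions = [
    ("renewal-discount-cap", "service_impact"),
    ("renewal-discount-cap", "service_impact"),
    ("renewal-discount-cap", "service_impact"),
    ("sla-credit", "multi_region_outage"),
]

PROMOTE_AFTER = 3  # arbitrary threshold: three approvals suggest a pattern

candidates = [pattern for pattern, n in Counter(approved_exceptions).items()
              if n >= PROMOTE_AFTER]

print(candidates)  # -> [('renewal-discount-cap', 'service_impact')]
```

In practice the promotion itself would go through human review; the graph's job is only to surface that the same deviation keeps being approved.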
This connects directly to Institutional Memory via Learning Files. The difference is that institutional memory usually lives as documents written after the fact. A context graph captures the operational memory inline, at the moment the decision is made.
Why This Could Produce New Systems of Record
The last enterprise generation created huge companies by becoming the canonical home for objects: customer records, HR records, financial ledgers. The agent era may create another layer of systems of record by becoming the canonical home for decisions.
The winning product is unlikely to be the one that merely talks to many systems. It is more likely to be the one that sits in the workflow, captures the decision trace at commit time, and turns exceptions into searchable precedent. That graph becomes hard to replace because it answers the questions every operator eventually asks:
- Why was this allowed?
- Who approved it?
- Which evidence mattered?
- Was this consistent with prior cases?
- Can we automate this next time?
Those are control questions. Whoever owns that layer owns a large part of the autonomy stack.
Key Takeaway
Context graphs matter because agents expose a category of enterprise data that was always important but rarely persisted: the reasoning path between raw state and final action.
Systems of record store what is true now. Context graphs store how the organization decided what was permissible. For autonomous systems, that is not ancillary metadata. It is the memory that makes judgment reusable.


