From Evidence to Decisions
How the architecture extends · April 2026
The graph is the product
Idora’s value is not the verification check or the execution observation. Those are inputs. The value is the joined, compounding graph where all integrity evidence converges over time. Code platforms see code and CI. Agent platforms see their own actions. Observability tools see runtime. Pipeline tools see builds. No existing tool joins requirement conformance verification to execution provenance in one continuous, queryable model. No vendor will build it. Doing so requires treating every other vendor’s evidence as an equal input, which is structurally incompatible with being a vendor. Idora is built to be that place.
The accumulated evidence is the moat. It is proprietary to each organization and cannot be retroactively generated. A competitor who starts later can match every feature. They cannot match the data six months in. For the full compounding argument see The Compounding Moat.
The architecture was designed so that when adjacent platforms emit structured evidence, it can be ingested into the graph. Claude Code session history, Managed Agents event logs, CI execution records, and pipeline attestations can all be ingested as evidence through the same receipt pipeline. The ecosystem getting stronger makes Idora more valuable, not less.
Three layers, one graph
a16z published hard data in April 2026: 29% of Fortune 500 companies are live paying customers of AI coding startups, with the majority of that adoption in code. The population that needs Layer 1 today is not projected. It is documented.
The verification agent checks code against requirements. The execution observer captures what files went into each build, what artifacts came out, what was tested, and what was deployed. Both produce tamper-evident, cryptographically hashed receipts. The File bridges both streams: the same file that was verified is the file that was built and deployed.
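The receipt structure described above can be sketched as a hash-linked record. Everything here is an illustrative assumption, not Idora's actual schema: the field names, the SHA-256 choice, and the canonical-JSON hashing are placeholders showing how a File can bridge a verification receipt and an execution receipt in one tamper-evident chain.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class Receipt:
    """Hypothetical tamper-evident receipt (fields are illustrative only)."""
    kind: str        # "verification" or "execution"
    file_path: str   # the File entity that bridges both evidence streams
    payload: dict    # e.g. the requirement checked, or build inputs/outputs
    prev_hash: str   # hash of the previous receipt in the chain

    def hash(self) -> str:
        # Canonical JSON (sorted keys) keeps the hash stable across field order.
        body = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(body).hexdigest()

# A verification receipt and an execution receipt over the same file,
# chained so that altering either one breaks the recomputed link.
v = Receipt("verification", "src/auth.py",
            {"requirement": "REQ-42", "result": "pass"}, prev_hash="0" * 64)
e = Receipt("execution", "src/auth.py",
            {"build": "b-1017", "artifact": "auth.whl"}, prev_hash=v.hash())
assert e.prev_hash == v.hash()  # the File links verified source to built artifact
```

The design choice worth noting is the canonical serialization: without sorted keys, two semantically identical receipts could hash differently and false-positive as tampering.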
Why it compounds: Early verifications require AI inference to discover which requirements govern which files. As the graph gets denser, those mappings resolve by simple lookup instead. The product gets smarter and more efficient with every push.
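That shift from inference to lookup can be sketched as a cache-first resolver. The `infer_requirements` callable stands in for the expensive AI call; the dict stands in for the graph. Both are assumptions for illustration, not the product's internals.

```python
from typing import Callable

def make_resolver(infer_requirements: Callable[[str], list[str]]):
    """Resolve file -> governing requirements: graph lookup first, inference on a miss."""
    graph: dict[str, list[str]] = {}  # mappings accumulated from prior verifications

    def resolve(file_path: str) -> list[str]:
        if file_path in graph:                 # dense graph: cheap lookup, no inference
            return graph[file_path]
        reqs = infer_requirements(file_path)   # sparse graph: fall back to AI inference
        graph[file_path] = reqs                # the mapping compounds for later pushes
        return reqs

    return resolve

# Usage: the second resolve for the same file never pays the inference cost again.
calls = []
resolve = make_resolver(lambda p: (calls.append(p), ["REQ-7"])[1])
assert resolve("src/auth.py") == ["REQ-7"]
assert resolve("src/auth.py") == ["REQ-7"]
assert calls == ["src/auth.py"]  # inference ran exactly once
```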
Why it matters: Any repo, any pipeline, value in week one. No integration project. No org-wide rollout. The graph starts compounding from the first push. Engineering teams query it before release decisions, during incident response, and for audit readiness. The graph surfaces the answer. The team owns the decision.
Coding agents can query the graph from Layer 1. Before modifying a file, an agent reads which requirements govern it and whether any violations are active. This is distinct from agent accountability, which is the Layer 3 story. Layer 1 gives agents integrity context. Layer 3 makes agent actions accountable in the graph. Both matter. Only one requires waiting.
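An agent's pre-flight read against the graph might look like the following. The query shape, field names, and `safe_to_modify` heuristic are hypothetical; the point is only that the integrity context is a lookup the agent performs before writing code.

```python
def preflight(graph: dict, file_path: str) -> dict:
    """Hypothetical pre-flight query: what governs this file, and is anything violated?"""
    node = graph.get(file_path, {})
    active = [v for v in node.get("violations", []) if v["status"] == "active"]
    return {
        "requirements": node.get("requirements", []),
        "active_violations": active,
        "safe_to_modify": not active,
    }

graph = {
    "src/auth.py": {
        "requirements": ["REQ-42"],
        "violations": [{"id": "V-3", "status": "active"}],
    }
}
ctx = preflight(graph, "src/auth.py")
assert ctx["safe_to_modify"] is False  # the agent sees the violation before editing
```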
“Was the code that shipped also the code that was verified? Show the full path from requirement to deployed artifact.”

Inside a single repo, the File bridges verification and execution. Across repos, the shared entity becomes the Service (a roadmap node type extending the current graph): the deployable unit that depends on other services. When Service A depends on Service B, a deploy receipt captures the integrity state of Service B at that moment, including evidence from external sources: platform attestations, pipeline metadata, dependency scan results, agent decision logs. Each external source is a lightweight connector. Once it emits structured evidence, ingesting it makes the graph denser at near-zero marginal cost.
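Capturing a dependency's integrity state at deploy time could work as follows. The `Service` shape and the receipt fields are roadmap assumptions sketched for illustration, not a shipped schema.

```python
import hashlib
import json

def deploy_receipt(service: str, deps: dict[str, dict]) -> dict:
    """Snapshot the integrity state of every dependency at the moment of deploy."""
    snapshot = {
        name: {
            "latest_receipt": state["latest_receipt"],  # hash of dep's newest receipt
            "open_violations": state["open_violations"],
        }
        for name, state in deps.items()
    }
    body = json.dumps({"service": service, "deps": snapshot}, sort_keys=True)
    return {
        "service": service,
        "deps": snapshot,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    }

# Deploying Service A records Service B's integrity state as it stood right then,
# so the question "was B clean when A shipped?" stays answerable forever.
r = deploy_receipt("service-a",
                   {"service-b": {"latest_receipt": "ab12cd", "open_violations": []}})
assert r["deps"]["service-b"]["open_violations"] == []
```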
Why it locks in: Cross-service integrity state is not continuously maintained by any existing tool. Reconstructing it after the fact requires significant engineering labor. The receipt architecture extends naturally to cross-repo and cross-service evidence using the same pipeline. Two regulatory forcing functions are now active. EU CRA vulnerability reporting obligations begin September 11, 2026, creating immediate demand for provable build and dependency traceability. Full CRA compliance applies December 11, 2027. Decision context accumulated since Layer 1 means each service boundary already carries rich evidence when expansion begins.
“Is it safe to deploy Service A, given the integrity state of everything it depends on?”

Compliance audits query the graph. Deployment decisions check the graph. Agent accountability resolves in the graph. As AI agents make autonomous decisions across the delivery lifecycle, every agent action at a pipeline boundary produces a receipt verified against the relevant policy.
Why the structural position is established now: Anthropic’s Managed Agents, launched April 2026, produces a full append-only event stream of agent execution within their platform. It stops at the Anthropic boundary. It does not connect to the requirement that started the work or the artifact that shipped. The most capable AI company in the world building the best possible within-boundary session capture still leaves the cross-boundary integrity question open. No single agent platform logs what all agents did across all boundaries. The vendor who sells the agent cannot also be the trusted auditor of the agent. An independent integrity layer does not have that conflict. That position is not a 2028 prediction. It is confirmed today.
Why it gets more capital efficient: The mix shifts from generating evidence (AI inference) toward ingesting evidence from external sources (structured data) and serving queries (pure software). Cost to serve decreases while graph value increases.
“Prove that every change in this deployment chain, human and agent, conformed to policy.”

Why this is one graph, not three products
Each layer expands scope, not architecture. Layer 1 captures verification and execution evidence linked to files. Layer 2 adds a Service entity, cross-repo connections, and external evidence ingestion using the same receipt pattern. Layer 3 adds agent accountability receipts linked to the same Services and Files. No new database. No separate product. Decision context (gate states, overrides, approvals) is a continuous enrichment across all layers, not a discrete step.
This is what distinguishes the graph from per-event attestation frameworks. Attestations are records of individual events. The graph preserves the relationships between those events across time, repos, and deployment boundaries, compounding with every push.
A decision trace that says “this deployment was approved” is an assertion. A decision trace anchored to a cryptographic receipt that links the approved artifact to its verified source files through a tamper-evident chain is proof. The graph is where assertions become proof.
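The difference between assertion and proof is mechanically checkable: a verifier can rewalk the chain and recompute each link. A minimal sketch, assuming receipts that embed the previous receipt's hash (the receipt contents here are invented examples):

```python
import hashlib
import json

def receipt_hash(receipt: dict) -> str:
    return hashlib.sha256(json.dumps(receipt, sort_keys=True).encode()).hexdigest()

def chain_is_intact(receipts: list[dict]) -> bool:
    """True iff every receipt's prev_hash matches the recomputed hash of its predecessor."""
    return all(
        receipts[i]["prev_hash"] == receipt_hash(receipts[i - 1])
        for i in range(1, len(receipts))
    )

a = {"event": "verified src/auth.py against REQ-42", "prev_hash": "0" * 64}
b = {"event": "built artifact b-1017 from src/auth.py", "prev_hash": receipt_hash(a)}
assert chain_is_intact([a, b])

# Tampering with an earlier receipt changes its recomputed hash,
# so the next link no longer matches: the assertion fails as proof.
a["event"] = "verified something else"
assert not chain_is_intact([a, b])
```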
We build Idora with Idora
We use our own integrity graph internally. Every AI-assisted code change is verified against our own specifications. Every build and deployment is observed and recorded. We use Claude Code for a meaningful share of our development. Before any agent session that touches a file with an active requirement, the pre-flight query runs: the agent reads prior conformance outcomes, active violations, and failure patterns before writing a line of code. As the internal graph compounds, verification gets more efficient and agent sessions get better. The integrity graph captures what was verified, what was built, and what shipped, so we know the product matches its specifications at every stage. When a team asks “does this work?” the answer is: we build with it every day. Here’s our graph.
Where this leads
In April 2026 alone: a16z confirmed 29% of the Fortune 500 are live on AI coding tools. Anthropic confirmed the boundary gap with Managed Agents. A leading seed investor named the vendor-neutral layer as the winning category. A testing platform launched with the same founding premise. EU CRA reporting obligations are four months away. Every signal feeds the same conclusion. More AI-generated code means more changes that need verification. More agent autonomy means more decisions that need accountability. More regulatory pressure means more demand for the joined, auditable record. More vendors producing structured evidence means more input streams at lower cost to serve. The graph gets denser and the moat gets deeper with every layer.
Start with the requirement that authorized the change, the code that implemented it, the build that produced it, and the deployment that shipped it. That chain is the proof. Expand as the ecosystem feeds the graph. Become the integrity layer that no single vendor can replace. One graph. One traversal. Permanent.
The long-term position is not verification infrastructure. It is the system of record for governed software delivery. The specification, the verification, the conformance record, and the proof are all the same graph. Every organization running AI-assisted development will eventually need to be authoritative about what was decided, what was built, and whether the two match. The category is forming now. The platform is what it becomes.