Where Idora Sits in the Stack

The integrity layer for software delivery · April 2026


The enterprise AI coding stack now has three distinct layers. Most teams have the first two. The third is where the category is forming.

Layer 01
Execution
Where work happens. AI coding agents write code, tickets hold requirements, CI runs builds, pipelines deploy artifacts. Each tool does its job well within its own boundary.
Claude Code · Cursor · GitHub Copilot · Jira · GitHub Actions · CI/CD
Layer 02
Orchestration
Routes work between tools and models. Decides which agent runs, which model handles which task, how requests flow across vendors. Answers: what ran, and when.
Managed Agents · LangChain · AI Gateways · A2A · MCP
Layer 03
Integrity
Connects what every tool produced into one tamper-evident, compounding record. Answers the question no other layer can: did what shipped match what was decided?
Idora

The execution layer

Claude Code is now the most-used AI coding tool in engineering, ahead of both GitHub Copilot and Cursor. Gartner reports that 78% of Fortune 500 companies have some form of AI-assisted development in production. The execution layer is established, fast-moving, and generating more AI-authored code than any review process was designed to absorb.

The orchestration layer

Anthropic Managed Agents, LangChain, enterprise AI gateways, and the emerging A2A and MCP standards all operate here. This layer is maturing rapidly and answers a real question: which agent ran, which model handled the request, and when it completed. It does not answer whether the output matched the requirement that started the work.

The integrity layer · Idora

Every team can tell you whether tests passed. Almost none can tell you whether what shipped matched what was decided. The requirement lived in Jira. The build lived in CI. The deployment lived in the pipeline. Each system produced an accurate record of what it saw. None of them could see what the others saw. When the question arrives, the answer has to be reconstructed from multiple systems, every time. Idora makes it a single query.
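To make that concrete, here is the shape of that single query. Idora's actual interface is not documented in this piece, so everything below is a hypothetical sketch: the trace call, the step kinds, and the PROJ-142 key are invented for illustration.

```python
# Hypothetical sketch of a requirement-to-artifact trace. The names here
# (TraceStep, trace, PROJ-142) are illustrative, not a documented Idora API.
from dataclasses import dataclass

@dataclass
class TraceStep:
    kind: str     # "requirement" | "commit" | "build" | "deploy"
    ref: str      # ticket key, commit SHA, CI run, or artifact digest
    receipt: str  # hash of the tamper-evident receipt for this step

def trace(requirement: str, graph: dict[str, list[TraceStep]]) -> list[TraceStep]:
    """Walk the delivery chain from a requirement to its deployed artifact."""
    return graph.get(requirement, [])

# One query replaces reconstruction across Jira, CI, and the pipeline:
graph = {
    "PROJ-142": [
        TraceStep("requirement", "PROJ-142", "a9f3..."),
        TraceStep("commit", "4be1c20", "77d0..."),
        TraceStep("build", "ci-run-5812", "c41a..."),
        TraceStep("deploy", "sha256:9e2f...", "f08b..."),
    ],
}
for step in trace("PROJ-142", graph):
    print(step.kind, step.ref)
```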

This is not observability. Observability tells you what your systems are doing. The integrity layer tells you whether what your systems shipped matched what was decided. Those are different questions. Every significant event in the delivery pipeline produces a tamper-evident receipt. Requirements from any source. Code verified against them. Builds, tests, and deployments recorded. Everything connected in one compounding graph. One query connects requirement to deployed artifact. Permanent.
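Tamper-evident here means the same hash-chaining that makes a Git history hard to rewrite silently: each receipt commits to the one before it, so altering any past record breaks every hash after it. A minimal sketch, assuming a hash-linked design; the field names are ours, not Idora's.

```python
# Minimal hash-linked receipt chain. Field names are illustrative.
import hashlib
import json

def make_receipt(event: dict, prev_hash: str) -> dict:
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain: list[dict]) -> bool:
    prev = "0" * 64  # genesis hash
    for r in chain:
        body = {"event": r["event"], "prev": r["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or r["hash"] != expected:
            return False
        prev = r["hash"]
    return True

chain, prev = [], "0" * 64
for event in [{"kind": "requirement", "ref": "PROJ-142"},
              {"kind": "commit", "ref": "4be1c20"},
              {"kind": "deploy", "ref": "sha256:9e2f"}]:
    r = make_receipt(event, prev)
    chain.append(r)
    prev = r["hash"]

assert verify(chain)
chain[1]["event"]["ref"] = "0000000"  # tamper with one past record...
assert not verify(chain)              # ...and the whole chain fails verification
```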

The graph compounds with every push. Early on it learns which requirements govern which code. Over time it knows, and the cost of maintaining that knowledge approaches zero. An agent querying the graph before modifying a file knows what has already been decided about that file.
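One way to picture that pre-edit check: an index from file paths to the standing decisions that govern them. The index shape and names below are assumptions for illustration, not Idora's schema.

```python
# Hypothetical sketch: an agent consults the graph before touching a file.
governing: dict[str, list[tuple[str, str]]] = {
    "src/billing/invoice.py": [
        ("PROJ-98", "Invoices round half-up to two decimals"),
        ("PROJ-142", "All totals include VAT at the buyer's locale rate"),
    ],
}

def decisions_for(path: str) -> list[tuple[str, str]]:
    """What has already been decided about this file?"""
    return governing.get(path, [])

# Before modifying invoice.py, the agent sees the standing decisions:
for key, decision in decisions_for("src/billing/invoice.py"):
    print(f"{key}: {decision}")
```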

Why no vendor builds this

Every vendor captures what happens inside their boundary. Anthropic Managed Agents produces a full append-only event stream of Claude agent execution within their system. GitHub captures commits and CI outcomes within theirs. Each is the best possible version of within-boundary capture. The completeness of any single vendor's data makes no difference to the question no vendor can answer: what was decided across all of them, and did all of it ship correctly.

No vendor will build the cross-boundary integrity layer. Anthropic will not build the layer that gives equal weight to Cursor output alongside their own agents. GitHub will not connect Jira requirements to Claude Code session history. Cross-boundary capture requires treating every vendor's data as an equal input, which is structurally incompatible with being a vendor. Vendor-neutrality is not a feature any single vendor will ship. It is the defining property of the layer above all of them.

Why this is urgent now

Regulatory
Deployer liability is settled
The EU AI Act reaches full applicability in August 2026, and its core obligations are already in force. Regulatory frameworks in the US are establishing that enterprises bear full responsibility for AI-generated code deployed in their products, regardless of which tool produced it.
Volume
Output is outpacing governance
AI-assisted code has 1.7x more issues than human-written code when not paired with structured review. 70% of engineers juggle two to four AI tools simultaneously. More agents, more output, no persistent cross-boundary record connecting any of it.

The Git parallel

Before Git
Change history reconstructed or lost
Teams knew code changed. The record of how, when, and why required manual effort every time the question was asked. The answer disappeared when the investigation ended.
After Git
Change record is permanent infrastructure
Every change is a first-class, tamper-evident record. No reconstruction. No loss. The history compounds with every commit. Every team treats it as non-negotiable.

Idora does for the delivery chain what Git did for the codebase. The requirement, the verification, the build, and the deployment become a first-class, permanent record rather than something reconstructed after the fact or lost when the session ends.

A new layer in the stack

Continuous Integration gave teams confidence that code compiles. Continuous Delivery gave teams confidence that code deploys. Neither answers whether what shipped matched what was decided.

Continuous Integrity is the third layer. The permanent, queryable record connecting requirement to release. The category is forming. Idora is building it.

Idora runs on Idora. Every push compounds our own integrity graph.

Building with AI agents and want to run Idora against your own delivery chain? We want to hear from you.