The Compounding Moat

Idora · April 2026


The Argument
Mechanism: Every push makes the graph denser and more efficient to operate, across both the verification and execution evidence streams.

Velocity: Idora sits inside the execution path, so the graph compounds on every push rather than on scan schedules or user activity. Switching-cost grade arrives in months: after six weeks of pushes, the MAPS_TO graph has accumulated routing signal that a competitor can only reproduce by replaying the same volume of pushes.

Moat: By month 6, the graph holds an integrity history no engineer on the team possesses and that cannot be rebuilt from scratch.

The Mechanism

Two streams. One shared key. Denser with every push.

Most tools that track software integrity sit outside the development pipeline, running on schedules or waiting to be invoked. Idora instruments the push itself. This placement is the reason the graph compounds faster than adjacent products.

When code is pushed, Idora captures two streams independently. First, it checks the changed files against their governing requirements, which it reads from Jira tickets, markdown specs, Kiro specs, and regulatory documents. Requirements are decomposed into atomic seams, each of which maps to one or more files; the seam is the unit of verification and the unit of compounding. Second, it records what was built, tested, and deployed. Both observations write into the same graph, connected through the code files they share: the file verified against a requirement is the same file the build consumed. One hop connects requirement verification to execution evidence, a join no build-provenance or AppSec tool makes.
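The shared-key join can be sketched as a toy graph in Python. All names here (the class, seam IDs, build IDs) are illustrative assumptions, since Idora's actual schema is not described in this document; the sketch only shows the one-hop path from a requirement seam through a shared file node to build evidence:

```python
from collections import defaultdict

class EvidenceGraph:
    """Toy model: requirement seams and execution evidence joined on file paths."""

    def __init__(self):
        self.seam_to_files = defaultdict(set)    # verification stream
        self.file_to_builds = defaultdict(list)  # execution stream

    def record_verification(self, seam_id, file_path):
        self.seam_to_files[seam_id].add(file_path)

    def record_build(self, file_path, build_id):
        self.file_to_builds[file_path].append(build_id)

    def builds_for_seam(self, seam_id):
        """One hop: requirement seam -> shared file nodes -> build evidence."""
        return {b for f in self.seam_to_files[seam_id]
                  for b in self.file_to_builds[f]}

g = EvidenceGraph()
g.record_verification("AUTH-12.seam-3", "src/auth/session.py")  # verification stream
g.record_build("src/auth/session.py", "build-7741")             # execution stream
print(g.builds_for_seam("AUTH-12.seam-3"))  # {'build-7741'}
```

Both writers touch the same `file_path` key, which is what makes the join a single hop rather than a cross-system correlation.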

Push 1 vs. month 6:

Requirement checks. Push 1: AI inference establishes each connection for the first time. Month 6: direct lookup for confirmed connections, with marginal cost near zero.

Pipeline provenance. Push 1: the first build, test, and deploy receipts form. Month 6: every shipped artifact is traceable to the files that produced it, across commits.

Violation history. Push 1: the first requirement violations are surfaced. Month 6: a history of what broke, when, and which deploy resolved each one.

Illustrative projections based on the flywheel model. Actual figures vary by repo size, push frequency, and spec quality.

Within the verification stream, a specific efficiency builds over time. The first time Idora checks a requirement against a code file, it uses an AI model to establish the connection. By the tenth check, the graph routes directly: for established connections, the marginal cost of each subsequent verification approaches zero.
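A minimal sketch of that cost curve, with a stand-in `infer_fn` for the expensive AI model call (the class, threshold, and names are hypothetical, not Idora's documented behavior): once a connection is confirmed, subsequent checks route through a direct lookup and skip inference entirely.

```python
class VerificationRouter:
    """Toy model of the claimed cost curve: the first check pays for AI
    inference; confirmed connections route via direct lookup."""

    def __init__(self, infer_fn, confirm_after=1):
        self.infer_fn = infer_fn            # expensive model call (stand-in)
        self.confirm_after = confirm_after  # confirmations before direct routing
        self.hits = {}                      # (seam, file) -> confirmation count
        self.inference_calls = 0

    def check(self, seam_id, file_path):
        key = (seam_id, file_path)
        if self.hits.get(key, 0) >= self.confirm_after:
            self.hits[key] += 1
            return True                     # direct lookup: near-zero marginal cost
        self.inference_calls += 1           # cold path: pay for inference
        linked = self.infer_fn(seam_id, file_path)
        if linked:
            self.hits[key] = self.hits.get(key, 0) + 1
        return linked

router = VerificationRouter(infer_fn=lambda seam, path: True)
for _ in range(10):
    router.check("PAY-4.seam-1", "src/billing/charge.py")
print(router.inference_calls)  # 1 -- only the first check paid for inference
```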

The graph topology does not change between push 1 and month 6. The evidence density does. The graph grows not by adding new types of nodes but by accumulating more proof on the same structure, with each proof reducing the cost of the next one.

Requirements change over time. The graph handles this correctly. When a requirement source is re-ingested after an update, unchanged seams retain their full routing signal at zero additional cost. Similar seams inherit seeded routing from prior versions. Only genuinely new seams cold-start. The compounding is resilient to requirement evolution, not fragile to it.
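The three re-ingestion cases above (unchanged, similar, genuinely new) can be sketched as a hash-and-match pass. The hashing scheme, the 0.5 seeding factor, and the `similar` matcher are illustrative assumptions, not Idora's documented mechanics:

```python
import hashlib

def hash_seam(text):
    """Content hash used as a seam identity key (illustrative)."""
    return hashlib.sha256(text.encode()).hexdigest()

def reingest(old_signal, new_seams, similar):
    """Toy re-ingestion pass. old_signal maps seam hash -> routing signal;
    new_seams maps seam hash -> seam text; `similar` is a hypothetical
    matcher returning the old hash of a reworded seam, or None."""
    carried, seeded, cold = {}, {}, []
    for h, text in new_seams.items():
        if h in old_signal:
            carried[h] = old_signal[h]           # unchanged: full signal, zero cost
            continue
        match = similar(text, old_signal)
        if match is not None:
            seeded[h] = old_signal[match] * 0.5  # similar: inherit seeded signal
        else:
            cold.append(h)                       # genuinely new: cold start
    return carried, seeded, cold

old = {hash_seam("Sessions expire after 30 minutes"): 0.9}
new_texts = ["Sessions expire after 30 minutes",   # unchanged seam
             "Passwords rotate quarterly"]          # new seam
new = {hash_seam(t): t for t in new_texts}
carried, seeded, cold = reingest(old, new, similar=lambda t, o: None)
print(len(carried), len(seeded), len(cold))  # 1 0 1
```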

The Velocity

The trigger sets the ceiling

Every infrastructure graph compounds on a trigger. The trigger determines how fast evidence accumulates and sets a hard ceiling on compounding velocity. A fast-moving engineering team with meaningful AI code share generates compounding events continuously, at a cadence no scan schedule or usage pattern can match.

Scan-triggered
AppSec and dependency graphs
Cadence: daily or weekly

Evidence accumulates in cycles, bounded by scan cadence regardless of how frequently the team ships.

Cycode · Snyk · Apiiro

Behavior-triggered
Enterprise knowledge graphs
Cadence: user-query dependent

Requires active usage to compound. A team that has not used the product has not generated evidence.

Glean · Guru · Notion AI

Push-triggered
Idora
Cadence: every push

No schedule and no adoption required; compounding scales with development velocity. As AI code share drives push frequency up, so does the compounding rate.

The advantage widens further for multi-vendor teams. A team running Claude Code, Cursor, and GitHub Copilot has one Idora graph accumulating evidence across all three simultaneously. No single-vendor product replicates this: Anthropic accumulates within its platform boundary, GitHub within its own; Idora accumulates across all of them. As AI agent adoption increases, push frequency rises with it and the compounding rate accelerates. a16z published hard data in April 2026: 29% of the Fortune 500 are live paying customers of AI coding startups, with the majority of that adoption in code. The population for whom the graph is actively compounding today is not theoretical. It is documented.

Week 1

Graph is structurally complete but sparse. Every connection between a requirement and a code file is being established for the first time. Each subsequent push builds on this foundation.

Month 2

Established connections route directly, bypassing AI inference. Requirement-to-file links accumulate confidence. Violation patterns begin to emerge across the deployment history.

Month 6

Verification cost at established connections has dropped to near zero. Every deployed artifact is traceable to its source files and the requirements those files were checked against. No other system assembles this evidence in a single queryable structure.
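The month-6 query described above can be sketched as a two-edge walk from a deployed artifact back to its source files and the requirement seams those files were checked against. The edge lists and IDs are toy placeholders for whatever the real graph stores:

```python
# Toy edge lists standing in for the evidence graph (hypothetical IDs).
artifact_to_files = {
    "release-2026.04.1": ["src/auth/session.py", "src/billing/charge.py"],
}
file_to_seams = {
    "src/auth/session.py": ["AUTH-12.seam-3"],
    "src/billing/charge.py": ["PAY-4.seam-1"],
}

def trace(artifact_id):
    """Artifact -> source files -> governing requirement seams."""
    files = artifact_to_files.get(artifact_id, [])
    return {f: file_to_seams.get(f, []) for f in files}

print(trace("release-2026.04.1"))
```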

The graph continues compounding beyond month 6. Comparable products reach switching-cost grade in 12 to 18 months at median adoption.

The Moat

What starting over costs

A team starting fresh six months later can replicate the graph's structure. They cannot replicate its history. Three things accumulate in the graph that exist nowhere else: the history of which requirements were broken and when, the chain showing which deploy resolved each violation, and the confidence built from hundreds of confirmed connections between requirements and code.

A later competitor gets the same graph structure from day one, all current requirement-to-code connections, and future compounding from their first push. What they cannot get: your violation history (which requirements broke and when), your resolution lineage (which deploy resolved each one), and the routing confidence built from your pushes.

A competitor can start building today, but the violation history and resolution lineage from your first month cannot be reconstructed: they begin behind and remain behind. This knowledge is not in the code review history or in documentation. It exists because every push was captured in a system designed to hold it, from inside the pipeline at the moment it happened, not assembled after the fact.

The compounding graph is not a passive audit trail. Before a coding agent (Claude Code, Cursor, or any AI coding tool) writes a line of code, it can query the graph for the full decision history of every file it is about to touch: prior requirements, prior conformance outcomes, prior failure patterns, what prior sessions tried before arriving at the current state. An agent writing from six months of accumulated institutional memory produces better results than an agent starting from zero. A team that switches loses that pre-flight advantage immediately and permanently. The graph is not just hard to leave because of the history it holds. It is hard to leave because every future session actively depends on that history.
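A sketch of that pre-flight query, assuming a hypothetical per-file record shape (requirements, violations, resolutions); the real graph API is not described in this document:

```python
def preflight_context(graph, files):
    """Toy pre-flight query: gather the decision history an agent would
    read before editing each file (record shape is an assumption)."""
    return {
        f: {
            "requirements": graph.get(f, {}).get("requirements", []),
            "violations":   graph.get(f, {}).get("violations", []),
            "resolutions":  graph.get(f, {}).get("resolutions", []),
        }
        for f in files
    }

graph = {
    "src/auth/session.py": {
        "requirements": ["AUTH-12.seam-3"],
        "violations":   [{"when": "2026-02-11", "seam": "AUTH-12.seam-3"}],
        "resolutions":  [{"deploy": "release-2026.02.3"}],
    },
}
ctx = preflight_context(graph, ["src/auth/session.py"])
```

An agent would fold `ctx` into its working context before touching the file, which is the "pre-flight advantage" the paragraph above describes.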

The switching cost is not migration complexity. It is the permanent loss of that history. By month 6, the graph holds an accurate picture of the codebase’s integrity history that no engineer on the team possesses and that no later entrant can replicate. It cannot be exported, approximated, or rebuilt. The organization that starts today is building an asset that compounds continuously and widens with every push. Every push forward is a push further ahead.