The Debt You Don’t See Coming

Technical debt was a known mess. Comprehension debt is invisible until it isn’t — and it’s accumulating quietly in codebases everywhere.

Software developers have spent two decades managing technical debt. Shortcuts taken to hit a deadline. Library updates skipped to avoid breaking changes. Systems that only one or two people truly understood, which, ironically, made those people irreplaceable and very well paid. It was messy, but it was a known mess.

Now a different kind of debt is accumulating quietly in codebases everywhere, and most teams have no idea it is happening. It is called comprehension debt, and it is far more dangerous than anything that came before it.

What Comprehension Debt Actually Is

Technical debt lives in the code. Comprehension debt lives in the gap between how much code your system contains and how much of it any human on your team actually understands.

The irony is that AI-generated code often looks excellent on the surface. It follows conventions. It is clean and well-structured. It passes review almost too easily. But underneath, the team’s collective understanding of the system is quietly rotting. You can end up with thousands of lines of logic that no one can fully explain, and no one thought to question, because it looked fine.

The False Productivity Trap

A recent study compared two groups of engineers learning a new library. One group used AI to generate solutions directly. The other used AI as a thinking partner, asking questions and learning concepts rather than just accepting outputs.

Both groups completed their tasks in roughly the same time. On the surface, identical productivity.

Then they were tested on debugging tasks.

The group that had let AI generate the code performed significantly worse. And this is the part that matters: debugging is where software development actually happens. Not in the happy path. Not in the clean initial implementation. In the 3am incident. In the regression that appeared two sprints after a refactor. In the edge case that only shows up at scale.

If you cannot explain why a piece of code exists, you cannot fix it when it breaks. That is not a philosophy. That is a practical constraint.

The Bottleneck That Was Actually a Feature

Writing code was slow. Reviewing code was even slower. For years, developers complained about this. It felt like friction.

It was not friction. It was comprehension being forced into the process.

When a pull request sat in review, engineers read each other’s code. They asked questions. They built a shared mental model of the system. The slowness was doing work that no one had named or valued, because it had always just been there.

AI has removed that bottleneck entirely. Code is now produced faster than any team can meaningfully review it. And because it looks clean, the assumption is that it is correct and that understanding it is someone else’s problem, deferred to later — a later that keeps getting pushed back.

The Incentive Problem Nobody Talks About

The companies building these AI tools have a direct financial interest in you generating as much code as possible. More generation means more tokens. More tokens means more revenue. Some industry leaders are openly suggesting that high-performing engineers should be burning hundreds of thousands of dollars in AI compute every year. AI reviewing code written by another AI has been presented as a solution rather than a symptom.

This is not a software quality story. It is a customer acquisition story.

You are being encouraged to move fast and generate volume, not because it serves your systems or your team, but because it serves a business model built on token consumption.

What the Data Actually Shows

The latest DORA report captures a contradiction that should be alarming to anyone managing an engineering organisation. Around 90 percent of developers now use AI daily. More than 80 percent report feeling more productive. And yet a substantial proportion of those same developers admit they do not trust the code the AI produces.

The metrics tell the same story. Throughput is up. Stability is down. Knowledge retention is declining. Costs are rising. Teams are shipping more changes and creating more incidents. Management sees impressive velocity charts while the actual structural integrity of the software quietly degrades.

The cycle looks like this: move fast and create problems, use AI to fix those problems quickly, generate new problems in the process, repeat. It looks like progress from the outside. It is not.

The Specification Fantasy

A common response to all of this is to suggest that humans should focus on specifications while AI handles implementation. Define what the system should do. Let the machine work out how.

This sounds reasonable until you have actually built software.

Every real implementation involves thousands of small decisions that are not captured in any specification. Which data structure to use. How to handle the error cases that the spec never mentioned. What performance trade-off makes sense for this particular context. How this component needs to behave when four other components it depends on fail in sequence.

These decisions shape the system. If you delegate them entirely, you do not have a specification. You have a wish. And you have no understanding of what you actually got in return.
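A small, hypothetical example makes the point. Suppose the spec says only "fetch a record by id, retrying on transient failures." A sketch of one possible implementation follows; every commented line is a decision the spec never made, and the names (`TransientError`, `fetch_with_retry`) are invented for illustration.

```python
import random
import time

class TransientError(Exception):
    """Stand-in for whatever this system considers a retryable failure."""

def fetch_with_retry(fetch, record_id, max_attempts=3, base_delay=0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch(record_id)
        except TransientError:
            # Decision: which errors are "transient"? Here, only this one
            # exception type; a timeout from another layer would propagate.
            if attempt == max_attempts:
                # Decision: how many attempts count as "retrying"? Three,
                # apparently. The spec never said.
                raise
            # Decision: exponential backoff with jitter, not a fixed delay.
            time.sleep(base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5))

# Usage: a fake fetch that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_fetch(record_id):
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError()
    return {"id": record_id}

result = fetch_with_retry(flaky_fetch, 42)
```

None of those choices is wrong, but each one is load-bearing. If no human made them, no human knows why they are what they are.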

AI as a Multiplier, Not a Leveller

The most useful frame for thinking about AI in engineering is not as a leveller that makes everyone equally capable. It is a multiplier of whatever capability already exists.

Teams with strong fundamentals, clear architecture, and genuine shared understanding of their systems benefit enormously from AI. It removes the tedious and repetitive parts of the work. It accelerates the things they already know how to do.

Teams with fragmented knowledge, accumulated shortcuts, and unclear ownership get something different. AI scales the mess. It produces more of the thing they were already struggling with, faster, and at a size that becomes genuinely unmanageable.

The Ten-Year Risk

The economic model behind today’s AI tools depends on companies losing money to acquire users. Pricing is artificially low. The goal is dependency.

If the software industry stops developing junior engineers because AI appears to do their job, the pipeline of talent disappears. If senior engineers stop debugging and start delegating, the critical thinking skills that made them senior atrophy. You end up with large, sprawling systems that no human fully understands, maintained by tools that have raised their prices to reflect the fact that you now have no alternative.

That is not a dystopia. That is a logical endpoint of a set of incentives that are clearly visible right now.

What to Actually Do

None of this requires stopping the use of AI. It requires using it with the same discipline you would apply to any powerful tool.

If AI generates a solution, you should be able to explain every line before it ships. Not summarise it. Explain it. If you cannot, you do not understand it, and you are not ready to be responsible for it in production.

Code review standards should not drop because volume has increased. If anything, the pressure to review more carefully rises when you have less certainty about how the code was produced.

The goal was never to see how many tokens you could consume. The goal was always to build systems that work, that your team understands, and that you can maintain and improve over time.

Speed is only valuable when you know where you are going. If you are generating enormous amounts of code faster than anyone can understand it, you are not moving quickly. You are just building a larger cliff to fall off later.