Your Code Is Disposable.
Get Used to It.
The median line of code lives 2.4 years. The share of new code revised within two weeks has jumped nearly 50% since AI assistants arrived. 45% of features are never used. Your code was always temporary — AI just made the disposal cost approach zero. The spec is the product. Everything else is a build artifact.
Code is a commodity. It always was — you just couldn’t see it because generation was expensive. Now it’s cheap. 30% of Google’s new code is AI-generated. Microsoft reports 20–30%. Meta’s CEO says half within a year. When regenerating a module takes minutes instead of months, the code isn’t the product. The specification is. The organisations that figure this out will build faster, cheaper, and more reliably than those still treating code like a cathedral. The rest will spend fortunes maintaining disposable artifacts.
Code Is a Commodity. Specification Is the Competitive Advantage.
This is not a prediction about some distant future. It is an observation about the present, extrapolated one step forward. If 30% of Google’s new code is AI-generated today — up from 25% just six months prior — and that number is climbing quarter-over-quarter, the trajectory is obvious. Code generation is becoming trivial. What remains non-trivial is knowing what to build, why to build it, and how it should behave under every conceivable condition.
Sundar Pichai announced the 30% figure on Google’s April 2025 earnings call. Satya Nadella reported 20–30% at LlamaCon. Zuckerberg predicted half of Meta’s development within a year. Anthropic’s Dario Amodei says AI will write “essentially all of the code.” Microsoft’s CTO Kevin Scott predicted 95% within five years.
These are not startup founders pitching investors. These are the CEOs of the companies that employ the largest concentrations of software engineers on earth, making public commitments on earnings calls.
CEO statements on earnings calls are marketing as much as reporting. The 30% figure at Google means AI-suggested code that engineers accepted — not autonomous generation. Acceptance doesn’t mean the code shipped without review. The actual “from-scratch AI” percentage is lower. But the trend direction is unmistakable, and the rate of change matters more than the absolute number.
Your Code Was Always Disposable. You Just Couldn’t Afford to Admit It.
Software engineers treat code as if it were architecture — permanent structures designed to endure. The empirical reality is closer to scaffolding.
A study published in Empirical Software Engineering tracked 3.3 billion source code lifetime events across 89 revision control repositories. The finding: the median lifespan of a line of code is approximately 2.4 years. Half of the lines written today will be gone or substantially rewritten within two and a half years. The function your senior engineer laboured over for a week is statistically unlikely to survive intact through the next two annual planning cycles.
This is not a failure of engineering discipline. It is a structural feature of software. Requirements change. Dependencies update. APIs evolve. Markets shift. Security vulnerabilities are discovered. The code adapts or dies. Usually it dies.
The Rewrite Is the Norm, Not the Exception
The software industry treats rewrites as extraordinary events — failures of planning, symptoms of technical debt, admissions of defeat. The data tells a different story. A McKinsey study of more than 5,000 large IT projects found they ran on average 45% over budget and 7% over time, delivering 56% less value than anticipated. More than 60% of large-scale rewrites exceed budget or timeline. The Standish Group’s CHAOS report, analysing 50,000 projects, found 66% end in partial or total failure.
These are not occasional disasters. They are the industry’s default operating mode. Software gets written, becomes unmaintainable, and gets rewritten. The question has never been whether your code will be replaced. It has always been when.
The Sunk Cost Delusion
The emotional attachment to existing code is the software industry’s most expensive cognitive bias. “We can’t throw this away — we spent two years building it.”
That’s textbook sunk cost fallacy. The two years are gone regardless. The relevant question is not what the code cost to produce but what it will cost to maintain versus replace. And in a world where AI can regenerate a substantial portion of a codebase in hours rather than months, the replacement cost is collapsing toward zero.
The emotional transition from “code as asset” to “code as disposable output” will be the hardest part for most engineering organisations. It requires abandoning an identity that has defined the profession since its inception: the idea that writing good code is the primary value a software engineer provides.
AI Is Making Your Code Even More Disposable — and the Data Proves It
If code is becoming disposable, we should expect to see it being disposed of faster. The data confirms this.
GitClear analysed 211 million changed lines of code authored between 2020 and 2024 — the largest known database of structured code change data used to evaluate code quality. The findings:
- Code churn is spiking — 7.9% of all newly added code was revised within two weeks in 2024, up from 5.5% in 2020
- Copy-paste is replacing craftsmanship — “copy/pasted” code rose from 8.3% in 2021 to 12.3% in 2024
- Refactoring is collapsing — code classified as “moved” (a proxy for refactoring) dropped from 24.1% in 2020 to 9.5% in 2024
- Duplication is exploding — duplicate code blocks rose eightfold compared to previous years
GitClear’s characterisation is precise: “AI-generated code resembles an itinerant contributor, prone to violate the DRY-ness of the repos visited.” The composition of AI-generated code is, in their words, “similar to a short-term developer that doesn’t thoughtfully integrate their work into the broader project.”
When regenerating a module takes minutes instead of months, the entire economic calculus inverts. The expensive artifact is no longer the code — it’s the knowledge of what the code should do. Google’s AI toolkit successfully generated the majority of code for internal migrations, with 80% of code modifications being AI-authored and a 50% reduction in migration time. Airbnb migrated 3,500 test files in six weeks using LLM automation — down from an estimated 1.5 years manually. When you have a good specification, AI compresses the timeline. When you don’t, AI generates confidently wrong code at unprecedented speed.
Requirements Are the Number One Killer. They Always Were. Nobody Listened.
The software industry has spent decades obsessing over code quality — frameworks, linters, testing pyramids, code reviews, pair programming, static analysis. And yet project failure rates remain stubbornly, catastrophically high.
What’s the leading cause? Not bad code. Not wrong technology choices. Not insufficient testing. Requirements.
- 39% of project failures are caused by poor requirements (Zipdo)
- 57% of failing projects cite communication breakdowns
- 70% of digital transformation failures are due to requirements issues (Info-Tech Research Group)
- 65% of Agile projects fail to deliver on time, budget, and quality (Impact Engineering)
- Projects are 54% more likely to succeed when requirements are accurately grounded in a real-world problem
Healthcare.gov: budgeted at $93.7 million, ballooned to $1.7 billion. Not because the engineers were incompetent — because the specification was incomplete and contradictory. The FBI’s Virtual Case File project: five years, $170 million, scrapped entirely. Shifting requirements, poor oversight, unmanageable codebase. They started over.
The hard part of building software has never been writing code. The hard part has always been figuring out what to build. The industry has spent fifty years investing disproportionately in the easy part while systematically underinvesting in the hard part. AI hasn’t changed this asymmetry. It has exposed it.
45% of Features Are Never Used
Perhaps the most damning statistic in all of software engineering: approximately 45% of features in software projects are never used. Not rarely used. Never used. Nearly half of everything the industry builds is waste.
This is not a code quality problem. The code for those features may be perfectly well-written. The problem is upstream: nobody specified correctly which features actually mattered. The specification failed to capture real user needs, and the development process built what was asked for rather than what was needed.
The Bus Factor Is a Specification Problem
When a critical engineer leaves, what actually departs is not their ability to write code — you can hire another coder. What departs is their implicit understanding of what the system should do: the undocumented business rules, the edge cases they handle from memory, the integration quirks they navigate instinctively, the stakeholder compromises they brokered and never wrote down.
In a specification-first organisation, the bus-factor risk approaches zero for specified systems. The specification captures the engineer’s knowledge explicitly. A new engineer — or an AI system — can regenerate the implementation from the spec. The irreplaceable asset was never the person’s coding ability. It was their domain knowledge.
Specification as Code: The Paradigm Shift Nobody Wants to Make
A curious thing is happening. As AI code generation matures, specifications are becoming increasingly precise, structured, and formal. And as they do, they begin to look remarkably like code.
This convergence is not coincidental. AI performs best with precise, unambiguous instructions. Vague requirements produce vague code. Detailed specifications produce working systems. The optimal input to an AI code generator is a specification so detailed that it is functionally equivalent to the code itself.
What This Actually Looks Like
Requirements as versioned artifacts. Specifications live in version control alongside code. Same branching, merging, review, and approval processes. Every code change traces to a specification change. The spec is authoritative; the code is derived.
Specification review replaces code review as the critical gate. Is the requirement complete? Are edge cases enumerated? Are performance constraints documented? Are security requirements explicit? Is the spec unambiguous enough that two independent AI generators would produce functionally equivalent code?
Acceptance criteria are executable. Every requirement includes testable criteria. These criteria are themselves specifications — formal enough that AI can generate test suites directly from them. The test suite is the specification’s proof.
Code becomes regenerable. The ultimate test of specification quality: delete the implementation and regenerate it from the spec. If it works, your specification is the product. If it doesn’t — if there’s implicit knowledge embedded in the code that isn’t captured in the spec — your specification is incomplete.
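To make “executable acceptance criteria” concrete, here is a minimal sketch. All names (the rate-limit scenario, `within_limit`, the criteria themselves) are hypothetical illustrations, not a proposed standard: the point is that the criteria are data, and the test runner is derived from them.

```python
# Hypothetical sketch: acceptance criteria as data, executable as a test.
# Each criterion is (description, inputs, expected result).
ACCEPTANCE_CRITERIA = [
    ("rejects requests over the per-minute limit", {"requests": 101, "limit": 100}, False),
    ("allows requests at exactly the limit", {"requests": 100, "limit": 100}, True),
    ("allows requests under the limit", {"requests": 1, "limit": 100}, True),
]

def within_limit(requests: int, limit: int) -> bool:
    """The regenerable implementation. Delete it, and the criteria
    above are enough context to regenerate it."""
    return requests <= limit

def run_acceptance_suite():
    """Run every criterion against the implementation; return failures."""
    failures = []
    for description, inputs, expected in ACCEPTANCE_CRITERIA:
        if within_limit(**inputs) != expected:
            failures.append(description)
    return failures

print(run_acceptance_suite())  # → [] : an empty list is the spec's proof
```

If the implementation is regenerated and this suite still returns an empty list, the spec held; any failure names the exact criterion the regeneration missed.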
Layer 1 — Business Specification: What the system should accomplish in business terms. Plain language. Owned by product and domain experts.
Layer 2 — Technical Specification: How the system should accomplish it. API contracts, data models, performance constraints, security requirements. Owned by engineering.
Layer 3 — Acceptance Specification: How to verify it works. Test scenarios, edge cases, benchmarks, compliance criteria. Owned jointly.
Each layer feeds AI generation differently. Business specs guide architecture. Technical specs generate code. Acceptance specs generate test suites. Together: a complete, regenerable definition of the system.
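One way the three layers can live as a single versioned artifact is as structured data in the repository. This is a sketch only — the field names and the email-throttling example values are illustrative, not a schema the text prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class BusinessSpec:          # Layer 1: owned by product and domain experts
    goal: str
    success_metric: str

@dataclass
class TechnicalSpec:         # Layer 2: owned by engineering
    api_contract: str
    latency_budget_ms: int
    security_requirements: list[str] = field(default_factory=list)

@dataclass
class AcceptanceSpec:        # Layer 3: owned jointly
    scenarios: list[str] = field(default_factory=list)

@dataclass
class SystemSpec:            # the complete, regenerable definition
    business: BusinessSpec
    technical: TechnicalSpec
    acceptance: AcceptanceSpec

# Illustrative instance — the values are hypothetical.
spec = SystemSpec(
    business=BusinessSpec(
        goal="Throttle outbound email per receiving domain",
        success_metric="Bounce rate below 2%",
    ),
    technical=TechnicalSpec(
        api_contract="POST /v1/send -> 202 | 429",
        latency_budget_ms=50,
        security_requirements=["DKIM signing on every message"],
    ),
    acceptance=AcceptanceSpec(
        scenarios=["A 10k msgs/min burst is smoothed to the configured cap"],
    ),
)
```

Because the artifact is plain code-adjacent data, it gets the same branching, diffing, and review treatment as the implementation it generates.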
This Happened Before. The Assembly Programmers Lost.
In the 1950s, software was machine code — hand-crafted, precious, and specific to individual hardware. Programmers were valued for their ability to write efficient assembly.
Then compilers arrived. Fortran (1957) and COBOL (1959) let programmers specify intent in a higher-level language and let the compiler generate machine code. The reaction was predictable: experienced assembly programmers argued that compilers produced inferior code, that “real” programming required understanding the hardware, that abstraction was a crutch for lesser engineers.
They were right about the code quality. Early compilers produced less efficient machine code than expert assembly programmers.
They were catastrophically wrong about the economics. Compiler-generated code was good enough, and the productivity gains overwhelmed the efficiency losses. Within a decade, hand-crafted assembly became a niche skill. The specification (high-level code) replaced the implementation (machine code) as the artifact that mattered.
We are at the same inflection point. AI code generation is the compiler of our era. The specification is replacing the implementation as the artifact that matters. The generated code will be imperfect — just as compiler-generated assembly was imperfect. It will be good enough. And the productivity gains will overwhelm the quality losses.
The Talent Inversion: Why Your Best Coder Might Be Your Least Valuable Engineer
In the traditional model, the highest-value engineer writes the best code — the “10x developer” who produces clean, efficient, maintainable implementations. In the specification-first model, the highest-value engineer writes the best specs.
These are often different people.
The brilliant coder who can implement anything but struggles to document requirements becomes less valuable. The meticulous systems thinker who writes comprehensive specifications but is a mediocre coder becomes more valuable. The domain expert who understands the business deeply enough to specify edge cases becomes essential.
This doesn’t mean coding skill becomes worthless. Someone must review generated code, debug edge cases, optimise critical paths, handle the 5–10% of problems AI can’t solve. But the centre of gravity shifts. The specification writer is the architect; the code reviewer is quality assurance. The strategic bottleneck — the role that determines success or failure — moves upstream.
“Working software over comprehensive documentation” was a reasonable reaction to waterfall’s documentation pathology. But it created a new pathology: teams that cannot articulate what they are building or why. The Impact Engineering research found that 65% of Agile projects fail to deliver on time, budget, and quality — 268% higher than the failure rate for projects using more specification-intensive methodologies. When code is generated from documentation, documentation is the deliverable. The spec is not overhead. It is the product.
A Real-World Example: Email Infrastructure
Consider a domain I know intimately: high-volume email delivery. An enterprise MTA processing sixty million messages per hour makes thousands of decisions per second — routing, throttling, retry scheduling, reputation management, bounce classification. Each decision is governed by rules accumulated over years of operational experience.
In a code-centric world, these rules live in the code. A throttling algorithm embeds institutional knowledge about how Gmail responds to burst traffic. A bounce classification function encodes the difference between a soft bounce from Yahoo (retry in 30 minutes) and one from corporate Exchange (retry in 4 hours). A reputation scoring model weights dozens of signals tuned through years of production observation.
None of this knowledge is documented outside the code. The code is the specification. And when that code needs to be migrated — which, given a 2.4-year median lifespan, it inevitably will — the institutional knowledge must be extracted from the implementation rather than read from a specification.
In a specification-first world, every throttling rule, every bounce classification, every reputation weight is documented as a specification before it is implemented. The spec survives technology migrations. It survived the transition from Perl to Python to Rust. It will survive the transition from Rust to whatever comes next. The code is the scaffolding. The spec is the building.
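The bounce-handling rules above can be lifted out of imperative code and into a declarative table that survives a language migration. A minimal sketch — the providers, classes, and delay values simply echo the examples in the text and are not real production numbers:

```python
# Hypothetical sketch: retry rules as a spec-like table rather than
# logic buried in an implementation.
RETRY_RULES = {
    # (provider, bounce_class) -> retry delay in minutes; None = never retry
    ("yahoo", "soft"): 30,       # soft bounce from Yahoo: retry in 30 minutes
    ("exchange", "soft"): 240,   # soft bounce from corporate Exchange: 4 hours
    ("any", "hard"): None,       # hard bounce: suppress, don't retry
}

def retry_delay(provider: str, bounce_class: str):
    """Derived implementation: a lookup over the table above.
    Regenerable in any language the table is ported to."""
    if bounce_class == "hard":
        return RETRY_RULES[("any", "hard")]
    return RETRY_RULES.get((provider, bounce_class))

print(retry_delay("yahoo", "soft"))     # → 30
print(retry_delay("exchange", "soft"))  # → 240
print(retry_delay("example.com", "hard"))  # → None
```

Porting this from Perl to Python to Rust means rewriting a ten-line lookup; the institutional knowledge — the table — moves untouched.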
The Competitive Moat Isn’t Code Anymore
If AI can generate equivalent code from the same specification, code is not a differentiator. Two companies with the same spec will produce functionally equivalent systems. The moat must be in the specification — in the domain knowledge that produces better specs than competitors can produce.
This is, counterintuitively, a more durable moat than code. Code can be reverse-engineered, decompiled, or functionally replicated. Specifications that encode deep domain expertise cannot be replicated without the domain expertise itself. The spec is the crystallisation of knowledge. The code is its shadow.
The Regenerability Test: Try This on Monday
The simplest test of specification completeness: delete the code and regenerate it.
If the regenerated system passes all acceptance criteria without human intervention in the code, your specification is complete. If it fails — if there are behaviours embedded in the old code that aren’t captured in the spec — you’ve found a specification gap. Fix the spec, not the code.
This quarter: Pick one upcoming feature. Write the spec before any code. Include business requirements, technical constraints, API contracts, error handling, and executable acceptance criteria. Generate the implementation from the spec. Measure: how much worked on the first attempt? Where it failed, was the failure in the generation or the specification?
This half: Establish specification review as a formal process alongside code review. Track spec defects with the same severity taxonomy. A missing edge case is a critical defect. A vague performance requirement is high-severity.
This year: Run the regenerability test on one critical module. Document its specification completely. Delete the implementation. Regenerate from the spec. The gaps are your implicit knowledge debt. This exercise will be uncomfortable and revealing.
Addressing the Inevitable Objections
“AI-generated code isn’t good enough for production.” GitClear’s data supports this today. But early compilers also produced inferior code. The question is whether AI is improving faster than “good enough” is rising. Based on quarter-over-quarter improvements from Google, Microsoft, and Meta — yes.
“Specifications are too rigid for agile development.” This confuses specification with waterfall. Changing a spec and regenerating is faster than refactoring existing code. The problem with waterfall specs wasn’t that they existed — it was that they were written once and never updated.
“You can’t specify everything upfront.” Correct. Don’t try. Write what you know, generate, test, discover what you missed, update the spec, regenerate. The specification grows incrementally. The difference: knowledge is captured in the spec rather than embedded invisibly in the code.
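The specify–generate–test–update loop can be sketched in a few lines. Everything here is a deliberately toy stand-in: `generate` fakes an AI generator by building a function from whatever examples the spec currently contains, so the only point demonstrated is that a discovered failure is folded back into the spec, not patched into the code.

```python
def generate(spec: dict):
    """Toy stand-in for an AI code generator: derives a function
    from the examples currently captured in the spec."""
    table = dict(spec["examples"])
    return lambda x: table.get(x, 0)  # unknown inputs fall back to 0

def run_acceptance(impl, criteria):
    """Return every (input, expected) pair the implementation gets wrong."""
    return [(x, want) for x, want in criteria if impl(x) != want]

spec = {"examples": [(1, 2), (2, 4)]}   # what we know upfront
criteria = [(1, 2), (2, 4), (3, 6)]     # includes a case discovered in testing

failures = run_acceptance(generate(spec), criteria)
print(failures)                          # → [(3, 6)] : the missed edge case

spec["examples"] += failures             # fix the spec, not the code...
print(run_acceptance(generate(spec), criteria))  # → [] : ...then regenerate
```

The knowledge gained from the failure now lives in `spec`, where the next regeneration — in any language, by any generator — inherits it.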
“This just moves the bottleneck.” Yes. Deliberately. It moves the bottleneck from code production — which AI is commoditising — to specification quality, which requires irreducibly human skills. Moving the bottleneck to work humans do best and AI does worst is the rational response.
“Our legacy codebase can’t be regenerated from specs.” Almost certainly true today. The migration path is incremental: for new features, spec first. For legacy, extract specs from existing code (AI is excellent at this). Over time, coverage increases. Strangler fig pattern applied to specification.
What This Paper Gets Wrong (Probably)
Intellectual honesty requires acknowledging where this argument rests on inference rather than proof:
- No controlled study exists of specification-first AI development at enterprise scale. The argument extrapolates from Google and Airbnb migrations and general project failure research.
- CEO earnings call statistics are marketing. The 30% figure is self-reported by executives with incentives to demonstrate AI adoption.
- The specification bottleneck may be harder than anticipated. If writing precise specs is harder than writing code, the net productivity gain may be smaller than projected.
- Domain complexity varies dramatically. This applies most clearly to business applications. Safety-critical and real-time embedded systems may not benefit the same way.
Your code is disposable. It always has been. The difference is that AI has made the disposal cost approach zero. What remains — the irreducible kernel of value — is knowing what to build and specifying it precisely enough that any system, human or artificial, can build it correctly. The spec is the product. The code is just the latest rendering.
References
Spinellis, D. (2021). Software evolution: the lifetime of fine-grained elements. Empirical Software Engineering, 26, Article 108.
GitClear. (2025). AI Copilot Code Quality: 2025 Look Back at 12 Months of Data.
GitClear. (2024). Coding on Copilot: Data Suggests Downward Pressure on Code Quality.
Pichai, S. (2025). Alphabet Q1 2025 Earnings Call. April 24, 2025.
Nadella, S. (2025). Fireside chat with Mark Zuckerberg, LlamaCon. Meta Platforms.
Standish Group. (2020). CHAOS 2020 Report: Beyond Infinity.
McKinsey & Company. (2020). Delivering Large-Scale IT Projects On Time, On Budget, and On Value.
CISQ. (2020). The Cost of Poor Software Quality in the US: A 2020 Report.
BCG. (2020). Digital Transformation Failure Rates.
Zipdo. (2023). Essential Software Project Failure Statistics.
Info-Tech Research Group. (2023). Digital Transformation Failure Analysis.
Ali, J. (2024). Impact Engineering: How to Build Software That Actually Works.
Google DORA Research. (2025). State of AI-assisted Software Development Report.
Thoughtworks. (2025). Technology Radar: Complacency with AI-Generated Code.
Amazon. (2024). AI-assisted modernisation of 30,000+ Java applications.
Airbnb Engineering. (2024). LLM-powered test file migration: 3,500 files in six weeks.
Scott, K. (2025). Predictions on AI code generation. 20VC Podcast.
Amodei, D. (2025). Predictions on AI code generation.
Zuckerberg, M. (2025). LlamaCon keynote. Meta Platforms.
Still treating your code like a cathedral?
We build specification-first, with AI agents that regenerate implementations from formal specs — reviewed adversarially, tested at 100% coverage, and gated at 100/100. The code is disposable. The process is not.