Three Futures: Exponential, Linear, or Plateau?
We are betting the next two years of engineering strategy on a single variable: the slope of the AI progress curve.
If it's exponential, your junior hiring plan is dead. If it's a plateau, you're overpaying for hype. If it's linear, you have time, but not as much as you think.
Here's the framework I use to hedge between these futures.
Why This Matters Now: The Coordination Tax
The hidden constraint isn't AI capability; it's verification throughput.
Here's the math: An LLM generates a 500-line PR in 30 seconds. Your senior dev needs 45 minutes to verify the logic, edge cases, and security implications. Run 10 AI agents overnight? You've created 7.5 hours of review work before standup.
This is the coordination tax. It doesn't scale.
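The arithmetic above can be sketched in a few lines; the constants are the article's own assumptions (45 minutes of senior review per AI-generated PR, 10 agents each producing one PR overnight):

```python
# Back-of-envelope model of the coordination tax.
# Constants are the assumptions from the text above.
REVIEW_MINUTES_PER_PR = 45  # senior time to verify one 500-line AI PR
AGENTS_OVERNIGHT = 10       # agents each producing one PR overnight

review_hours = AGENTS_OVERNIGHT * REVIEW_MINUTES_PER_PR / 60
print(f"Review backlog before standup: {review_hours} hours")  # 7.5 hours
```

The key property: review time scales linearly with agent count, while generation time is effectively free. Adding agents only deepens the backlog.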
The infrastructure gap makes it worse:
- Versioned state is immature: We're asking AI to regenerate systems, but we can't reliably diff or roll back AI-authored changes. If an AI rewrites your payment module and something breaks, can you recover? Most teams can't.
- Token economics hit walls: "Regenerate everything" approaches that work on toy projects become economically insane at scale.
- Legal frameworks lag: Who's liable when an AI agent deploys a vulnerability at 3am? Nobody knows yet.
This is why I've revised my original timeline predictions. The direction was right; the speed was wrong.
The Three Futures
1. Exponential (Kurzweil/Amodei)
Thesis: AI capabilities double every 6-12 months.
| Evidence | Interpretation |
|---|---|
| 183% benchmark improvement in 12 days (OpenAI, Apr 2024) | Narrow benchmarks can spike without general capability gains. But the trajectory is real. |
| 50% job automation within 5 years (Amodei testimony) | High end of credible estimates. Assumes infrastructure catches up. |
If true: Junior roles are already evaporating. Implementation becomes commodity by 2027.
Your Wednesday: Your team doesn't write code; they spend 8 hours red-teaming 50,000 lines of agent-generated logic.
2. Linear (Historical Baseline)
Thesis: Progress is steady, not explosive. Each improvement costs proportionally more.
| Evidence | Interpretation |
|---|---|
| 15% productivity uplift (Thoughtworks Looking Glass 2026) | That's 15% in time-to-delivery, not lines of code. We're clearing backlogs faster, not replacing devs. |
| 90 nuclear plants needed by 2030 (Goldman Sachs) | AI training is power-hungry. Physics doesn't care about VC timelines. |
If true: Double all timelines. Organizations have time to adapt. Retraining might actually work.
Your Wednesday: You still use JIRA. Your "junior" is a GPT-5 wrapper that handles 40% of the unit tests. You still debug by hand.
3. Plateau (Diminishing Returns)
Thesis: Easy tasks are solved; hard tasks remain hard.
| Evidence | Interpretation |
|---|---|
| Coding assistants struggle with multi-step reasoning | Current LLMs fail on problems requiring 5+ logical steps. Architecture is mostly 5+ step problems. |
| "AI-generated code can introduce vulnerabilities" (Thoughtworks) | Security requires paranoid edge-case reasoning. LLMs optimize for the happy path. |
If true: AI becomes an accelerator, not a replacement. The "50% that stays" is more like "80% that adapts."
Your Wednesday: You've banned AI from the core payment engine because the hallucination rate on edge cases hasn't dropped in 12 months.
My Revised Bet
The truth is domain-specific: exponential on narrow tasks, linear on system adoption, plateau on the hardest problems.
| Prediction | Exponential | Linear | Plateau | My Bet |
|---|---|---|---|---|
| Boilerplate devs displaced | ✅ 2025 | ✅ 2025 | ✅ 2025 | ✅ 2025 |
| "Orchestrate agents" becomes the job | ✅ Late 2025 | 🟡 Late 2026 | 🔴 Stalled | 🟡 Mid 2026 |
| AI rewrites whole modules | ✅ Mid 2026 | 🔴 2028 | 🔴 Never | 🔴 2027-28 |
| Autonomous overnight implementation | ✅ Early 2027 | 🔴 2029-30 | 🔴 Never | 🔴 2028-29 |
Monday Morning Tactics
Whatever future materializes, here's what to do tomorrow:
1. Contract-Driven Development
If AI writes the implementation, humans must be elite at defining the contract. Invest in OpenAPI and Protobuf skills. If you don't have a strict spec, an agentic coder will invent its own types and your CI/CD will explode.
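A minimal sketch of what "contract first" means in practice, in plain Python. The `PaymentRequest` schema is hypothetical; in a real pipeline this role is played by an OpenAPI or Protobuf definition enforced in CI. The point is that the human-owned contract rejects any agent output whose shape drifts:

```python
# Hedged sketch: the contract is human-owned; generated code must conform.
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class PaymentRequest:        # hypothetical contract
    amount_cents: int
    currency: str
    idempotency_key: str

def conforms(payload: dict) -> bool:
    """Reject any payload whose keys or types drift from the contract."""
    expected = {f.name: f.type for f in fields(PaymentRequest)}
    if set(payload) != set(expected):
        return False
    return all(isinstance(payload[name], typ) for name, typ in expected.items())

ok = conforms({"amount_cents": 1999, "currency": "EUR", "idempotency_key": "abc"})
drifted = conforms({"amount": 19.99, "currency": "EUR"})  # agent invented its own shape
```

An agent that invents `amount` as a float fails fast at the contract boundary instead of exploding three services downstream.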
2. Test-Harness First
Stop writing code; write exhaustive test suites that "trap" the AI into the correct implementation. Comprehensive tests let AI iterate until it passes. Weak tests mean debugging hallucinations.
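To illustrate the "trap" idea: the harness below is written before any implementation exists and pins down edge cases, not just the happy path. `slugify` is a hypothetical function; the placeholder body stands in for whatever an agent generates and must survive every assertion:

```python
# Sketch: the trap is written first; the implementation iterates until it passes.
def slugify(title: str) -> str:
    # Placeholder standing in for an AI-generated implementation.
    return "-".join(title.lower().split())

# The harness the AI must satisfy:
assert slugify("Hello World") == "hello-world"            # happy path
assert slugify("  padded   spaces  ") == "padded-spaces"  # whitespace edge case
assert slugify("ALREADY-SLUGGED") == "already-slugged"    # casing edge case
```

With a weak harness (only the first assertion), any of a dozen wrong implementations pass; with a strong one, the search space collapses toward correct behavior.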
3. Modular Decoupling
AI struggles with monoliths. If a service takes more than 10 minutes for a human to context-shift into, an AI will hallucinate the state. Decouple now so AI can "eat" one small service at a time.
4. Audit Everything
If a human canāt audit your system, an AI canāt orchestrate it. Invest in observability, version control for data (not just code), and reproducible builds.
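One concrete piece of "version control for data" is content-addressing: fingerprint each artifact so any change is detectable and attributable, the way git treats code. A minimal sketch (the CSV payloads are invented examples):

```python
# Sketch: content-address data artifacts so changes are diffable and auditable.
import hashlib

def data_version(payload: bytes) -> str:
    """Stable fingerprint: same bytes -> same version; any change -> new one."""
    return hashlib.sha256(payload).hexdigest()[:12]

v1 = data_version(b"customer,balance\nacme,100\n")
v2 = data_version(b"customer,balance\nacme,101\n")  # one character changed
```

If an agent silently mutates a dataset overnight, the fingerprint changes and the audit trail shows exactly which version each build consumed.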
The Honest Conclusion
Direction: Right. We're moving from implementation to intent.
Speed: Wrong. The transition will be messier, slower, and more domain-specific than the hype suggests.
The smart move: Bet on linear with optionality for exponential.
Don't refactor your whole stack for AI agents yet. Refactor for auditability. Build skills that matter in all three futures: contract design, systems thinking, architectural judgment.
The signals are clarifying right now; early 2026 is the inflection point. Hedge by building systems that are legible to both humans and machines.
This is a follow-up to my 24-month timeline prediction, revised after reading Thoughtworks Looking Glass 2026 and confronting the infrastructure constraints I'd underestimated.