We are betting the next two years of engineering strategy on a single variable: the slope of the AI progress curve.

If it’s exponential, your junior hiring plan is dead. If it’s a plateau, you’re overpaying for hype. If it’s linear, you have time—but not as much as you think.

Here’s the framework I use to hedge between these futures.


Why This Matters Now: The Coordination Tax

The hidden constraint isn’t AI capability—it’s verification throughput.

Here’s the math: An LLM generates a 500-line PR in 30 seconds. Your senior dev needs 45 minutes to verify the logic, edge cases, and security implications. Run 10 AI agents overnight? You’ve created 7.5 hours of review work before standup.

This is the coordination tax. It doesn’t scale.
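
A back-of-the-envelope model makes the asymmetry concrete. A minimal sketch in Python, using the illustrative figures above rather than measured data:

```python
# Coordination-tax arithmetic: generation is cheap, verification is not.
# Constants are the illustrative figures from the example above.

GEN_SECONDS_PER_PR = 30       # LLM generates a 500-line PR in ~30 s
REVIEW_MINUTES_PER_PR = 45    # senior dev needs ~45 min to verify it
AGENTS = 10                   # agents run overnight, one PR each

generation_hours = AGENTS * GEN_SECONDS_PER_PR / 3600
review_hours = AGENTS * REVIEW_MINUTES_PER_PR / 60

print(f"Generation: {generation_hours:.2f} h")  # 0.08 h
print(f"Review:     {review_hours:.1f} h")      # 7.5 h
print(f"Verification costs {review_hours / generation_hours:.0f}x generation")  # 90x
```

Each agent you add buys you seconds of generation and costs you most of an hour of senior review.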

The infrastructure gap makes it worse:

  • Versioned state is immature — We’re asking AI to regenerate systems, but we can’t reliably diff or roll back AI-authored changes. If an AI rewrites your payment module and something breaks, can you recover? Most teams can’t. (A minimal checkpoint sketch follows this list.)
  • Token economics hit walls — “Regenerate everything” approaches that work on toy projects become economically insane at scale.
  • Legal frameworks lag — Who’s liable when an AI agent deploys a vulnerability at 3am? Nobody knows yet.
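
On the first point, one way to start closing the recoverability gap is to treat every agent run as a transaction: checkpoint, run, verify, roll back on failure. A minimal sketch over plain git; `run_agent` and `verify` are hypothetical stand-ins for your own tooling:

```python
# Checkpoint the repo before an agent run; hard-reset if verification fails.
# Assumes a git work tree. run_agent() and verify() are hypothetical.
import subprocess
from typing import Callable

def git(*args: str) -> str:
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

def checkpointed_agent_run(run_agent: Callable[[], None],
                           verify: Callable[[], bool]) -> bool:
    git("add", "-A")
    git("commit", "--allow-empty", "-m", "checkpoint: before agent run")
    checkpoint = git("rev-parse", "HEAD")
    run_agent()                                   # agent edits the work tree
    git("add", "-A")
    git("commit", "--allow-empty", "-m", "agent: proposed change")
    if verify():                                  # tests, linters, review gate
        return True
    git("reset", "--hard", checkpoint)            # discard the AI-authored change
    return False
```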

This is why I’ve revised my original timeline predictions. The direction was right; the speed was wrong.


The Three Futures

1. Exponential (Kurzweil/Amodei)

Thesis: AI capabilities double every 6-12 months.

| Evidence | Interpretation |
| --- | --- |
| 183% benchmark improvement in 12 days (OpenAI, Apr 2024) | Narrow benchmarks can spike without general capability gains. But the trajectory is real. |
| 50% job automation within 5 years (Amodei testimony) | High end of credible estimates. Assumes infrastructure catches up. |

If true: Junior roles are already evaporating. Implementation becomes a commodity by 2027.

Your Wednesday: Your team doesn’t write code; they spend 8 hours red-teaming 50,000 lines of agent-generated logic.


2. Linear (Historical Baseline)

Thesis: Progress is steady, not explosive. Each improvement costs proportionally more.

| Evidence | Interpretation |
| --- | --- |
| 15% productivity uplift (Thoughtworks Looking Glass 2026) | That’s 15% in time-to-delivery, not lines of code. We’re clearing backlogs faster, not replacing devs. |
| 90 nuclear plants needed by 2030 (Goldman Sachs) | AI training is power-hungry. Physics doesn’t care about VC timelines. |

If true: Double all timelines. Organizations have time to adapt. Retraining might actually work.

Your Wednesday: You still use JIRA. Your “junior” is a GPT-5 wrapper that handles 40% of the unit tests. You still debug by hand.


3. Plateau (Diminishing Returns)

Thesis: Easy tasks are solved; hard tasks remain hard.

| Evidence | Interpretation |
| --- | --- |
| Coding assistants struggle with multi-step reasoning | Current LLMs fail on problems requiring 5+ logical steps. Architecture is mostly 5+ step problems. |
| “AI-generated code can introduce vulnerabilities” (Thoughtworks) | Security requires paranoid edge-case reasoning. LLMs optimize for the happy path. |

If true: AI becomes an accelerator, not a replacement. The “50% that stays” is more like “80% that adapts.”

Your Wednesday: You’ve banned AI from the core payment engine because the hallucination rate on edge cases hasn’t dropped in 12 months.


My Revised Bet

The truth is domain-specific: exponential on narrow tasks, linear on system adoption, plateau on the hardest problems.

| Prediction | Exponential | Linear | Plateau | My Bet |
| --- | --- | --- | --- | --- |
| Boilerplate devs displaced | ✅ 2025 | ✅ 2025 | ✅ 2025 | ✅ 2025 |
| “Orchestrate agents” becomes the job | ✅ Late 2025 | 🟡 Late 2026 | 🔴 Stalled | 🟡 Mid 2026 |
| AI rewrites whole modules | ✅ Mid 2026 | 🔴 2028 | 🔴 Never | 🔴 2027-28 |
| Autonomous overnight implementation | ✅ Early 2027 | 🔴 2029-30 | 🔴 Never | 🔴 2028-29 |

Monday Morning Tactics

Whatever future materializes, here’s what to do tomorrow:

1. Contract-Driven Development
If AI writes the implementation, humans must be elite at defining the contract. Invest in OpenAPI and Protobuf skills. If you don’t have a strict spec, an agentic coder will invent its own types and your CI/CD will explode.
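
Here’s a minimal sketch of what contract-first looks like in practice, using pydantic models as the spec the agent must conform to (the Refund* names are hypothetical; the same idea applies to OpenAPI or Protobuf definitions):

```python
# Pin the contract down before any agent writes a line. The agent receives
# the schema and must conform to it instead of inventing its own types.
from decimal import Decimal
from enum import Enum
from pydantic import BaseModel, Field

class RefundStatus(str, Enum):
    APPROVED = "approved"
    REJECTED = "rejected"
    PENDING_REVIEW = "pending_review"

class RefundRequest(BaseModel):
    order_id: str = Field(min_length=1)
    amount: Decimal = Field(gt=0, decimal_places=2)
    reason: str

class RefundResponse(BaseModel):
    status: RefundStatus
    transaction_id: str | None = None

# Export a JSON Schema to embed in the agent's prompt or an OpenAPI document.
print(RefundRequest.model_json_schema())
```

Validate at the boundary and a nonconforming agent response fails loudly at parse time, not in production.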

2. Test-Harness First
Stop writing code; write exhaustive test suites that “trap” the AI into the correct implementation. Comprehensive tests let AI iterate until it passes. Weak tests mean debugging hallucinations.
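
For instance, a harness like the sketch below pins the edge cases before any implementation exists; `apply_discount` and the `solution` module are hypothetical, and the loop is simply “regenerate until pytest is green”:

```python
# Tests written first, implementation generated second. The edge cases are
# the trap: a happy-path-only implementation cannot pass this suite.
import pytest
from solution import apply_discount  # agent-generated module under test

def test_happy_path():
    assert apply_discount(price=100.0, percent=10) == pytest.approx(90.0)

def test_zero_and_full_discount():
    assert apply_discount(price=100.0, percent=0) == pytest.approx(100.0)
    assert apply_discount(price=100.0, percent=100) == pytest.approx(0.0)

@pytest.mark.parametrize("percent", [-1, 101])
def test_out_of_range_rejected(percent):
    with pytest.raises(ValueError):   # the edge cases happy-path code forgets
        apply_discount(price=100.0, percent=percent)

def test_negative_price_rejected():
    with pytest.raises(ValueError):
        apply_discount(price=-5.0, percent=10)
```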

3. Modular Decoupling
AI struggles with monoliths. If a service takes more than 10 minutes for a human to context-shift into, an AI will hallucinate the state. Decouple now so AI can “eat” one small service at a time.
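
As a rough illustration, a boundary an agent can regenerate in one pass is a narrow interface with no shared globals and dependencies passed in explicitly (`InventoryService` and its methods are hypothetical names):

```python
# A service boundary small enough for one agent pass: callers depend on the
# contract, so any implementation can be rewritten without touching them.
from typing import Protocol

class InventoryService(Protocol):
    def reserve(self, sku: str, quantity: int) -> bool: ...
    def release(self, sku: str, quantity: int) -> None: ...

def place_order(inventory: InventoryService, sku: str, quantity: int) -> str:
    # Depends only on the Protocol above; an AI can regenerate any
    # InventoryService implementation and this module never changes.
    if not inventory.reserve(sku, quantity):
        raise RuntimeError(f"insufficient stock for {sku}")
    return f"order placed: {quantity} x {sku}"
```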

4. Audit Everything
If a human can’t audit your system, an AI can’t orchestrate it. Invest in observability, version control for data (not just code), and reproducible builds.
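
One concrete starting point is an append-only, hash-chained log of agent actions, sketched below; the field names are illustrative, not a standard schema:

```python
# Append-only audit log: each record chains to a hash of everything logged
# before it, so editing history after the fact invalidates later records.
import hashlib
import json
import time

def append_audit_event(log_path: str, event: dict) -> dict:
    try:
        with open(log_path, "rb") as f:
            prev_hash = hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        prev_hash = "genesis"
    record = {"ts": time.time(), "prev": prev_hash, **event}
    with open(log_path, "a") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")
    return record

append_audit_event("agent_audit.jsonl", {
    "agent": "overnight-refactor",
    "action": "opened_pr",
    "commit": "abc1234",          # illustrative placeholder
    "tests_passed": True,
})
```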


The Honest Conclusion

Direction: Right. We’re moving from implementation to intent.

Speed: Wrong. The transition will be messier, slower, and more domain-specific than the hype suggests.

The smart move: Bet on linear with optionality for exponential.

Don’t refactor your whole stack for AI agents yet. Refactor for auditability. Build skills that matter in all three futures—contract design, systems thinking, architectural judgment.

The signals are clarifying right now—early 2026 is the inflection point. Hedge by building systems that are legible to both humans and machines.


This is a follow-up to my 24-month timeline prediction, revised after reading Thoughtworks Looking Glass 2026 and confronting the infrastructure constraints I’d underestimated.