Introduction
In just a few weeks, the AI conversation has moved from “interesting tools” to operating model disruption. Matt Shumer describes it as a “February 2020 moment”—not because the events are comparable, but because the speed is. Sometimes change doesn’t arrive gradually; it arrives in a short burst, and the organizations operating with last month’s assumptions suddenly discover they’re behind.
Brian Solis makes the leadership version of the same argument: the bigger story isn't capability hype, it's who is closing the AI gap (the widening distance between what frontier AI can do and what organizations actually convert into measurable value).
That’s the core message of this post:
Most companies aren’t “behind on AI.” They’re behind on the organizational shape required to keep up.
Are we in an AI Bubble?
The “bubble” narrative made more sense when spending was dominated by training: huge clusters, huge one-off runs, front-loaded cost. But the economic center of gravity is shifting to the inference era, where models run continuously—often for fleets of agents, 24/7.
Nate Jones explains the key dynamic: training is expensive but “bursty”; inference is cheaper per unit but “never ever stops,” and agents multiply demand dramatically. That changes the investment story:
- Agentic workflows multiply inference calls (contract review, auditing, coding, compliance) and compound demand across the enterprise.
- Supply is constrained (memory, DRAM, HBM lead times), and large buyers can lock up allocation.
- Cited TrendForce projections suggest memory costs could add 40–60% to inference infrastructure in H1 2026, with effective inference costs potentially rising sharply within ~18 months.
If AI becomes a default interface layer for work, inference demand won’t “normalize.” It compounds.
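The compounding claim is easy to sanity-check with back-of-envelope arithmetic. In the sketch below, all workload numbers (workflow counts, call volumes, unit costs, the 5x agent multiplier) are invented for illustration; only the ~40–60% memory-cost uplift range comes from the TrendForce projection cited above:

```python
# Back-of-envelope inference economics. All workload numbers are invented
# for illustration; only the ~40-60% memory uplift range is from the post.
def monthly_inference_cost(workflows, calls_per_workflow_per_day, cost_per_call, memory_uplift):
    """Monthly spend = total calls x unit cost, scaled by a memory-cost uplift."""
    calls_per_month = workflows * calls_per_workflow_per_day * 30
    return calls_per_month * cost_per_call * (1 + memory_uplift)

# Baseline: 50 workflows, 200 calls/day each, $0.002 per call, no uplift.
baseline = monthly_inference_cost(50, 200, 0.002, 0.0)

# Agentic shift: agents multiply calls 5x while memory adds ~50% to unit cost.
agentic = monthly_inference_cost(50, 1000, 0.002, 0.5)

print(f"baseline ${baseline:,.0f}/mo -> agentic ${agentic:,.0f}/mo "
      f"({agentic / baseline:.1f}x)")
```

The point of the toy model: demand multiplication and unit-cost inflation compound multiplicatively. With these invented inputs, a 5x call increase plus a 50% cost uplift is a 7.5x monthly bill, which is why inference spend compounds rather than normalizes.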
Executive implication: Treat AI compute—and the ability to route workloads across providers/models—as a strategic dependency (like energy or logistics), not as a typical IT line item.
The pace of change is measured in weeks, not months or years
Shumer’s “three weeks” analogy is a timing point: progress can look stable, then your mental model becomes outdated overnight. His claim is that AI is entering a phase of sharp capability steps, arriving faster than most organizations can absorb.
What that looks like:
Thresholds, not increments
The shift isn’t “5% better.” Models increasingly plan, decide next steps, and complete multi-step work with less supervision—making whole categories of tasks suddenly feasible.
Work patterns change immediately
When AI covers more of an end-to-end workflow, teams don’t just speed up; they shorten cycles, run more parallel experiments, and ship more frequently.
The bottleneck moves to leadership and fluency
Solis’ point: access to models matters, but the limiting factor becomes whether leaders can recognize what changed, redesign workflows, and convert capability into customer value.
It feels sudden
Nate Jones describes a narrative flip that happened “in weeks” because what teams can do changes quickly.
Executive implication: If the frontier advances monthly (sometimes faster), annual planning cycles and quarterly “AI roadmap reviews” become structurally too slow. Winners aren’t only those with better technology—they’re those with operating habits that detect change early, decide fast, and redeploy resources without friction.
Real-world breakthroughs that changed the product-lifecycle mentality
Example A — Anthropic: Claude Co-work built in 10 days (and why that matters)
A standout example is Claude Co-work, shipped in 10 days after Anthropic observed developers using the underlying coding agent for non-coding work (e.g., organizing receipts into spreadsheets).
The strategic point: this isn't only "faster delivery." It signals a new rhythm: build, instrument, observe, ship—where traditional gating becomes the slowest part of the system.
Example B — OpenAI: AI building the next AI (compounding the curve)
OpenAI documentation has claimed GPT-5.3 Codex was “instrumental in creating itself,” used to debug training, manage deployment, and diagnose evaluations.
Whatever your interpretation, the organizational takeaway is practical: iteration loops compress when tools accelerate their own improvement—making “weeks-not-years” a realistic baseline.
Example C — Amazon / AWS: Cairo and the move to spec-first discipline
Nate Jones highlights AWS launching “Cairo,” framed not as faster code generation but as forcing testable specifications before generation, because error rates and review burden became material.
This reveals the hidden shift: the quality work moves from reviewing generated code to specifying the intended behavior precisely, up front.
Executive implication: AI doesn't eliminate process—it relocates it upstream (clarity, definition, verification) and downstream (distribution, adoption, governance).
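What "testable specifications before generation" can look like in practice: a minimal sketch in which the spec is a set of executable acceptance checks that exist before any code is written. Everything here (the function name, the rules, the gating loop) is invented for illustration and is not AWS's actual Cairo interface:

```python
# Hypothetical spec-first workflow: the spec exists, and is executable,
# before any code is generated. Not AWS's actual Cairo interface.
SPEC = {
    "name": "normalize_invoice_id",
    "examples": [  # (input, required output) pairs act as acceptance checks
        ("abc-001", "ABC-001"),    # must uppercase
        (" abc-001 ", "ABC-001"),  # must trim whitespace
    ],
}

def verify(candidate, spec):
    """Return failing (input, expected, got) triples; an empty list means pass."""
    return [(i, want, candidate(i))
            for i, want in spec["examples"] if candidate(i) != want]

# Stand-in for model-generated code; it only ships if verify() returns [].
generated = lambda s: s.strip().upper()
assert verify(generated, SPEC) == []
```

The design choice this illustrates: the review burden moves upstream. Humans argue about the examples in the spec; whether generated code ships is then gated mechanically, not by line-by-line review.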
“Your org setup is wrong” — what that actually means
Most org charts, funding models, and governance processes were designed for a world where execution was the slow, expensive part and deciding what to build was comparatively cheap.
AI flips the ratio. As Nate Jones put it:
“The meeting to discuss a feature can take longer than building the feature; the PRD can take longer than the prototype.”
He uses a manufacturing analogy: remove a bottleneck and it doesn’t disappear—it moves. With AI reducing execution constraints, bottlenecks shift to clarity, ambition, distribution, and relationships.
Solis echoes the leadership point: don’t hand people a tool and call it transformation—build fluency and an operating model that closes the capability gap.
So “wrong setup” often looks like:
AI sits inside IT as a project, not inside the business as an operating capability
Governance assumes multi-year stability while the component stack shifts quarterly
Teams optimize for output volume, not defined intent → verified outcome → captured value
Leaders outsource understanding—and confuse activity with progress
AI Fluency: why you must measure it continuously
In “Digital Fluency vs Digital Transformation,” Harry Mamangakis argued transformation is not an end state; you must continuously measure your fluency. That’s even more true for AI: it’s not a one-time plan—it’s an ongoing capability you must practice, measure, and renew.
Solis argues adoption moves at the pace of leadership and culture, framing “Cognitive Darwinism” as a reimagination of work and leadership.
Fluency can be measured across three dimensions: Technology, Value Delivered, and Business Agility. Your position across these axes shows where to invest next.
Here’s an executive-ready AI Fluency Scorecard to review quarterly (and instrument monthly):
Goal:
Swap models/tools/infrastructure without re-platforming the business.
Signals:
- Lock-in risk (if your vendor falls behind, do you fall behind?)
- Time to switch a core model/component (days/weeks)
- % workflows behind a stable AI layer (routing, evals, logs, guardrails)
- Ability to run hybrid or multi-provider when cost/supply shifts
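A "stable AI layer" of the kind these signals describe can be sketched in a few lines: workflows call one routing interface rather than a provider SDK, so swapping a model is a registration change, not a re-platform. All names below are hypothetical; real provider clients would be wrapped behind the same one-string signature:

```python
# Minimal sketch of a "stable AI layer". Every name here is hypothetical;
# real provider SDKs would be wrapped behind the same uniform interface.
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class ModelRoute:
    provider: str
    model: str
    call: Callable[[str], str]  # provider client hidden behind one signature

class AILayer:
    def __init__(self) -> None:
        self.routes: Dict[str, ModelRoute] = {}
        self.default: Optional[str] = None

    def register(self, name: str, route: ModelRoute) -> None:
        self.routes[name] = route
        if self.default is None:
            self.default = name  # first registered route becomes the default

    def complete(self, prompt: str, route: Optional[str] = None) -> str:
        # One choke point for logging, evals, and guardrails on every call.
        r = self.routes[route or self.default]
        return r.call(prompt)

layer = AILayer()
layer.register("primary", ModelRoute("vendor-a", "model-x", lambda p: f"[model-x] {p}"))
layer.register("fallback", ModelRoute("vendor-b", "model-y", lambda p: f"[model-y] {p}"))

print(layer.complete("summarize Q3 risks"))              # served by the primary route
print(layer.complete("summarize Q3 risks", "fallback"))  # switching is config, not re-platforming
```

The lock-in signal above becomes measurable here: the more workflows call `complete()` instead of a vendor SDK directly, the lower the time-to-switch when cost or supply shifts.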
This isn’t bureaucracy. It’s how you avoid operating on an expired map.
What to do next: a pragmatic operating model shift
Your AI strategy cannot be a static roadmap. In a world of frequent capability jumps, the winning approach is an operating model that continuously absorbs change, innovates into measurable workflows, and (when ready) disrupts how you deliver value.
Absorb:
Make AI fluency a leadership standard. Build shared vocabulary, define value zones, and establish guardrails (security, compliance, evaluation, observability).
Innovate:
Convert capability into measurable workflows: pick a small number of high-value use cases, instrument them end to end, and track defined intent → verified outcome → captured value.
Disrupt:
Once you can ship reliably, push beyond internal efficiency into new customer experiences, new service levels, new products, and new delivery economics.
Run an AI operating cadence: review the fluency scorecard quarterly, instrument its signals monthly, and reallocate people and budget when a capability jump lands.
This is the minimum structure required to move at the speed the environment demands.
Conclusion: Adapt—or become irrelevant by default
Solis puts it bluntly: AI won’t wait for an organization’s comfort with change; it rewards companies that build fluency and redesign leadership, decision-making, and value delivery.
The winners won’t be the ones with the most pilots. They’ll be the ones that redesign how decisions are made, how work is specified, and how learning compounds—fast enough to keep up with an accelerating curve.
If your org isn’t changing already, the market won’t wait for your next planning cycle.
References:
- "Something Big is Happening", Matt Shumer, 2/9/26
- "Something big is happening with AI, but the bigger story is who is closing the AI gap", Brian Solis, 12/2/26
- "The AI Bubble Died", Nate Jones, 14/2/26
- "Digital Transformation or Digital Fluency", Harry Mamangakis, 5/5/2020
- "Igniting AI Transformation: How To Future-Proof Your Company Against Cognitive Darwinism", Brian Solis, 9/11/25
- "Why $650 Billion in AI Spending ISN'T Enough. The 4 Skills that Survive and What This Means for You", Nate Jones, 14/2/2026
- "Claude Opus 4.6: The Biggest AI Jump I've Covered – It's Not Close", Nate Jones, 11/2/2026





