Saturday, March 28, 2026

AI First or Be Left Behind: The Real Question Is Not Adoption, It’s Consumption

There are moments in technology where the shift is not in what we use, but in how deeply we use it. Those moments are uncomfortable, because they force a different kind of question—not “Are we adopting this?” but “Are we using enough of it to stay relevant?”

I was reminded of this while revisiting two very different but connected ideas.

Andrew Grove once articulated a simple but unsettling principle: the leaders who survive are not the most confident ones, but the ones who sense a shift early and act before the evidence becomes obvious. That instinct—almost paranoia—is what forces change before it is too late.

Decades later, in a completely different context, Jensen Huang made a statement that at first sounds exaggerated, almost provocative. He suggested that if a highly paid engineer is not consuming a significant amount of AI tokens, it should raise concern. The comparison he draws is not subtle: working without AI in today’s world is similar to a chip designer refusing to use modern design tools.

Strip away the rhetoric, and what remains is a powerful signal.

The shift in AI is not about access.
It is about consumption intensity.

The Shift: From Having AI to Using AI at Scale

Most organizations today are already “using AI.”

Teams generate content. Developers use copilots. Analysts experiment with models. Presentations look sharper, code is written faster, emails sound more refined.

On the surface, adoption looks healthy.

But this is where a dangerous illusion begins.

Because the real shift is not:

Do you have AI?

It is:

How much AI are you actually consuming to amplify your work?

In earlier technology cycles, tools improved productivity linearly. In AI, usage directly correlates with capability. The more you use, the more you learn, the more you refine, and the more leverage you create.

Which brings us to a slightly uncomfortable but necessary idea:

Under-using AI is not efficiency. It is underperformance.

Why Token Consumption Is the New Metric

Token consumption may sound like a technical or billing concept, but it is actually a proxy for something much deeper:

  • How much intelligence are you leveraging?

  • How much experimentation are you doing?

  • How much iteration are you allowing?

  • How deeply is AI embedded in your workflow?

An engineer who uses AI occasionally is improving efficiency.
An engineer who uses AI extensively is redefining how work is done.

That difference compounds quickly.

But there is a second-order effect that is even more important.

Where does the value of that consumption go?

If all token consumption flows to external platforms, then while productivity improves internally, economic value accumulates externally. Over time, this creates a dependency loop—where capability increases, but control does not.

This is where the conversation shifts from productivity to strategy.

The Strategic Layer: Why Sovereign AI Matters

The idea of sovereign cloud and in-house models is often misunderstood as a defensive or political stance. In reality, it is an economic and architectural necessity in an AI-driven world.

When organizations build or participate in sovereign AI ecosystems:

  • Token economics can be partially internalized

  • Data remains within controlled boundaries

  • Models evolve with contextual relevance

  • Costs become predictable and optimizable

  • Most importantly, learning stays within the organization

Because every AI interaction is not just usage—it is training, refinement, and feedback.

If that loop sits outside, the intelligence compounds elsewhere.

If it sits inside, the advantage compounds internally.

This is the real reason why sovereign infrastructure is not optional in the long term.

What an AI First Strategy Actually Means

“AI First” has become a popular phrase, but in most organizations, it is still interpreted too narrowly. It is seen as tool adoption rather than operating model transformation.

A serious AI First strategy must be anchored in three core questions, and these must be asked in every department, not just technology teams:

1. How can we do our job better?

Not just faster, but with higher quality, deeper insight, and fewer errors. AI should improve decision-making, not just output generation.

2. How can we do it faster?

Speed matters, but only after clarity. AI should compress cycles—design to execution, query to insight, issue to resolution.

3. How can we serve our customers better?

This is the ultimate test. Personalization, responsiveness, and anticipation of needs should all improve measurably.

But beyond these, I would add two more dimensions that are often ignored:

4. How do we increase our “AI leverage per employee”?

This is where token consumption becomes meaningful. Every role should be re-evaluated not by output alone, but by how much AI amplification it is using.
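As a minimal sketch of what measuring this could look like: assuming a hypothetical usage log exported from whatever AI platform an organization uses (the log format, field names, and numbers below are all illustrative, not any real vendor's API), "AI leverage per employee" reduces to a simple aggregation of tokens consumed per person over a period.

```python
from collections import defaultdict

def ai_leverage_per_employee(usage_log):
    """Aggregate total token consumption per employee.

    usage_log: iterable of (employee_id, tokens) pairs -- a
    hypothetical export from the organization's AI platform.
    Returns a dict mapping employee_id to total tokens consumed.
    """
    totals = defaultdict(int)
    for employee_id, tokens in usage_log:
        totals[employee_id] += tokens
    return dict(totals)

# Tiny synthetic log: one heavy user, one occasional user
log = [("eng_01", 120_000), ("eng_02", 4_000), ("eng_01", 80_000)]
print(ai_leverage_per_employee(log))
```

The interesting part is not the arithmetic but the review that follows: comparing these totals across comparable roles surfaces exactly the gap the article describes between occasional and extensive use.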

5. How do we ensure the learning loop stays internal?

Using AI is not enough. Capturing the patterns, prompts, workflows, and insights generated through that usage is what creates long-term advantage.

The Behavioural Shift Organizations Must Address

There are two opposing behaviours emerging in the AI transition, both of which need correction.

The first is the ridicule of AI usage. Work that is visibly AI-assisted is often dismissed as less valuable. This mindset is dangerous. In a world where AI is foundational, the ability to use it effectively is a sign of capability, not weakness.

The second is the illusion of intelligence. Individuals presenting AI-generated output as their own thinking creates a false sense of competence. Organizations must encourage usage, but also enforce understanding.

AI should augment thinking, not replace it.

The Real Risk: Comfort

The biggest risk in this transition is not lack of technology.

It is comfort.

Organizations may feel they are progressing because they are “using AI.” But if that usage is shallow, sporadic, or constrained, they may be falling behind without realizing it.

This is exactly the kind of situation Grove warned about.

The danger is not visible decline.
The danger is invisible stagnation.

The Question That Matters

If you were to step back and look at your organization objectively, ask a simple question:

Are we consuming AI at a level that changes how we operate, or are we just using it enough to feel modern?

Because those two paths lead to very different outcomes.

Closing Thought

AI is not just a tool. It is a new layer of leverage.

And in this layer, advantage will not come from access alone, but from depth of usage and control of the underlying stack.

Those who recognize this early will not just improve productivity.
They will redefine where value is created.

The rest may continue to operate efficiently—
but within someone else’s system.