Saturday, March 28, 2026

AI First or Be Left Behind: The Real Question Is Not Adoption, It’s Consumption

There are moments in technology where the shift is not in what we use, but in how deeply we use it. Those moments are uncomfortable, because they force a different kind of question—not “Are we adopting this?” but “Are we using enough of it to stay relevant?”

I was reminded of this while revisiting two very different but connected ideas.

Andrew Grove once articulated a simple but unsettling principle: the leaders who survive are not the most confident ones, but the ones who sense a shift early and act before the evidence becomes obvious. That instinct—almost paranoia—is what forces change before it is too late.

Decades later, in a completely different context, Jensen Huang made a statement that at first sounds exaggerated, almost provocative. He suggested that if a highly paid engineer is not consuming a significant amount of AI tokens, it should raise concern. The comparison he draws is not subtle: working without AI in today’s world is similar to a chip designer refusing to use modern design tools.

Strip away the rhetoric, and what remains is a powerful signal.

The shift in AI is not about access.
It is about consumption intensity.

The Shift: From Having AI to Using AI at Scale

Most organizations today are already “using AI.”

Teams generate content. Developers use copilots. Analysts experiment with models. Presentations look sharper, code is written faster, emails sound more refined.

On the surface, adoption looks healthy.

But this is where a dangerous illusion begins.

Because the real shift is not:

Do you have AI?

It is:

How much AI are you actually consuming to amplify your work?

In earlier technology cycles, tools improved productivity linearly. In AI, usage directly correlates with capability. The more you use, the more you learn, the more you refine, and the more leverage you create.

Which brings us to a slightly uncomfortable but necessary idea:

Under-using AI is not efficiency. It is underperformance.

Why Token Consumption Is the New Metric


Token consumption may sound like a technical or billing concept, but it is actually a proxy for something much deeper:

  • How much intelligence are you leveraging?

  • How much experimentation are you doing?

  • How much iteration are you allowing?

  • How deeply is AI embedded in your workflow?

An engineer who uses AI occasionally is improving efficiency.
An engineer who uses AI extensively is redefining how work is done.

That difference compounds quickly.

But there is a second-order effect that is even more important.

Where does the value of that consumption go?

If all token consumption flows to external platforms, then while productivity improves internally, economic value accumulates externally. Over time, this creates a dependency loop—where capability increases, but control does not.

This is where the conversation shifts from productivity to strategy.

The Strategic Layer: Why Sovereign AI Matters

The idea of sovereign cloud and in-house models is often misunderstood as a defensive or political stance. In reality, it is an economic and architectural necessity in an AI-driven world.

When organizations build or participate in sovereign AI ecosystems:

  • Token economics can be partially internalized

  • Data remains within controlled boundaries

  • Models evolve with contextual relevance

  • Costs become predictable and optimizable

  • Most importantly, learning stays within the organization

Because every AI interaction is not just usage—it is training, refinement, and feedback.

If that loop sits outside, the intelligence compounds elsewhere.

If it sits inside, the advantage compounds internally.

This is the real reason why sovereign infrastructure is not optional in the long term.

What an AI First Strategy Actually Means

“AI First” has become a popular phrase, but in most organizations, it is still interpreted too narrowly. It is seen as tool adoption rather than operating model transformation.

A serious AI First strategy must be anchored in three core questions, and these must be asked in every department, not just technology teams:

1. How can we do our job better?

Not just faster, but with higher quality, deeper insight, and fewer errors. AI should improve decision-making, not just output generation.

2. How can we do it faster?

Speed matters, but only after clarity. AI should compress cycles—design to execution, query to insight, issue to resolution.

3. How can we serve our customers better?

This is the ultimate test. Personalization, responsiveness, and anticipation of needs should all improve measurably.

But beyond these, I would add two more dimensions that are often ignored:

4. How do we increase our “AI leverage per employee”?

This is where token consumption becomes meaningful. Every role should be re-evaluated not by output alone, but by how much AI amplification it is using.
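To make the metric concrete, here is a minimal sketch of how a team might tally token consumption per employee from a monthly usage export. The log format, field names, and thresholds are hypothetical illustrations, not tied to any particular AI platform or billing API:

```python
from collections import defaultdict

def ai_leverage_per_employee(usage_log):
    """Aggregate total AI tokens consumed per employee.

    usage_log: iterable of (employee_id, tokens_consumed) records,
    e.g. exported from an internal AI gateway (hypothetical format).
    Returns a dict mapping employee_id -> total tokens for the period.
    """
    totals = defaultdict(int)
    for employee_id, tokens in usage_log:
        totals[employee_id] += tokens
    return dict(totals)

# Hypothetical monthly export: two engineers, very different usage depth
log = [
    ("eng-01", 1_200_000),
    ("eng-02", 45_000),
    ("eng-01", 800_000),
]
print(ai_leverage_per_employee(log))
# {'eng-01': 2000000, 'eng-02': 45000}
```

The point of such a tally is not surveillance but visibility: once consumption is measurable per role, the conversation about amplification can move from anecdote to evidence.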

5. How do we ensure the learning loop stays internal?

Using AI is not enough. Capturing the patterns, prompts, workflows, and insights generated through that usage is what creates long-term advantage.

The Behavioural Shift Organizations Must Address

There are two opposing behaviours emerging in the AI transition, both of which need correction.

The first is the ridicule of AI usage. Work that is visibly AI-assisted is often dismissed as less valuable. This mindset is dangerous. In a world where AI is foundational, the ability to use it effectively is a sign of capability, not weakness.

The second is the illusion of intelligence. Individuals presenting AI-generated output as their own thinking creates a false sense of competence. Organizations must encourage usage, but also enforce understanding.

AI should augment thinking, not replace it.

The Real Risk: Comfort

The biggest risk in this transition is not lack of technology.

It is comfort.

Organizations may feel they are progressing because they are “using AI.” But if that usage is shallow, sporadic, or constrained, they may be falling behind without realizing it.

This is exactly the kind of situation Grove warned about.

The danger is not visible decline.
The danger is invisible stagnation.

The Question That Matters

If you were to step back and look at your organization objectively, ask a simple question:

Are we consuming AI at a level that changes how we operate, or are we just using it enough to feel modern?

Because those two paths lead to very different outcomes.

Closing Thought

AI is not just a tool. It is a new layer of leverage.

And in this layer, advantage will not come from access alone, but from depth of usage and control of the underlying stack.

Those who recognize this early will not just improve productivity.
They will redefine where value is created.

The rest may continue to operate efficiently—
but within someone else’s system.


Tuesday, February 24, 2026

India AI: Beyond the Applause, Where Do We Really Stand

The applause was deafening.

Screens glowed with GPU counts. Words like Sovereign AI, AI Factories, and National Scale drifted confidently across the panels. There was a palpable pride in the room—and rightly so. But as I sat there listening, I found myself thinking less about what was being celebrated and more about what remains unresolved.

Applause is easy. Execution is not.

Somewhere between the ambition of the stage and the reality of the shop floor lies the question that actually matters: Beyond the standing ovation, where do we actually stand?


Momentum is Real. Direction is Unclear.


Let’s be clear: India is not "late" to AI. We possess what most nations envy: Digital Public Infrastructure (DPI) at a population scale, immense talent, and clear political intent.

We are building GPU clusters. We are talking about indigenous compute. As a staunch believer in the Make in India hardware ecosystem, I see this as more than just industry—it is strategic insulation. Compute sovereignty isn't a luxury; it’s the floor. Building infrastructure locally isn't symbolism; it’s the only way to control our technological destiny.


Are We Building Engines—or Just Renting Them?

Much of the current excitement circles around hardware:

  • Who has the GPUs?

  • How many petaflops?

  • Whose cluster is bigger?

These are the right metrics for a start, but the wrong metrics for leadership. True sovereignty isn't just hosting someone else’s model on a local rack. It’s about owning the entire stack: orchestration, energy resilience, and optimization. Make in India cannot stop at the rack. It must extend to the software stacks and security layers that make the hardware hum.


From Model Size to Market Maturity

At summits, the gravity always pulls toward model size. Bigger. Faster. More languages. But the question I keep asking is: How many Indian enterprises are truly AI-ready?

Not piloting. Not "exploring." Deploying.

  • How many banks have AI fully baked into credit risk workflows?

  • How many factories rely on predictive maintenance every single shift?

  • How many government departments have operational AI reducing citizen friction?

AI only becomes meaningful when it "disappears" into the process. We are world-class at proofs of concept (PoCs). We must now become world-class at production.


The Real Advantage We Don’t Fully Use

India’s greatest asset isn’t a specific model; it’s our data footprint. UPI transactions, multilingual nuances, and diverse behavioral data are our "oil."

But scale without structure is just noise. Data governance, curation, and standardization are less glamorous than unboxing a new GPU, but they determine whether our AI is intelligent by design or accidental by data.


Sovereign AI: A Word We Must Use Carefully

Sovereignty isn't isolation. It isn’t replacing global brands with domestic slogans. It is owning enough of the stack—Compute, Data, Security, and Deployment—to avoid strategic dependency. Resilience is built in the lab and the factory, not declared at a press conference.


Talent: Scale vs. Depth

We produce extraordinary engineers, but frontier AI demands more than engineering volume. It requires "Patient Capital"—investors willing to fund long-horizon research and deep model experimentation.

The question for our ecosystem is simple: Are we building the operators of AI, or the originators? Both are needed, but only one writes the global rules.


So, Where Do We Stand?

India is not behind, but we are not yet ahead. We are at a pivot.

The next decade will not reward announcements or headline numbers. It will reward:

  • Depth over Display

  • Deployment over Demos

  • Ecosystems over Isolated Wins


The applause at the summit was deserved. But applause is a moment. The real work is quiet, technical, and relentless. And it starts now.


A Personal Reflection

Walking out of the summit, I felt two things at once: Optimism, because India is finally serious, and Responsibility, because seriousness must translate into execution.

The question is no longer whether India will participate. The question is whether we will build foundations strong enough to shape the rules—not just follow them.