When the CEO of a Trillion Dollar Company Says Stop Asking About ROI

The wine was flowing. Jensen Huang had just wrapped up a grueling two weeks of travel, bouncing between Taiwan, Houston, and finally the Cisco AI Summit. By his own admission, he was four or five drinks deep when he sat down with Cisco CEO Chuck Robbins for what was supposed to be a polished fireside chat.

What happened instead was something far more valuable: an unfiltered truth session about the future of business, technology, and human work. No corporate speak. No carefully rehearsed talking points. Just the blunt, slightly tipsy wisdom of someone who's built one of the world's most valuable companies.

And his message? Most executives are thinking about AI completely wrong.

The ROI Question That Shouldn't Be Asked

When Robbins asked what advice Huang would give companies starting their AI journey, the Nvidia CEO didn't hesitate. He dismissed one of business's most sacred rituals: the return on investment calculation.

"I get questions about ROI, and I wouldn't go there," Huang said plainly. "With all technology deployments in the beginning, it's hard to put into a spreadsheet the ROI of a new tool."

This isn't the kind of advice you typically hear from someone running a company with a market cap hovering around four trillion dollars. Wall Street loves spreadsheets. Boards demand quantifiable metrics. CFOs worship at the altar of ROI.

But Huang wasn't speaking as a finance person. He was speaking as someone who's watched innovation die in conference rooms where every experiment needs to justify its existence before it can breathe.

Instead of demanding proof of value upfront, Huang advocates for what he calls letting "a thousand flowers bloom." At Nvidia, the number of AI projects running across the company is, in his words, "out of control, and it's great."

Notice what he just said there. Out of control. And great. In the same breath.

The Control Paradox

For most executives, that phrase would trigger alarm bells. Out of control means risk. It means chaos. It means things happening without approval, without oversight, without someone at the top knowing every detail.

Huang sees it differently. He argues that if you want to be in control, you should first seek therapy, because it's an illusion anyway. Innovation doesn't happen in tidy, controlled environments. It happens in the mess.

"If you want your company to succeed, you can't control it," he explained. Instead, leaders need to shift from controlling to influencing. From demanding permission to encouraging exploration.

The logic is actually quite sound from a risk management perspective. When you're facing a technological shift this profound, nobody knows which tools or approaches will win. Putting all your resources behind one bet is far riskier than spreading them across multiple experiments.

Yes, it makes for a messy garden. But it also means you're less likely to miss the one flower that could have transformed your business.

Parenting Principles Meet Corporate Strategy

Perhaps the most striking part of Huang's philosophy is how he compares corporate innovation to raising children. When someone on his team wants to try a new AI tool, his first response is "yes," followed by "why?"

Not the other way around.

He doesn't make them build a business case first. He doesn't demand they prove it will generate revenue or cut costs. He gives permission first, then explores the reasoning together.

"We never do that at home, but we do it at work," Huang observed. "If your kid wants to try something, the answer should be 'yes,' and then ask why. You don't say, 'Prove to me that this will bring economic success or happiness in the future, or I won't let you do it.'"

It's a simple reframe, but it changes everything. Most companies operate with a default of "no unless you can prove otherwise." Huang flips it to "yes unless there's a compelling reason not to."

The difference matters because it shifts the psychological dynamic. In a culture of default permission, people experiment more freely. They take creative risks. They try things that might not work but could lead somewhere unexpected.

In a culture of default skepticism, people only bring forward ideas they're confident will succeed. Which means the really interesting ideas, the ones that might fail but could also change everything, never see the light of day.

Coding Is Just Typing Now

Buried in Huang's broader message was a statement that probably made a lot of software engineers uncomfortable: coding itself is becoming a commodity.

Not that engineers aren't valuable. But the specific skill of writing code, of translating instructions into syntax, is increasingly something AI can handle. What matters now is different.

Domain expertise. Understanding customer problems. Knowing which questions to ask. These are the skills that separate the wheat from the chaff in an AI-powered world.

Huang emphasized this point about questions being intellectual property. Nvidia builds AI infrastructure on premises specifically to protect not their answers, but their questions. "What I'm thinking about are my questions," he explained. "The answers are a commodity."

Think about what that means. In a world where AI can generate solutions to well-defined problems almost instantly, the entire competitive advantage shifts to knowing which problems are worth solving. To asking questions nobody else is asking.

You can hire a brilliant coder who doesn't understand your business, and they'll build you perfect solutions to the wrong problems. Or you can work with someone who deeply understands your domain and can articulate the right questions, even if they need AI to help formulate the answers.

Get Your Hands Dirty

Despite advocating for experimental freedom, Huang was adamant about one thing: companies need to actually build stuff themselves, not just rent everything from the cloud.

He used a car analogy. Sure, you could take Uber everywhere. But if you want to really understand how cars work, you need to pop the hood. Change the oil. Get familiar with the components.

"Build something," he urged. "You might discover you're actually insanely good at it. You might discover that you need that skill."

This tactile understanding matters for a few reasons. First, it helps with data privacy and security. When you're running everything through someone else's infrastructure, you're trusting them with potentially sensitive information.

Second, and perhaps more importantly, building your own infrastructure forces you to understand how the pieces fit together. That understanding becomes crucial when you're trying to solve unique problems or push the technology in new directions.

It's the difference between being a power user and being someone who can actually innovate with the technology.

AI in the Loop, Not Humans in the Loop

The standard conversation about AI ethics often centers on keeping "humans in the loop." The idea is that AI should assist humans, not replace them, and that humans should maintain oversight and final decision-making authority.

Huang flipped this on its head. His vision isn't humans in the loop. It's AI in the loop.

What does that mean? Instead of AI being an occasional tool that humans consult, AI becomes integrated into every process, capturing what Huang calls the "life experience" of employees. Every decision, every problem solved, every insight gained becomes part of the company's permanent intellectual property.

This is a fundamentally different model. Rather than keeping AI at arm's length, you weave it into the fabric of how work happens. The AI learns from how your best people think, what questions they ask, how they approach problems.

Over time, that accumulated wisdom becomes an asset that compounds. New employees can learn from it. The organization gets smarter, not just as individuals improve, but as the system itself captures and amplifies expertise.

The Stakes Are Higher Than You Think

Throughout the conversation, Huang kept returning to urgency. Not panic, but genuine urgency. This isn't optional anymore.

"You're not going to lose your job to AI," he said at one point. "You're going to lose your job to someone who uses AI."

The threat isn't the technology itself. It's the competitor who adopts it faster, experiments more boldly, and figures out how to augment their team's capabilities while you're still calculating ROI in spreadsheets.

Huang was clear that companies don't need to be first. Being on the bleeding edge carries its own risks. But they absolutely cannot be last. The window for catching up is closing faster than most executives realize.

This is the first fundamental reinvention of computing in 60 years, according to Huang. We're moving from explicit programming, where humans write detailed instructions, to implicit programming, where humans define intent and AI figures out implementation.

That's not a minor upgrade. That's a complete paradigm shift in how intelligence and computation work.

What This Actually Means for Leaders

So what do you do with all this? How do you actually apply Huang's wine-fueled wisdom?

Start by questioning your defaults. When someone wants to experiment with a new AI tool, what's your instinctive response? If it's "show me the business case first," you might need to recalibrate.

Look at where your company is genuinely adding value. Not the peripheral stuff, but the core work that defines your business. Then ask: what would happen if we gave our people infinite computing power and near-instant access to intelligence for tackling those problems?

Stop trying to save minutes on small tasks. That's optimization thinking. Start asking what previously impossible problems you could solve if constraints weren't an issue. That's abundance thinking.

Give your teams permission to be messy. Yes, this feels uncomfortable. Yes, it will create situations you don't fully control. But the alternative, attempting to predict and plan for every contingency in a rapidly shifting technological landscape, is actually riskier.

Build something yourself. Don't just buy AI services and hope for magic. Get your hands dirty. Understand the components. You don't need to become a machine learning expert, but you should know enough to ask informed questions and spot opportunities others might miss.

And perhaps most importantly, remember that the questions matter more than the answers. In a world where AI can generate solutions quickly, your competitive advantage is knowing which problems are worth solving and which questions are worth asking.

The Uncomfortable Truth

Here's what makes Huang's message so powerful and so unsettling: he's asking leaders to do things that feel deeply wrong according to traditional business wisdom.

Don't demand ROI. Don't stay in control. Don't plan everything carefully. Let chaos bloom.

This goes against decades of management training. It violates the basic principles most executives were taught. It sounds reckless.

But Huang isn't some startup founder with nothing to lose. He's running a company that's become indispensable to the AI revolution, with a market value that rivals small countries. And he's saying that the old playbook, the one that got us here, won't get us where we're going.

The companies that thrive won't be the ones with the tightest controls or the most detailed ROI spreadsheets. They'll be the ones who create environments where innovation can emerge from unexpected places. Where people feel empowered to experiment without needing permission for every step.

They'll be the ones who understand that in a time of fundamental change, the real risk isn't trying things that might fail. It's failing to try enough things.

That's a hard message to swallow, especially if you've built a career on careful planning and risk mitigation. But it's coming from someone who's seen this pattern before, who's built a company that's thrived through multiple technological shifts.

And he was just buzzed enough to say it plainly, without the usual corporate softening.

Maybe we should listen.
