Here's a fact that sounds made up but isn't: the CEO of AMD and the CEO of Nvidia are cousins. Not distant, need-a-genealogist-to-prove-it cousins. Actual, share-grandparents, saw-each-other-at-family-reunions cousins. Lisa Su runs AMD, a $250 billion semiconductor company that spent most of its existence as the scrappy underdog everyone counted out. Jensen Huang runs Nvidia, a $4 trillion colossus that has become the most important company in the AI revolution and, depending on the day, the most valuable company on planet Earth.
One of them built a technological empire so dominant that every major tech company on the planet is begging to buy its chips. The other took over a company so broken it was worth less than what Facebook paid for WhatsApp, and through a combination of brilliant engineering, ruthless strategy, and what I can only describe as weaponized patience, turned it into the only credible threat to her cousin's throne.
If you invested $10,000 in AMD in 2014 when Lisa Su became CEO, you'd be sitting on roughly $600,000 today. That's a 60x return. Not from some pre-IPO lottery ticket or a meme stock pump, but from a CEO who systematically dismantled every assumption the industry had about what AMD could be.
And yet the story isn't over. It might not even be in the third act. Because the AI chip market — the market Lisa Su is betting AMD's entire future on — is still in its infancy. And what happens next will determine whether AMD becomes a trillion-dollar company or whether Jensen Huang's empire is simply too massive, too entrenched, and too brilliantly constructed for even family to compete.
Let me walk you through how this happened, why the numbers are both extraordinary and terrifying, and why the answer to "who wins?" might not be "either/or" but something far more interesting.
The PhD Who Became a Corporate Assassin
Lisa Su's origin story is one of those quiet, academic beginnings that you only recognize as the first chapter of an epic in hindsight. Born in Tainan, Taiwan, in 1969, she moved to New York as a child with parents who emphasized education with the kind of intensity that leaves no room for debate. Her father was a statistician. Her mother, an entrepreneur. By the time she was a teenager, she was already taking things apart — calculators, clock radios, whatever she could get her hands on — with the specific goal of understanding how they worked.
She went to MIT. Obviously. And while she was there, pursuing a PhD in electrical engineering, she published a research paper that seemed, at the time, like the kind of thing only other electrical engineers would care about. The paper dealt with silicon-on-insulator (SOI) technology — essentially, a way to make transistors faster and more power-efficient by building them on a thin layer of insulating material instead of directly on bulk silicon.
That "obscure" research became one of the core manufacturing techniques powering the semiconductor industry for the next two decades. An enormous share of the chips in phones, game consoles, and servers built over those twenty years owes at least a partial debt to the work Lisa Su did as a graduate student.
But here's the thing about Lisa Su that separates her from the thousands of brilliant PhDs who publish important papers and then disappear into comfortable corporate research labs: she wanted to run things.
After MIT, she joined IBM — not as a researcher locked away from business decisions, but as someone who systematically maneuvered herself toward the intersection of technology and power. Her breakthrough moment came when she was appointed technical assistant to Lou Gerstner, IBM's legendary CEO who is widely credited with saving the company from irrelevance in the 1990s.
Working directly with Gerstner taught Su a principle she'd repeat throughout her career: "The semiconductor industry is about playing the long game. The decisions you make today determine your fate many years down the road." It's a simple idea. It's also the idea that, a decade later, would convince her to make the most counter-intuitive bet in AMD's history.
Think about what that role meant. At 30-something, Lisa Su was watching one of the greatest corporate turnarounds in history from the CEO's office. She wasn't reading about it in Harvard Business Review. She was in the room where the decisions were made, absorbing the brutal calculus of when to cut, when to invest, and when to bet the company on a direction nobody else believed in.
She spent over a decade at IBM, then moved to Freescale Semiconductor as SVP and General Manager. Freescale was a spinoff of Motorola's semiconductor division — exactly the kind of messy, mid-tier chip company where an executive learns how to operate with limited resources, imperfect options, and the constant threat of being swallowed by larger competitors.
Every move was preparation. MIT gave her the technical foundation. IBM gave her the playbook for corporate warfare. Freescale gave her the scar tissue. By the time AMD came calling in 2012, Lisa Su had spent twenty years building the exact combination of skills, instincts, and tolerance for pain that a catastrophically broken semiconductor company would require.
She didn't inherit a turnaround. She inherited a hospice patient. And what she did next was either visionary leadership or corporate insanity. The line between those two things, it turns out, is about eight years.
$2 Billion vs. $180 Billion
Let's set the scene, because it's important to understand just how bad things were when Lisa Su took over.
The year is 2014. AMD's market capitalization is roughly $2 billion. To put that in perspective, WhatsApp was acquired by Facebook that same year for $19 billion. Instagram had been bought two years earlier for $1 billion, and it was a photo-sharing app with 13 employees. AMD — a company with decades of history, thousands of employees, and fundamental patents in x86 processor architecture — was worth about the same as a couple of well-funded startups.
Meanwhile, Intel — AMD's eternal rival, the company that essentially invented the modern microprocessor — was sitting at a market cap of $180 billion. That's not a competitive gap. That's the Grand Canyon. Intel was 90 times larger than AMD, had virtually unlimited R&D budget, dominated the server market with over 99% share, and had its own world-class fabrication facilities. AMD was buying foundry time from whoever would sell it to them.
Su's first moves were brutal. She cut 7% of the workforce. She renegotiated supplier contracts. She made desperate but strategically sound deals with Sony and Microsoft to build custom chips for the PlayStation 4 and Xbox One gaming consoles. These console deals weren't glamorous — the margins were thin, and chip nerds sneered at the idea of AMD becoming a console component supplier — but they generated critical cash flow at a moment when the company needed every dollar to survive.
But cost-cutting and console deals were tourniquet medicine. They stopped the bleeding, but they didn't cure the disease. The disease was that AMD's products, particularly in the server and data center market, were fundamentally uncompetitive. Intel's Xeon processors were so dominant that data center managers didn't even consider AMD an option. Choosing AMD for your server infrastructure in 2014 was like bringing a knife to a tank battle — technically you're armed, but nobody's worried about you.
And this is where Lisa Su made the decision that, in retrospect, separated her from every other turnaround CEO who's ever been given a dying company and a prayer.
She stopped selling AMD's existing server chips entirely. The one product line that was still generating some server revenue — the Opteron series — was allowed to fade away. Instead, Su redirected every available engineering resource toward designing an entirely new processor architecture from a blank sheet of paper. She was betting that AMD's only hope wasn't to iterate on what it had, but to invent something the industry had never seen.
Think about how insane that sounds. You're the CEO of a company that's hanging on by its fingernails. Your stock is trading under $3. Analysts are openly speculating about bankruptcy. And your big strategic move is to stop selling the products that are keeping you alive so you can build something that won't be ready for years?
That's not normal CEO behavior. Normal CEOs cut costs, protect existing revenue, and pray for a macroeconomic tailwind. Lisa Su looked at a $2 billion company fighting a $180 billion competitor and decided the only rational response was to start over from scratch. She was playing Gerstner's long game — and the clock hadn't even started yet.
Lego Blocks vs. Monoliths
To understand what Lisa Su built, you need to understand the problem she was solving. And the problem is actually pretty simple once you strip away the jargon.
For decades, every major processor company on Earth — Intel, AMD, everyone — built their chips the same way. They'd take a single, massive piece of silicon and cram as many transistors as physically possible onto it. One chip. One giant die. The bigger and more powerful you wanted the processor, the bigger the die had to be. This is called monolithic design, and it had been the industry standard since before most of us were born.
The problem with monolithic design is the same problem with building a skyscraper out of a single poured block of concrete: it works, sort of, until it doesn't. As chips got more complex and the number of transistors climbed into the billions, the manufacturing yield — the percentage of chips that come off the production line actually working — started to crater. One tiny defect anywhere on that massive die, and the entire chip goes in the garbage. The bigger the chip, the more likely a defect, and the more expensive each failure.
Intel solved this problem by spending billions on the most advanced fabrication facilities in the world. If your process is good enough, you can brute-force your way to acceptable yields. It worked for a while. And then physics started to disagree.
Lisa Su and her engineering team, led by the brilliant chip architect Mike Clark, took a fundamentally different approach. Instead of building one giant chip, they would build multiple smaller chips and connect them together.
*[Diagram: monolithic design vs. chiplet design — one large die versus several smaller interconnected dies]*
Return to the skyscraper. The old way: you pour one enormous block of concrete and hope it's perfect. The new way: you prefabricate modular rooms in a factory — each one tested, each one verified — and then you snap them together like architectural Lego blocks. A defective room? Replace it. You don't tear down the building.
AMD called this approach chiplets. The compute cores lived on one die. The I/O controllers lived on another. The memory interfaces on a third. Each chiplet could be manufactured at whatever process node made the most sense for its function — cutting-edge 7nm for the cores, cheaper 14nm for the I/O. You got the performance of a massive chip with the manufacturing efficiency of much smaller ones.
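The yield argument behind chiplets can be sketched with the classic Poisson defect model, where the probability a die has zero killer defects is exp(−D·A). The defect density and die areas below are illustrative round numbers, not AMD's or TSMC's actual figures:

```python
import math

def die_yield(area_mm2: float, defect_density_per_mm2: float) -> float:
    """Poisson yield model: P(zero defects on a die) = exp(-D * A)."""
    return math.exp(-defect_density_per_mm2 * area_mm2)

# Illustrative numbers only -- not actual foundry data.
D = 0.001        # defects per mm^2 (i.e., 0.1 per cm^2)
big_die = 800    # one monolithic 800 mm^2 die
chiplet = 100    # eight 100 mm^2 chiplets covering the same total silicon

mono_yield = die_yield(big_die, D)      # ~0.45: over half the big dies fail
chiplet_yield = die_yield(chiplet, D)   # ~0.90: small dies mostly survive

# Cost per *good* die scales roughly with 1/yield, so even before
# packaging costs, the chiplet approach wastes far less of each wafer --
# and failed chiplets are discarded individually, not as an 800 mm^2 unit.
print(f"monolithic yield:  {mono_yield:.2f}")
print(f"per-chiplet yield: {chiplet_yield:.2f}")
```

The same total silicon area, split into pieces, throws away dramatically less working material per defect — which is the whole manufacturing case for chiplets in one inequality.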
The semiconductor industry thought this was somewhere between ambitious and delusional. The conventional wisdom was that the performance penalty of connecting separate dies would negate any manufacturing advantage. Intel, in particular, was publicly dismissive. They called it a workaround for companies that couldn't afford to build real chips.
Then AMD launched the Zen architecture in 2017.
Zen wasn't just competitive with Intel. It was better. Clock for clock, the first Zen chips traded blows with Intel's best, and subsequent generations — Zen 2, Zen 3, Zen 4 — widened the gap further. Server customers who hadn't considered AMD in a decade suddenly had no choice but to evaluate them. The performance was there. The efficiency was there. And the cost per core was dramatically lower.
By 2023, virtually every major processor company in the world was adopting chiplet-based design. Intel launched its own chiplet architecture. Apple's M-series chips use a form of heterogeneous integration. Even Nvidia's next-generation designs incorporate chiplet concepts. The approach that the industry called a coping mechanism for a second-rate company became the defining manufacturing innovation of the decade.
Lisa Su didn't just save AMD with chiplets. She forced the entire semiconductor industry to rethink how processors are built. The woman running a $2 billion company with a $403 million annual loss had, in the space of three years, invented the future of chip design.
And she was just getting started.
53 Years to Catch a Train
In 2022, something happened that hadn't happened in 53 years of AMD's existence: AMD passed Intel in market capitalization. Let that number sink in. Fifty-three years. AMD was founded in 1969, the same year humans walked on the moon. For more than half a century, through oil crises, recessions, the dot-com boom, the dot-com bust, the iPhone, the cloud revolution, and a global pandemic — through all of it, Intel was bigger. Always bigger. Incomparably bigger.
And then, one Wednesday in 2022, it wasn't.
The market cap crossover wasn't some fluke driven by Intel having a bad day (though Intel was, in fact, having a very bad decade). It was the mathematical expression of everything Lisa Su had built since 2014. Zen chips were eating Intel's lunch in servers. EPYC processors were winning data center contracts that Intel had held for a generation. AMD's revenue had roughly quadrupled in Su's tenure, while margins expanded from survival-level to genuinely healthy.
But let's be honest about the other side of this equation: Intel imploded. Under a series of CEOs who seemed allergic to urgency, Intel made a catastrophic bet on keeping its fabrication technology in-house while the rest of the industry moved to TSMC. When Intel's 10nm process (later relabeled "Intel 7" in a branding move that fooled precisely nobody) ran years behind schedule, the company found itself selling processors manufactured on technology that was a generation behind AMD's.
| Metric | AMD (2014) | AMD (2022) | Intel (2022) | Vibe |
|---|---|---|---|---|
| Market Cap | $2B | $150B+ | ~$120B | Unthinkable |
| Server Market Share | <1% | ~24% | ~76% | Still climbing |
| Revenue | $5.5B | $23.6B | $63B | 4x growth |
| Process Node | 28nm (GlobalFoundries) | 5nm (TSMC) | Intel 7 (10nm) | Tables turned |
| CEO Tenure | Day 1 | 8 years | Pat Gelsinger (1 yr) | Continuity wins |
Lisa Su had pulled off what many considered impossible. She'd taken a company that was a rounding error on Intel's balance sheet and turned it into Intel's worst nightmare. The woman who learned long-game thinking from Lou Gerstner had executed a strategy that took eight years to materialize, in an industry where most CEOs are thinking about the next quarter.
It was, by any objective measure, one of the greatest CEO performances in the history of the technology industry. A case study that will be taught at business schools for decades.
And she had approximately zero seconds to celebrate. Because the biggest threat wasn't coming from Intel anymore. It was coming from someone she'd known since childhood.
Enter the Cousin
Jensen Huang wasn't supposed to be the final boss. For most of its history, Nvidia was a graphics card company. A very successful graphics card company, sure — the kind of company that gamers cared deeply about and Wall Street largely ignored. Nvidia made GPUs that rendered explosions in Call of Duty and shadows in Minecraft. Important for a $200 billion entertainment industry, but not exactly the stuff of global economic transformation.
Then, starting around 2012, a funny thing happened. Researchers in artificial intelligence discovered that the same chips designed to render millions of pixels per second were also absurdly good at the kind of parallel mathematical computations that power machine learning. A GPU could process thousands of matrix multiplications simultaneously — exactly the operation that neural networks need to learn, improve, and eventually do things like write poetry, generate images, and pass the bar exam.
Jensen Huang saw this coming before almost anyone else in the industry. He pivoted Nvidia from a gaming company to an AI infrastructure company with the kind of strategic foresight that makes you wonder if the man has a time machine in his jacket (the leather jacket, obviously — it's always the leather jacket). By the time ChatGPT exploded into public consciousness in late 2022, Nvidia was the only company on Earth with the hardware, software, and manufacturing relationships to supply the suddenly insatiable demand for AI compute.
The result was the most spectacular stock price increase in the history of large-cap American equities. Nvidia's market cap went from roughly $350 billion in early 2023 to over $4 trillion by mid-2025. Jensen Huang, who had spent decades building what he called "accelerated computing" while Wall Street analysts politely suggested he focus on gaming, was suddenly running the most important company in the most important technology revolution since the internet.
And here's where the family thing stops being a fun factoid and starts being genuinely meaningful. Lisa Su and Jensen Huang are related. They share grandparents. They both grew up in families that emigrated from Taiwan and pushed their children toward technical excellence. They are, by any definition, products of the same culture, the same values, the same relentless immigrant work ethic that has produced an outsized share of Silicon Valley's leadership class.
They're also now leading the two companies that will likely determine the architecture of the AI era. Nvidia has a 16:1 advantage in market cap and a 20:1 advantage in AI chip revenue. AMD is, once again, the tiny competitor staring up at a seemingly unassailable giant.
If you're experiencing déjà vu, you should be. Because this is exactly where Lisa Su was in 2014. Different opponent, same impossible odds. The question is whether lightning can strike twice — or whether Jensen Huang has built something so dominant that even his cousin's proven genius can't crack it.
To answer that, we need to look at the numbers. All of them.
The Numbers Don't Lie (But They Don't Tell the Whole Truth)
Let me put AMD's recent financial performance in terms that are impossible to misunderstand: the company's AI GPU revenue went from $100 million in 2023 to $5 billion in 2024. That is a 50x increase in twelve months. I've been writing about companies for a long time, and I can count on one hand the number of times I've seen a major corporation achieve 50x growth in any product category in a single year. This isn't startup math. This is a $25 billion revenue company growing one of its divisions at a rate that would make a Series A founder blush.
The broader data center segment — which includes both traditional server CPUs (where AMD continues to take share from Intel) and the newer AI GPU business — grew 94% year-over-year. This division now accounts for roughly half of AMD's total revenue and 58% of operating profits. AMD is no longer a PC chip company that dabbles in data centers. It is, functionally, an AI and data center company that also sells PC chips.
The growth trajectory continued into 2025. Q1 revenues hit $7.44 billion, up 36% year-over-year, marking the fourth consecutive quarter of accelerating growth. Not decelerating. Not plateauing. Accelerating. Each quarter growing faster than the last, a pattern that screams either genuine product-market fit or an unsustainable bubble, depending on how much caffeine you've consumed while reading this.
On the investment side, AMD poured $6.5 billion into R&D last year. That's a staggering number for a company of AMD's size — roughly 26% of total revenue devoted to building the next generation of products. For context, Intel spent about 28% and Nvidia about 18%. Su is spending at nearly the same R&D intensity as Intel while growing five times faster. She's playing the long game again, trading short-term profits for the chance to be competitive in a market that's growing faster than any market in technology history.
Now let's talk about the hardware, because this is where it gets interesting.
| Spec | AMD MI300X | Nvidia H100 | AMD MI350 | Nvidia Blackwell |
|---|---|---|---|---|
| Compute | 1,307 TFLOPS | 1,979 TFLOPS | ~35x inference vs. MI300X (claimed) | ~30x inference vs. H100 (claimed) |
| Memory (HBM) | 192 GB | 80 GB | 288 GB (est.) | 192 GB |
| Memory Bandwidth | 5.3 TB/s | 3.35 TB/s | TBD | 8 TB/s |
| Architecture | CDNA 3 (chiplets) | Hopper (monolithic) | CDNA 4 | Blackwell |
| Key Advantage | Memory capacity | Raw compute | Next-gen memory | Full-stack ecosystem |
AMD's current flagship AI chip, the MI300X, is an interesting product. It has less raw compute power than Nvidia's H100 — about 1,300 teraflops versus 2,000. Think of it as the difference between a Ferrari and a cargo truck. The Ferrari is faster on a straightaway, but the cargo truck carries more. The MI300X has 192 GB of high-bandwidth memory versus the H100's 80 GB — more than double. For AI workloads that require loading enormous models into memory (and increasingly, most important AI workloads do), that memory advantage is meaningful.
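Why the memory capacity matters can be shown with back-of-the-envelope arithmetic: a model's weights at 16-bit precision take roughly 2 bytes per parameter (this sketch deliberately ignores KV cache and activations, which add substantially more in practice):

```python
# Rough memory-footprint sketch; the 2-bytes-per-parameter figure assumes
# FP16/BF16 weights and ignores KV cache, activations, and framework overhead.
def weights_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """Approximate size of a model's weights alone, in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

H100_GB, MI300X_GB = 80, 192   # per-GPU HBM capacity

for params in (13, 70):        # e.g. a 13B and a 70B parameter model
    need = weights_gb(params)
    print(f"{params}B model: ~{need:.0f} GB of weights | "
          f"fits one H100: {need <= H100_GB} | "
          f"fits one MI300X: {need <= MI300X_GB}")
```

A 70B-parameter model's weights alone come to ~140 GB — too big for a single 80 GB H100, but comfortably inside one 192 GB MI300X. Fewer GPUs per model means less inter-GPU communication, which is exactly the selling point AMD pitches to inference customers.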
Neither company is standing still. Nvidia's upcoming Blackwell architecture promises 30x faster inference performance. AMD's MI350 counters with a claimed 35x improvement over the MI300X. This is a classic hardware arms race, each company leapfrogging the other with every generation, each generation obsoleting the last within 18 months.
And then there's the valuation. AMD trades at roughly 9 times revenue. Nvidia trades at 26 times revenue. That means AMD is valued at roughly a 65% discount to Nvidia on a revenue-multiple basis. If you believe AMD can continue growing its AI business at anything close to the current rate, that discount is either a screaming opportunity or a market correctly pricing in the enormous risks we're about to discuss.
Because the numbers, as compelling as they are, don't tell you the most important part of the story.
The Moat That Code Built
Here's the uncomfortable truth that AMD bulls need to sit with: Nvidia's biggest advantage isn't hardware. It's software.
The word is CUDA. It stands for Compute Unified Device Architecture, and it's a programming platform that Nvidia released in 2006 — almost two decades ago — that allows developers to write software that runs on Nvidia GPUs. CUDA is to Nvidia what iOS is to Apple: the thing that keeps people locked in, the thing that makes switching costs astronomical, the thing that turns a hardware product into an ecosystem.
Here's why CUDA matters so much. Over the past 18 years, hundreds of thousands of AI researchers, data scientists, and software engineers have invested thousands of hours learning to write code in CUDA. The major AI frameworks — PyTorch, TensorFlow, JAX — were all built with CUDA optimization as a primary design constraint. The entire toolchain of modern AI development, from training a model to deploying it in production, has been built on the assumption that you're running Nvidia hardware.
AMD has its own equivalent called ROCm (Radeon Open Compute). ROCm is open-source, which in theory makes it more flexible and transparent. In practice, it's years behind CUDA in maturity, documentation, community support, and the sheer breadth of libraries and pre-optimized models available. Switching from CUDA to ROCm doesn't just mean learning a new framework. It means rewriting significant portions of your AI infrastructure, retesting everything, and accepting the risk that obscure bugs and performance issues will surface in production.
Most AI developers won't do this. Not because they're lazy. Because the cost of switching is enormous and the benefit is uncertain. Why would you rewrite a working codebase to save some money on hardware when a bug in production could cost your company millions? This is the chicken-and-egg problem that keeps AMD's software team up at night: developers won't switch to ROCm without better tools and broader support, but AMD can't build better tools and broader support without more developers using the platform.
And then there's the manufacturing bottleneck. Both AMD and Nvidia rely on TSMC — Taiwan Semiconductor Manufacturing Company — to fabricate their most advanced chips. But Nvidia's massive revenue advantage translates directly into massive purchasing power. Nvidia has locked up a disproportionate share of TSMC's cutting-edge capacity for years to come. Even if AMD designs a chip that's competitive on paper, there's a real question about whether it can manufacture enough of them to meet demand. You can't sell chips you can't make.
AMD's hardware is good and getting better. Its financial trajectory is genuinely impressive. But it's attacking a competitor whose moat isn't built from silicon — it's built from code. Nvidia's CUDA ecosystem is the most powerful software lock-in in the history of computing infrastructure, and AMD's answer to it is a platform that most developers have either never heard of or actively avoid. The data center customers AMD needs to win over are the most conservative buyers in technology — organizations investing $100 million or more in infrastructure that needs to work flawlessly for years. Telling them to switch from the proven market leader to the promising challenger is an extraordinarily hard sell, no matter how good your specs look on a slide deck.
I want to be clear about something: this isn't FUD. This isn't me trying to scare you out of a position. This is the reality of AMD's competitive landscape. The bear case for AMD isn't that the company is bad. It's that Nvidia might be too good — that the combination of hardware leadership, software ecosystem, manufacturing scale, and customer loyalty creates a flywheel that's essentially impossible to disrupt from the outside.
Lisa Su knows all of this. She's lived this exact scenario before, when Intel's advantages seemed equally unassailable. The question is whether the playbook that worked against Intel can work against a competitor whose moat is fundamentally different — and arguably deeper.
A $500 Billion Table With Room for Two
Lisa Su has publicly stated that she believes the AI chip market will grow to $500 billion by 2028. For context, the entire semiconductor industry was worth about $527 billion in 2023. Su is saying that AI chips alone will soon be roughly the size of the entire chip industry today, in just a few years.
If that sounds aggressive, consider the trajectory. AI chip spending was essentially zero in 2020. By 2024, Nvidia alone sold $100 billion worth of them. Hyperscalers — Microsoft, Google, Amazon, Meta — are collectively spending hundreds of billions on AI infrastructure, and none of them are showing signs of slowing down. Mark Zuckerberg has said Meta will spend over $60 billion on AI infrastructure in a single year. Microsoft is building data centers faster than some countries build highways.
In a market this big and growing this fast, AMD doesn't need to beat Nvidia. That's the insight that gets lost in the "AMD vs. Nvidia" framing. AMD doesn't need 50% market share. It doesn't need 30%. If Lisa Su's estimate is correct and the market reaches $500 billion, AMD capturing just 10% would mean $50 billion in annual AI chip revenue — roughly double AMD's entire 2024 revenue from all products combined.
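The share math above can be made concrete. The $500 billion TAM is Su's public projection; the share levels below are hypothetical scenarios, and the 2024 revenue figure is an approximation of AMD's reported total:

```python
TAM_2028 = 500e9             # Lisa Su's projected 2028 AI-chip market (USD)
AMD_2024_REVENUE = 25.8e9    # approx. AMD total 2024 revenue, all products

# Hypothetical market-share outcomes -- none of these are AMD guidance.
for share in (0.05, 0.10, 0.20):
    ai_rev = TAM_2028 * share
    print(f"{share:>4.0%} share -> ${ai_rev / 1e9:.0f}B in AI revenue "
          f"({ai_rev / AMD_2024_REVENUE:.1f}x AMD's entire 2024 revenue)")
```

Even the modest scenarios dwarf today's company: 10% of a $500 billion market is about twice everything AMD sold in 2024, across all product lines. That's the sense in which second place could still be transformative.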
There's also a structural argument for why hyperscalers want AMD to succeed. No CTO at Google or Microsoft is comfortable being entirely dependent on a single supplier for the most critical component of their infrastructure. Nvidia's dominance gives it extraordinary pricing power, and its customers know it. Every major cloud provider has teams working on custom AI chips (Google's TPUs, Amazon's Trainium, Microsoft's Maia), and they're all actively evaluating AMD as an alternative. They're not doing this because they love AMD. They're doing it because single-supplier dependency is a strategic vulnerability, and these are companies that think about strategic vulnerabilities the way normal people think about forgetting their umbrella.
Then there's the historical argument. Lisa Su has done this before. Not metaphorically. Literally this exact thing. She took a company with less than 1% server market share and grew it to 24% against a competitor 90 times larger. She did it by designing a fundamentally better architecture, executing flawlessly on a multi-year roadmap, and waiting for the incumbent to trip over its own complacency.
Intel tripped. Nvidia, under Jensen Huang's leadership, is unlikely to make the same mistake. Jensen is many things, but complacent isn't one of them. He's the CEO who reportedly sleeps four hours a night and has been spotted personally reviewing chip designs at 2 AM. But even the most vigilant competitor can't prevent the market from growing so large that second place becomes an incredibly profitable position.
And that, ultimately, is the AMD thesis. Not that Lisa Su will dethrone Jensen Huang. Not that AMD will become the new Nvidia. But that the AI infrastructure market is about to become so preposterously, absurdly, historically enormous that there's room at the table for more than one chip company — and Lisa Su has spent her entire career proving she can fight her way to a seat.
Look, I'm not going to tell you what to do. That's not my job, and frankly, if a stranger on the internet is the deciding factor in your investment decisions, you've got bigger problems than which semiconductor stock to buy. What I will tell you is this: the story of AMD under Lisa Su is one of the most remarkable corporate turnarounds in modern business history. It is also a story that might be only halfway finished.
Her cousin built a $4 trillion empire on the back of an AI revolution he saw coming a decade before anyone else. She took a $2 billion wreck and turned it into a $250 billion contender. They're family, they're rivals, and they're both building the infrastructure that will power the next era of human technology.
One of them might be right. Both of them might be right. That's the thing about markets big enough to matter — sometimes the winners aren't fighting over the same pie. They're watching the pie grow large enough that the argument becomes irrelevant.
Whether that's optimism or delusion depends on a question nobody can answer with certainty: just how big is the AI market going to get? If it's as big as Lisa Su thinks, $500 billion in chip demand with years of growth still ahead, then AMD at a steep discount to Nvidia might be one of the most compelling asymmetric bets in the market. If AI spending hits a wall, if the hyperscalers pull back, if the models stop improving at the rate that justifies the infrastructure investment — then AMD's discount exists for a reason, and the CUDA moat ensures Nvidia weathers the storm while AMD takes the hit.
Lisa Su is betting on the former. Jensen Huang, interestingly, is also betting on the former. The two smartest people in the semiconductor industry — who happen to be related — agree on the size of the opportunity. They just disagree on who gets to capture it.
Thanksgiving dinner at the Su-Huang family table must be absolutely fascinating.