Here's a sentence I never expected to type: the hottest AI company on the planet is headquartered 700 miles northeast of Silicon Valley, in Boise, freaking Idaho, and the reason it exists at all involves frozen potato products and a dentist with a really spacious basement.
The company is Micron Technology. Its stock has nearly quadrupled. Its chips are so in demand that the entire 2026 run of its most advanced memory sold out before January 1st. And yet, depending on who you ask, it's either the single most undervalued AI stock on the market or a ticking time bomb about to repeat the brutal boom-and-bust cycle it has been living through since the Reagan administration.
Let me walk you through both sides. Buckle up — this one gets weird.
The Company That Shouldn't Exist (A Potato-Funded Origin Story)
In 1978, Micron's founders set up shop in the basement of a dental office. Not a garage, like every Silicon Valley origin myth — a dental office. In Idaho. Their first gig was designing a 64K memory chip for a company called Mostek. When that deal spectacularly collapsed, they made a decision that was, by any rational analysis, completely unhinged: instead of just designing chips, they decided to manufacture them.
To appreciate how insane this was, you need to understand the memory chip business. Memory chips are commodities. A gigabyte from Samsung works exactly like a gigabyte from anyone else. You win on price. You win on scale. You win by having the kind of capital reserves that a startup in the American Mountain West does not, historically, tend to have.
By the early '80s, Japanese conglomerates had essentially colonized the entire memory industry. They had government subsidies, near-infinite capital from their keiretsu structures, and the manufacturing discipline to pump out chips at a pace nobody could match. For a handful of engineers above a dentist's chair in Boise, the odds were somewhere between "laughable" and "statistically indistinguishable from zero."
J.R. Simplot — an Idaho billionaire who built his empire selling frozen french fries to McDonald's — decided to back the local boys. This is the kind of sentence that makes you realize reality has a better imagination than any screenwriter. A guy who figured out how to freeze potatoes at industrial scale essentially bankrolled America's last surviving memory chip company. And it worked.
By 1983, Micron had pulled off something remarkable: their chips were roughly half the size of anything coming out of Japan. What they lacked in capital and scale, they compensated for with engineering obsession. Smaller chips meant more chips per silicon wafer, which meant lower cost per unit. They were out-engineering empires from a dental basement funded by french fry money.
Then 1985 happened, and the Japanese manufacturers decided to end the game. They flooded the global market with below-cost memory chips in a move that gutted America's memory-chip industry. Intel abandoned memory. National Semiconductor quit. Texas Instruments pulled back. One by one, every American player either folded or fled.
The victors didn't get to enjoy it for long. Korean rivals turned the same commodity economics back on them through the '90s and 2000s, and the Japanese giants — NEC, Hitachi, Toshiba, Fujitsu — fell like dominoes, exiting or merging their memory businesses one by one. Japan's final holdout, Elpida (itself the consolidated remains of NEC and Hitachi's DRAM operations), went bankrupt in 2012. And who bought the remains? Micron. The dental-basement, potato-funded startup from Idaho bought the corpse of Japan's last memory chip company.
Today, the global memory industry is an oligopoly. Three companies control virtually the entire market: Samsung and SK Hynix in South Korea, and Micron in Idaho. That's it. Micron is America's only seat at the table in one of the most strategically vital industries in computing. And right now, AI is turning that seat into a throne.
Why AI Has a Memory Problem (And Why That's Making Micron Stupid Rich)
Ever been chatting with an AI and it suddenly forgets everything you've been talking about? Congratulations, you've just experienced a memory bottleneck — the exact kind of problem that chipmakers like Nvidia need Micron to solve if AI is going to become anything more than a very expensive autocomplete engine.
Here's the uncomfortable truth that the "just buy Nvidia" crowd doesn't love to discuss: compute is no longer AI's biggest bottleneck. Memory is. The size of an AI's context window — how much it can "remember" during a conversation — is physically limited by how much high-speed memory is strapped to the GPUs running the model. And there simply isn't enough of it.
To understand why this matters for Micron's business, you need to understand the three types of memory that computers use, and why one of them has become the most sought-after product in the technology industry.
NAND (The Library): Long-term storage. Data lives here permanently, even when the power's off. This is where AI training datasets — terabytes of text, images, code — sit on flash drives. About 20% of Micron's revenue.
DRAM (The Desk): What your computer is actively working on right now. When you run a model, data moves from NAND into DRAM so the processor can grab it quickly. This is Micron's bread and butter — roughly 80% of revenue.
HBM (The Desk on Steroids, Welded to Your Brain): The most in-demand product in technology. Instead of laying memory flat on a circuit board, HBM stacks chips vertically like a skyscraper and places them right next to the GPU. This creates a massive data pipeline that can feed AI processors at mind-bending speeds. This is where things get crazy.
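To make that memory bottleneck concrete, here's a rough back-of-the-envelope sketch in Python. Every model spec in it (parameter count, layer count, context length) is an illustrative assumption rather than any real product's numbers; the point is the order of magnitude, not the decimals.

```python
# Rough sizing sketch: why large models strain GPU memory.
# All model parameters below are illustrative assumptions, not real specs.

BYTES_PER_PARAM = 2          # FP16/BF16 weights: 2 bytes per parameter
GB = 1024**3

def weights_gb(params_billions: float) -> float:
    """Memory needed just to hold the model weights."""
    return params_billions * 1e9 * BYTES_PER_PARAM / GB

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int, context_tokens: int) -> float:
    """KV cache for one conversation: a K and a V vector per layer, per token."""
    bytes_per_token = 2 * layers * kv_heads * head_dim * BYTES_PER_PARAM
    return bytes_per_token * context_tokens / GB

# Hypothetical 70B-parameter model with a 128K-token context window
w = weights_gb(70)                                                   # ~130 GB of weights alone
kv = kv_cache_gb(layers=80, kv_heads=8, head_dim=128, context_tokens=128_000)  # ~39 GB

print(f"Weights: {w:.0f} GB, KV cache for one 128K-token chat: {kv:.0f} GB")
print(f"Total: {w + kv:.0f} GB vs. roughly 80 GB of HBM on an H100-class GPU")
```

One decent-sized model holding one long conversation already overflows a single flagship GPU, which is why the context window is a hardware constraint rather than a software choice, and why the HBM numbers below keep climbing.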
The numbers on HBM demand tell a story that borders on the absurd. Nvidia's H100 chip — the workhorse that powered the first wave of AI — used around 80 GB of HBM. The newer Blackwell architecture? 192 GB. The upcoming Vera Rubin platform pushes that to 288 GB. And the Rubin Ultra, expected in 2027, could pack a full terabyte of HBM per chip.
| Nvidia GPU Generation | HBM per Chip | Growth vs. H100 |
|---|---|---|
| H100 | 80 GB | Baseline |
| Blackwell | 192 GB | 2.4× |
| Vera Rubin | 288 GB | 3.6× |
| Rubin Ultra (est. 2027) | ~1,000 GB | 12.5× |
That's roughly a 12× increase in memory per GPU in just a few years. This is why Micron sold out of its entire 2026 HBM production capacity before this year even started. Let that sink in. A company with a $400 billion market cap has customers pre-buying its output years in advance like it's a limited-edition sneaker drop.
There's also a vicious second-order effect that nobody outside the semiconductor industry is talking about. Manufacturing HBM is absurdly intensive. You're stacking chips vertically and drilling thousands of microscopic holes (through-silicon vias) down the stack to connect the layers. Yields are lower, and it takes roughly 3× the wafer capacity to make one HBM chip versus a standard memory chip. Every HBM chip produced is roughly three regular memory chips that don't get produced.
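To see how that trade-off squeezes supply, here's a minimal sketch. The 3× wafer-intensity figure is the rough estimate above; the total wafer count and chips-per-wafer yield are invented round numbers purely for illustration.

```python
# Illustrative wafer math: shifting capacity to HBM shrinks conventional DRAM output.
# Wafer counts and chips-per-wafer are invented round numbers; the ~3x intensity is from the text.

TOTAL_WAFERS = 100_000                           # hypothetical monthly DRAM wafer starts
CHIPS_PER_WAFER_STD = 900                        # hypothetical yield for standard DRAM
CHIPS_PER_WAFER_HBM = CHIPS_PER_WAFER_STD / 3    # ~3x the wafer capacity per HBM chip

def output_mix(hbm_share_of_wafers: float) -> tuple[float, float]:
    """Return (HBM chips, standard DRAM chips) produced for a given wafer allocation."""
    hbm_wafers = TOTAL_WAFERS * hbm_share_of_wafers
    std_wafers = TOTAL_WAFERS - hbm_wafers
    return hbm_wafers * CHIPS_PER_WAFER_HBM, std_wafers * CHIPS_PER_WAFER_STD

for share in (0.0, 0.2, 0.4):
    hbm, std = output_mix(share)
    print(f"{share:.0%} of wafers to HBM -> HBM: {hbm:,.0f} chips, standard: {std:,.0f} chips")
```

In this toy model, shifting 20% of wafer starts to HBM gains about 6 million HBM chips at the cost of 18 million standard chips that never get made.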
This has created an industry-wide shortage that's rippling through the entire economy. Consumer RAM prices surged nearly 200% in the past year. PCs got more expensive. SanDisk became the best-performing stock in the S&P 500 because they jacked up their NAND prices in response to the Big Three prioritizing HBM instead. The memory market has gone absolutely haywire.
The Numbers Are Pornographic
There's really no more elegant way to put it. Micron's recent financials look like someone photoshopped an earnings report for a meme.
The company nearly tripled its bottom-line profit in twelve months. Operating income grew 182%, significantly faster than revenue — meaning they're not just selling more, they're making more on every dollar. Free cash flow hit a new all-time quarterly record.
But here's the number that made analysts choke on their Bloomberg terminals: Micron's revenue guidance for next quarter is $18.7 billion. That's 37% sequential growth — in a single quarter — for a $400 billion company. For context, that kind of quarter-over-quarter acceleration is the sort of thing you see from scrappy SaaS startups, not mature industrial semiconductor manufacturers.
HBM is the engine behind these numbers. HBM chips carry margins above 60%, compared to roughly 50% for standard DRAM and around 40% for NAND. Every percentage point of revenue that shifts from commodity memory to HBM is like Micron replacing a Honda Civic with a Porsche on the same assembly line.
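As a quick illustration of what that mix shift does to the blended margin, here's a small sketch. The product margins come from the figures above; the two revenue mixes are hypothetical, not Micron's actual splits.

```python
# Blended-margin sketch: how shifting revenue toward HBM lifts overall profitability.
# Margin assumptions come from the figures above; the revenue mixes are hypothetical.

MARGINS = {"HBM": 0.60, "DRAM": 0.50, "NAND": 0.40}

def blended_margin(mix: dict[str, float]) -> float:
    """Weighted-average margin for a given revenue mix (shares must sum to 1)."""
    return sum(MARGINS[product] * share for product, share in mix.items())

old_mix = {"HBM": 0.10, "DRAM": 0.70, "NAND": 0.20}   # hypothetical pre-boom mix
new_mix = {"HBM": 0.35, "DRAM": 0.50, "NAND": 0.15}   # hypothetical HBM-heavy mix

print(f"Old mix: {blended_margin(old_mix):.0%}")   # ~49%
print(f"New mix: {blended_margin(new_mix):.0%}")   # ~52%
```

Three points of blended margin on tens of billions of dollars of revenue is billions of extra profit without selling a single additional chip.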
Memory was historically bought on the spot market — a pure commodity play where customers bought what they needed when they needed it, and prices yo-yoed like a caffeinated toddler. Now customers are panic-buying years in advance, locking in supply through long-term agreements. If this structural change holds, it transforms Micron's revenue from "wildly unpredictable" to "recurring and contractual." Bulls argue this alone justifies re-rating the stock from a commodity cyclical to critical AI infrastructure.
To capture this demand, Micron is pouring $20 billion into capital expenditure in 2026, mostly for domestic fabs supported by U.S. government subsidies under the CHIPS Act. This includes a jaw-dropping $100 billion mega-fab in New York. They're not just riding the wave — they're building an aircraft carrier.
The Chart That Keeps Investors Up at Night
So if Micron is printing money at a pace that would make the Federal Reserve blush, why is its stock trading at a forward PE of 14? That's about half of Nvidia's, a third of Broadcom's, and roughly in line with what you'd pay for a regional bank or a mid-cap energy company. Why is the market treating a hypergrowth AI infrastructure play like it's a gas station?
Because of The Chart.
If you plot Micron's earnings history, you'll see a pattern that has repeated with the grim predictability of a horror movie franchise: massive profit spikes followed by devastating troughs. Boom. Bust. Boom. Bust. Every single cycle, without exception, for decades.
The memory market runs on supply and demand with all the subtlety of a see-saw. When supply is tight and prices are high, the Big Three print money and invest in capacity. When that capacity comes online, supply floods the market, prices collapse, and billions in profit evaporate overnight. It happened after the smartphone boom in 2016. It happened after COVID in 2023. And investors are dead certain it'll happen again.
In 2018, Micron's net income peaked north of $14 billion. The stock's forward PE collapsed to four. Not fourteen — four. The market was so convinced the boom was about to end that it essentially valued Micron's future profits at pocket-change multiples. And the market was right. The crash came. It always comes.
This is why the PE stays low no matter how spectacular the current numbers look. Investors aren't pricing in what Micron is earning today. They're pricing in the inevitable moment when Samsung, SK Hynix, and Micron all finish their capacity expansions at the same time and the market tips from shortage to glut. It's not a question of if — it's a question of when.
And Samsung is the wild card that keeps Micron bulls slightly nauseous. Samsung has been the underdog in HBM — their HBM3E chips reportedly failed to meet Nvidia's quality standards, effectively locking them out of the biggest customer. But they're not exactly sitting around feeling sorry for themselves. Samsung is developing HBM4 on a more advanced process node than either Micron or SK Hynix, and they've explicitly stated their strategy is to make HBM more affordable than anyone else.
If Samsung perfects their HBM4 process in late 2026 and decides to go full price-war mode, margins across the industry could compress fast. We've seen this movie before. Samsung wrote the screenplay.
Adding to the anxiety: several Micron directors and executives sold shares during the recent run-up. Insider selling during all-time highs isn't unusual, but when the people who've weathered these cycles before are quietly cashing out, it raises eyebrows. That said, at least one insider went the opposite direction — a former TSMC chairman bought over $7.8 million in shares at all-time highs, which is either a phenomenal conviction play or a very expensive way to express optimism.
The Trillion-Dollar Question: Commodity or Infrastructure?
Everything about Micron's investment case comes down to a single binary question: Is memory a commodity that will crash like it always does, or has AI transformed it into critical infrastructure that deserves a permanent re-rating?
Sir John Templeton famously warned that the four most dangerous words in investing are "this time it's different." But semiconductor expert Gavin Baker of Atreides Management makes a compelling case for why, maybe, this time actually kind of is.
Baker put Micron on the radar in 2024 when he predicted on the All-In Podcast that the stock would explode in 2025. It did exactly that. His thesis is that we're entering the "agent era" of AI, where models don't just retrieve information but actually go out and do things — book flights, manage portfolios, coordinate workflows. Products like Anthropic's Claude Cowork are early examples of this shift. If agents need to maintain persistent, complex memory states for millions of individual users simultaneously, the memory requirements become almost incomprehensibly large.
But not everyone agrees. Chamath Palihapitiya, an early investor in Groq (which Nvidia acquired for $20 billion), argues that inference workloads — the "thinking" that happens after training — don't require the massive memory pools that training does. Groq's chips use SRAM, an ultra-fast on-chip memory type that's faster than HBM but with far less capacity. These chips are still niche, but Nvidia's acquisition was a significant endorsement of the architecture.
A paper co-authored by two Google researchers — including Turing Award winner David Patterson — directly rebuts the SRAM bear thesis on Micron. Their argument: inference is fundamentally a memory bandwidth problem, not a raw compute problem. While SRAM-only designs might have been feasible a decade ago, today's frontier models are simply too large to fit into on-chip memory. The data has to come from somewhere, and that somewhere is HBM.
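To see why bandwidth rather than raw compute sets the ceiling, here's a rough roofline-style sketch. The model size and memory bandwidth figures are approximate assumptions for illustration, not numbers quoted from the paper.

```python
# Roofline-style sketch: single-stream token generation is bounded by memory bandwidth,
# because every new token requires streaming the full model weights out of HBM.
# Model size and bandwidth are approximate assumptions for illustration.

TB = 1e12

def max_tokens_per_sec(model_bytes: float, bandwidth_bytes_per_sec: float) -> float:
    """Upper bound on single-stream decode speed if weight reads were the only cost."""
    return bandwidth_bytes_per_sec / model_bytes

model_bytes = 70e9 * 2        # hypothetical 70B-parameter model in FP16 (~140 GB)
hbm_bandwidth = 3.35 * TB     # roughly H100-class HBM bandwidth (~3.35 TB/s)

print(f"Bandwidth-bound ceiling: ~{max_tokens_per_sec(model_bytes, hbm_bandwidth):.0f} tokens/sec")
```

Adding more arithmetic units doesn't move that ceiling; only more and faster memory does.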
The Math: Three Scenarios for Where This Goes
Analysts project Micron will earn approximately $42 per share in 2027 — up from roughly $8 this year. That's a fivefold increase in earnings in two years. Wall Street also thinks 2027 is when the memory super-cycle peaks, with flat revenue growth and EPS declining to $39 by 2028.
Nobody is seriously questioning whether Micron will continue to deliver massive growth over the next 12 months. The debate is entirely about what multiple to pay for it. And in Micron's case, that multiple tells you which future you believe in.
| Scenario | Forward PE | Implied Price (on ~$39 2028 EPS) | Thesis |
|---|---|---|---|
| Bear | 5× | ~$195 | Classic cycle repeats; glut arrives by late 2027 |
| Base | 10× | ~$390 | Cycle extends; HBM supply stays tight through 2028 |
| Bull | 15× | ~$585 | Permanent re-rating as AI infrastructure; agents drive structural demand |
The spread between the bear and bull case is enormous — a roughly 3× difference. That tells you everything about the level of disagreement baked into this stock. You're not just betting on a company; you're betting on a theory about the fundamental architecture of the AI future.
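For the spreadsheet-inclined, the table above is one multiplication. Here it is as a sketch, using the roughly $39 post-peak 2028 EPS estimate those implied prices are built on.

```python
# Scenario sketch: implied share price is just (forward PE) x (projected EPS).
# EPS and multiples are the figures quoted above; nothing here is a forecast of its own.

PROJECTED_EPS_2028 = 39.0   # Wall Street's ~$39 post-peak EPS estimate cited above

SCENARIOS = {
    "Bear (cycle repeats)":        5,
    "Base (cycle extends)":       10,
    "Bull (permanent re-rating)": 15,
}

for name, pe in SCENARIOS.items():
    print(f"{name:>30}: {pe:>2}x  ->  ~${pe * PROJECTED_EPS_2028:,.0f}")
```

Swap in a different EPS assumption and the whole range moves with it.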
The Bottom Line (Or: What the Potato Baron Would Think)
Micron is one of the most fascinating companies in the world right now. It's a nearly 50-year-old survivor that outlasted the Japanese conglomerates, went toe-to-toe with the Korean chaebols, and lived through multiple near-death experiences — and is now sitting at the exact center of the most consequential technology shift since the internet. Its products are genuinely irreplaceable. Its financials are genuinely spectacular. Its forward valuation is genuinely cheap.
But cheapness isn't the same thing as undervaluation. If the memory cycle peaks in 2027 and Samsung floods the HBM market with aggressively priced chips, Micron's earnings could fall off a cliff, and that "cheap" PE of 14 could become a very expensive PE of 40 on collapsed earnings. This has happened before. Multiple times. To this exact company.
The bull case requires you to believe that something has structurally changed — that AI's appetite for memory is so insatiable, so exponentially growing, that the traditional boom-bust cycle has been broken. That's a big bet. Maybe it's the right one. But Sir John Templeton is whispering from the grave, and he's reminding you that the last fifty people who said "this time it's different" are currently working as cautionary tales in finance textbooks.
Either way, somewhere in Idaho, the ghost of J.R. Simplot is looking down at the semiconductor company he funded with frozen potato money and thinking: not bad for a bet on the local boys.