I still remember waiting for our family computer to wake up. Mid-1990s, the glowing Windows logo would materialize from a black screen while the hard drive ground away beneath it. My most vivid memory came when I convinced my parents to buy a Star Wars PC game. The box said it needed 16MB of RAM. We had 8MB. When I clicked the icon, the screen flickered, the speakers made a sad, choked sound, and the computer crashed. That was my first lesson that a computer was only as capable as the components powering it behind the scenes.
Born in 1988, I straddled two worlds: the analog childhood of the '80s and the digital adolescence of the 2000s. The Dell laptop I carried to college in 2006 made that 1996 machine look like a toy. Now, the phone in my pocket holds roughly 500 times more memory than that Star Wars game required.
From the Moon to our Pockets
When Neil Armstrong stepped onto the moon in 1969, the Apollo Guidance Computer weighed 70 pounds and contained just 72 kilobytes of total memory. Your smartphone today would make that computer look like a calculator. Memory milestones came fast in between:
- 1969: The Apollo Guidance Computer carried 72KB of total memory, enough to land on the moon
- 1981: The first IBM PC shipped with 16KB, less than a quarter of the Apollo computer's memory
- 1990: A high-end PC had 4MB, a 250x jump in just nine years
- 1995: 8MB became standard, doubling again in five years
- 2001: 256MB was the new benchmark, a 32x leap in just six years
- 2006: 1GB became standard, quadrupling again in five years
- 2026: A base iPhone ships with 8GB, roughly 111,000x more than the computer that landed us on the moon
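For readers who like to check the math, the multiples in the timeline above fall out of a few lines of arithmetic. This is a quick sketch assuming decimal units (1KB = 1,000 bytes, 1GB = 1 billion bytes); binary units shift the figures slightly but not the story:

```python
# Sanity-check of the memory-growth multiples cited in the timeline.
# Decimal units assumed: 1 KB = 1e3 bytes, 1 MB = 1e6, 1 GB = 1e9.
KB, MB, GB = 1e3, 1e6, 1e9

apollo_1969 = 72 * KB    # Apollo Guidance Computer, total memory
ibm_pc_1981 = 16 * KB    # first IBM PC
pc_1990 = 4 * MB         # high-end PC
iphone_2026 = 8 * GB     # base iPhone

print(f"1981 vs Apollo: {ibm_pc_1981 / apollo_1969:.2f}x")   # ~0.22, under a quarter
print(f"1990 vs 1981:   {pc_1990 / ibm_pc_1981:,.0f}x")      # 250x
print(f"2026 vs Apollo: {iphone_2026 / apollo_1969:,.0f}x")  # ~111,111x
```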
The Engine Behind AI
“High Bandwidth Memory,” or HBM, is the fuel powering today's AI boom. To put it in perspective, that Star Wars game that crashed our family computer needed 16MB to run. The HBM in a single modern AI accelerator can move that much data a couple of hundred thousand times every second, several terabytes per second. Traditional memory was never designed for anything close to that.
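That comparison is easy to reproduce. As a rough sketch, assume about 3.35 terabytes per second of aggregate HBM bandwidth for a current-generation AI accelerator; the exact figure varies by device and generation, so treat the number below as an assumption, not a spec:

```python
# Back-of-envelope check on the HBM comparison above.
# HBM_BANDWIDTH is an assumed aggregate figure for a modern AI
# accelerator; swap in your own device's spec sheet number.
GAME_FOOTPRINT = 16e6     # bytes: the 16MB the game needed
HBM_BANDWIDTH = 3.35e12   # bytes per second (assumed)

transfers_per_second = HBM_BANDWIDTH / GAME_FOOTPRINT
print(f"{transfers_per_second:,.0f} copies of the game's memory per second")
```

At that assumed bandwidth, the chip shuttles the game's entire footprint more than 200,000 times per second.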
HBM is also far harder to make. Standard memory chips are manufactured flat, one layer at a time. HBM stacks multiple chips vertically and wires them together, like building a skyscraper instead of a ranch house. That process takes roughly four times more factory capacity per chip, and as AI demand surges, the shelves empty fast. That scarcity has made memory a strategic resource, with governments and tech giants competing for it the way the world once competed for oil.
The Market Story
Memory companies have been among the top performers in the market over the past year. Here's a look at the 2025 returns of the biggest players in memory:

We hold Micron in our Aggressive Growth strategy, which is why it's the name we're focusing on here. We identified it early in this cycle and initiated a position under $90 per share in late Q3 2024. The thesis was straightforward: AI infrastructure would be the defining capital expenditure story of 2025 to 2027, and memory was the most underappreciated bottleneck in the supply chain. While the market was obsessing over chip makers, a quieter "picks and shovels" opportunity was hiding in plain sight.
Micron was ramping up its newest memory faster than expected and had locked in supply agreements with Nvidia. In its most recently reported quarter, Micron posted $13.6 billion in revenue, up 57% year-over-year, with gross margins jumping from 22% to over 50%. Its high-bandwidth memory is completely sold out for all of 2026.
MU has since run past $450 per share, five times our entry near $90. Put simply, every $10,000 invested has grown to roughly $50,000.
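For transparency, the return math is simple multiplication. A quick sketch, using the approximate prices cited above and ignoring dividends, taxes, and fees:

```python
# Illustrative position math, using approximate prices from the letter.
entry_price = 90.0     # late Q3 2024 entry (approximate)
recent_price = 450.0   # recent price (approximate)

multiple = recent_price / entry_price   # 5.0x
gain_pct = (multiple - 1) * 100         # +400%
per_10k = 10_000 * multiple             # value of $10,000 at entry

print(f"{multiple:.1f}x, +{gain_pct:.0f}%, ${per_10k:,.0f} per $10,000")
```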
After an explosive move in a relatively short timeframe, disciplined portfolio management means asking hard questions, even about your best performers. At $90, Micron had a wonderful risk/reward profile. North of $450, the market is already expecting everything to go right. A single earnings miss or a change in outlook could erase months of gains quickly. We're actively evaluating the position and may trim or even exit it this year, not because the memory story is over, but because managing risk on a 400% winner is just as important as finding one.
TurboQuant and Jevons’ Paradox
On March 25, 2026, Google released a paper on "TurboQuant," a compression method that reduces AI memory usage by 6x while making operations 8x faster. Think of it as discovering you can run that Star Wars game on less than 3MB instead of 16MB, without losing a single frame of gameplay. Memory stocks dropped sharply on the news, with Cloudflare's CEO calling it "Google's DeepSeek moment." It's worth noting that TurboQuant is still a lab breakthrough, not yet deployed at scale; but as we know, the market is a forward-looking pricing mechanism, and the future gets priced in today.
On the surface, the math seems simple: if AI needs 6x less memory to run, demand for memory chips should fall. History, however, tells a very different story.
In 1865, economist William Stanley Jevons noticed something counterintuitive: as steam engines became more fuel-efficient, Britain didn't use less coal; it used far more, because efficiency made coal-powered machinery affordable for everyone. The same pattern has played out with memory. In 1996, 8MB of RAM cost roughly $100 (about $200 in today’s dollars). That same 8MB today comes bundled by the gigabyte in devices costing less than a family dinner. Yet we use more memory than ever, because we discovered entirely new things to do with it: streaming video, video calls, and smartphones that recognize your face. Now AI can hold a conversation, write code, and help diagnose diseases. The pattern is the same: lower the cost, expand the possibilities.
TurboQuant makes AI cheaper to run, which means AI in every industry, every device, and every business. More AI means more data centers, and more data centers mean more memory. We've seen this movie before, and it doesn't end with less demand. Citi Research put it simply: "Historically, cheaper technology has mostly increased the demand for more technology. We see AI as no different."
Bottom Line
The Apollo Guidance Computer navigated humans 239,000 miles through space on 72 kilobytes of memory, less than what that Star Wars game needed just to launch. Memory has come a long way, and so has the opportunity it represents. We identified Micron early, bought it near $90, and watched it grow to $450+, roughly a 5x return. Now our job shifts from conviction to discipline: knowing when to protect what we've built is just as important as building it in the first place.
Disclosures
This material has been prepared for informational purposes only and should not be construed as a solicitation to effect, or attempt to effect, either transactions in securities or the rendering of personalized investment advice. This material is not intended to provide, and should not be relied on for tax, legal, investment, accounting, or other financial advice. You should consult your own tax, legal, financial, and accounting advisors before engaging in any transaction. Asset allocation and diversification do not guarantee a profit or protect against a loss. All references to potential future developments or outcomes are strictly the views and opinions of Richard W. Paul & Associates and in no way promise, guarantee, or seek to predict with any certainty what may or may not occur in various economies and investment markets. Past performance is not necessarily indicative of future performance.