AVGO $330.48 (-0.7%) · Cap: $1.6T · P/E: 69.4 · 52w: [=======|---] (Mar 7)
Broadcom's Q1 FY2026 earnings call (March 4) was the most information-dense AI infrastructure data point of the quarter. Three separate bull signals fired simultaneously: OpenAI confirmed as Customer 6, the 2027 TAM upgraded to "significantly in excess of $100B," and the gross margin bear case explicitly killed by the CFO who created it.
The call was great. The stock is a factor bet. Both things are true. Here's what actually matters.
The Gigawatt Math
Stacy Rasgon did the work everyone should have been doing. He pushed Tan on the gigawatt math: ≈8-9GW of compute in 2027, ≈$20B per gigawatt of silicon content. Tan's response: "it's not far from the dollars you're talking about. And if you look at it by gigawatt in '27, we are seeing it getting close to 10 gigawatts."
10GW × $20B/GW = $200B. In chips. Not racks. Not systems. Silicon.
The $100B Tan keeps quoting? That's the floor. The prepared remarks said "in excess of $100 billion." The Q&A — when Tan gets loose — upgraded to "significantly in excess." And when Vivek Arya pushed on whether Anthropic's rack revenue was included, Tan confirmed the figure "focuses on chips." Rack revenue is incremental. Total AI revenue potential in 2027 is $100B+ in chips PLUS margin on complete systems for customers like Anthropic (scaling from 1GW to 3GW).
The street is modeling somewhere in the $80-100B range for 2027 AI revenue. If the gigawatt math is right, consensus is capturing barely half of the implied opportunity.
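For the spreadsheet-inclined, here's the sensitivity as a back-of-envelope script. The gigawatt and dollar-per-gigawatt inputs come from the call; the street range is the consensus figure above. Treat all of it as estimate, not disclosure.

```python
# Back-of-envelope: 2027 custom-silicon TAM implied by the call's gigawatt math.
# Inputs are from the transcript; the street range is the consensus estimate
# cited above. Everything here is an estimate, not company guidance.

gw_low, gw_high = 8, 10          # Tan: ~8-9GW, "getting close to 10 gigawatts"
usd_per_gw = 20e9                # Rasgon: ~$20B of silicon content per gigawatt

tam_low = gw_low * usd_per_gw    # $160B
tam_high = gw_high * usd_per_gw  # $200B

street_low, street_high = 80e9, 100e9   # consensus 2027 AI revenue range

print(f"Implied chip TAM: ${tam_low/1e9:.0f}B - ${tam_high/1e9:.0f}B")
print(f"Street covers {street_low/tam_high:.0%} - {street_high/tam_low:.0%} of that range")
```

Run it and the street's $80-100B lands at roughly 40-60% of the $160-200B the gigawatt math implies.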
Six Customers, One Surprise
OpenAI is Customer 6. Volume deployment in 2027. Over 1 gigawatt. Previously "very advanced stage" — now confirmed with a timeline and compute scale.
The customer map as of March 4:
| Customer | Status | 2027 Scale |
|---|---|---|
| Google (TPU) | Shipping, Gen 7+ | Multi-GW (largest) |
| Meta (MTIA) | "Alive and well," shipping now | Multiple GW |
| Anthropic | 1GW shipping in 2026 | >3GW |
| Customers 4 & 5 | Unnamed | Doubling vs. 2026 |
| OpenAI | New, confirmed | >1GW |
Tan went out of his way to rebut analyst reports claiming Meta's MTIA was dead: "Contrary to recent analyst reports, Meta's custom accelerator MTIA road map is alive and well. We're shipping now." Cross-referenced against TrendForce (MTIA-3 on TSMC 3nm, H2 2026 debut) and Meta's own disclosures — Tan is correct. MTIA is alive, complementary to Broadcom's work (different workloads), and growing.
The Inference Surprise
The most interesting signal wasn't the biggest number. It was Tan's tone when discussing inference: "what is very, very interesting and surprising too to us is very much for inference... inference is driving a substantial amount of compute capacity."
Surprising. To Broadcom. The company with the deepest view into hyperscaler silicon roadmaps.
Mature XPU customers are now developing two chips per year — one optimized for training, one for inference. Simultaneous design cycles. This doubles silicon demand per customer per year and deepens lock-in (each specialized chip requires Broadcom co-design from scratch).
Cross-referenced against other Q4/Q1 transcripts, the inference acceleration isn't Broadcom spin. Jensen Huang called it an "inflection point." IREN's COO confirmed older GPUs "shift more to inference side over time." Digital Realty's CEO cited "accelerating inference demand" as a durable driver. Five independent sources across chips, datacenters, networking, and power — all confirming the same pattern.
Inference is distributed. Training is centralized. The infrastructure implications are different, and the market hasn't fully priced the inference architecture yet.
The Margin Reversal
In December (Q4 FY2025 call), CFO Kirsten Spears acknowledged gross margins would "come down" in H2 as rack shipments scaled. Stacy Rasgon asked if margins "could start with a 6." Spears didn't push back.
Ninety days later, Tan called an analyst "hallucinating" for asking the same question.
This call completed the trilogy. Spears herself: "I think on further study relative to even comments that I did make last quarter, the impact relative to our overall mix is actually not going to be substantial at all. So I wouldn't worry about it."
Three data points: acknowledged → dismissed → explicitly reversed. AI semiconductor gross margin: ≈68% (first explicit disclosure). Blended with 93% software margins, that's a sustainable ≈77%. The margin bear case is dead. Killed by its own creator.
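Broadcom didn't disclose the revenue mix behind that blend, but the three margins pin it down. A quick sketch, solving the blend equation for the implied semiconductor weight:

```python
# Blended gross margin arithmetic from the disclosed figures: ~68% AI semis,
# ~93% software, ~77% blended. The revenue mix is NOT disclosed; solve the
# blend equation for the implied semiconductor weight w:
#   w * 0.68 + (1 - w) * 0.93 = 0.77

semi_gm, sw_gm, blended_gm = 0.68, 0.93, 0.77

w = (sw_gm - blended_gm) / (sw_gm - semi_gm)  # implied semiconductor revenue weight
print(f"Implied semi revenue weight: {w:.0%}")  # ~64% semis / ~36% software

# Sensitivity: even if AI semi margins slip to 65% at the same mix, the blend
# only drops to ~75% -- which is why the "starts with a 6" bear case dies.
print(f"Blend at 65% semi GM: {w*0.65 + (1-w)*sw_gm:.1%}")
```

At that implied mix, the blended margin barely moves even when you stress the semi line. That's the arithmetic behind "I wouldn't worry about it."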
Supply Chain as Moat
Broadcom locked T-glass (CoWoS substrate component), substrates, and wafer capacity through 2028. Charlie Kawwas: "We're probably the first one to secure [capacity] up to '28 or beyond."
TSMC confirmed the bottleneck in January: supply gap won't ease before 2028-2029. Equipment companies (ONTO, KLAC) confirm sustained advanced packaging investment. But here's the nuance — Marvell's COO also claims "secured supply all growth... year, next year and beyond." The lock-up advantage is shared between incumbents, not exclusive to Broadcom.
The real barrier is for new entrants. If you wanted to start a custom ASIC business today, you couldn't get packaging capacity at scale until ≈2028. The moat is time, not technology alone.
The Networking Kicker
AI networking was 33% of AI revenue in Q1 and guided toward 40% in Q2. Growing faster than XPUs. And here's the kicker: Broadcom networking is capturing GPU customers too, not just their own XPU customers. Tomahawk 6 at 100 Tbps is "the only one out there." DSP at 1.6 terabit — "the only player."
Charlie Kawwas dropped the more aggressive claim: Ethernet is winning scale-up networking, not just scale-out. Scale-up was InfiniBand's turf — NVIDIA's moat. Corroborated by Arista ("production scale all Ethernet-based"), AMD (Helios Racks with Ethernet switches), and Marvell (acquiring Celestial AI and XConn specifically for "AI scale-up networking").
NVIDIA's response? Jensen declared NVIDIA "probably the largest Ethernet networking company in the world today" via Spectrum-X. When the incumbent starts claiming to be the biggest player in the disruptor's technology, the disruption is real.
Broadcom wins from AI regardless of whether any given customer uses XPUs or GPUs. That's the networking hedge built into the stock.
What Wasn't Asked
Zero analyst questions on: ByteDance (Customers 4 and 5 unnamed), Apple WiFi/BT insourcing (≈20% of revenue), VMware churn metrics, $7.4B in AR factoring, $2.18B/quarter in stock-based compensation. Same pattern as last quarter. When every analyst on the call asks about AI and nobody asks about the boring stuff, the boring stuff is where surprises come from.
ByteDance is the Achilles heel. If Customers 4 or 5 are Chinese entities subject to export controls, any escalation impairs a material chunk of the 2027 revenue target. Management didn't volunteer it. Analysts didn't push. Risk priced at zero.
C-suite sold $146M in stock over three weeks (Dec 17 - Jan 6) ahead of this call. Tan alone: $101M. Likely routine stock comp diversification at this market cap. But when the CEO sells $101M and then tells you the TAM is "significantly in excess of $100B" two months later, note the asymmetry.
The Factor Problem
Here's where I lose the consensus crowd.
AVGO's idiosyncratic variance is 36.5%. That means 63.5% of return variance comes from market beta and semiconductor sector exposure. AVGO's 1-year return (+71%) is within 1.4 points of SMH (+69.6%). The stock IS the semiconductor factor. You can replicate it with an ETF.
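For the curious: a number like 36.5% idiosyncratic variance typically comes from regressing daily stock returns on a factor proxy and taking one minus R-squared. A minimal sketch, with placeholder return series standing in for actual AVGO/SMH price history:

```python
# Sketch of the factor decomposition: regress the stock's daily returns on a
# factor proxy (SMH here) and report the variance share the factor does NOT
# explain (1 - R^2). The arrays below are synthetic placeholders -- feed it
# real return series to reproduce the cited figure.

import numpy as np

def idiosyncratic_share(stock_returns: np.ndarray, factor_returns: np.ndarray) -> float:
    """Share of return variance not explained by the factor (1 - R^2)."""
    beta, alpha = np.polyfit(factor_returns, stock_returns, 1)
    residuals = stock_returns - (alpha + beta * factor_returns)
    return residuals.var() / stock_returns.var()

# Placeholder inputs: substitute ~252 trading days of actual AVGO/SMH returns.
rng = np.random.default_rng(0)
smh = rng.normal(0.001, 0.015, 252)
avgo = 1.2 * smh + rng.normal(0, 0.012, 252)   # high factor loading by construction

print(f"Idiosyncratic variance share: {idiosyncratic_share(avgo, smh):.1%}")
```

When that number is low, stock picking is mostly buying the factor with extra steps.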
All the evidence from this call — OpenAI confirmation, inference demand, supply chain lock, $200B ceiling — maps primarily to sector-level factors, not stock-specific alpha. The cross-ticker corroboration proved it: every signal was confirmed by 5+ independent sources across the AI infrastructure chain. When every company in the supply chain tells the same story, it's a factor, not an edge.
49 analysts cover this stock. 96% rate it Buy. Mean price target $467 (+42%). The bull case IS the consensus case. There is no informational edge in agreeing with 47 out of 49 analysts.
AVGO is a great company on the right side of the biggest infrastructure buildout in history. It's also a $1.57 trillion factor bet that you can buy via SMH with less concentration risk.
Where the Signal Actually Points
The most valuable output of this earnings call wasn't about Broadcom at all. It was about what Broadcom's customers are telling them about inference demand — and what that means for the infrastructure layer below the chips.
Inference demand accelerating + inference favors distributed architecture + supply chain locked through 2028 = the AI infrastructure buildout lasts longer and runs deeper than the market is pricing. That's a factor-level insight. It's consensus at the chip level (AVGO, NVDA) but NOT consensus at the infrastructure level — particularly for companies still being priced as something they're ceasing to be.
The gigawatt math doesn't just tell you about Broadcom's revenue. It tells you about power demand, datacenter demand, and the multi-year duration of the buildout. $640-670B in combined hyperscaler capex for 2026, growing in 2027. That capital has to go somewhere physical. The companies providing that physical layer — connected power, built datacenter shells, cooling infrastructure — are earlier in their repricing cycle than the silicon providers.
Broadcom already got its rerating. The infrastructure layer hasn't.