AMD — Advanced Micro Devices

TL;DR

AMD is the credible #2 in both datacenter AI accelerators and server CPUs. The near-term thesis is a two-front story: EPYC server CPUs are taking share from Intel on the back of RL/vibe-coding-driven CPU demand, while the AI GPU road runs through the MI355X (competitive on FP8 inference today) and the MI450X/Helios rack-scale system (H2 2026 samples, Q2 2027 production). The key risk: AMD software remains 6+ months behind Nvidia on the composability of FP4 + disaggregated inference + wide expert parallelism — the combination every top-tier lab already runs.1

Business

AMD designs x86 server CPUs (EPYC: Turin today, Venice coming 2026) and Instinct AI accelerators (MI300X → MI325X → MI355X → MI450X/Helios roadmap), fabbed at TSMC (N3 for the MI350X; the MI400 XCD tile on N2). Server CPUs pair with Instinct GPUs in compute trays (1 Venice CPU per 4 MI455X GPUs). AMD hit $10B quarterly revenue for the first time in early 2026, and Lisa Su reaffirmed the 35% revenue CAGR target.2

Thesis

  • EPYC server CPU momentum. Turin is achieving 7:1+ socket-consolidation ratios vs. aging Intel Cascade Lake fleets, and AMD expects the server CPU TAM to grow “strong double digits” in 2026. Reinforcement-learning workloads and vibe coding are driving unexpected CPU demand — AI labs are scrambling for CPU allocation, competing directly with cloud providers for commodity x86 servers. Intel, caught off guard, is raising Xeon prices even as AMD gains share.3
  • MI355X is competitive today on FP8 inference. On FP8 disaggregated inference with SGLang + MoRI, the MI355X matches Nvidia’s B200. AMD’s inference software doubled in throughput in under two months (Dec 2025 – Jan 2026) and delivers up to 10× the performance of the MI300X on disaggregated SGLang DeepSeek serving.1
  • MI450X/Helios is the rack-scale inflection. Engineering samples of the MI455X UALoE72 (72-GPU scale-up via UALoE, analogous to Nvidia’s NVL72) arrive in H2 2026; production tokens in Q2 2027. It would be AMD’s first system with an Nvidia-competitive scale-up domain, opening the high-end training segment currently gated by Nvidia’s NVLink.14
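
As a rough illustration of the consolidation math in the first bullet, the sketch below works through what a 7:1 socket-consolidation ratio implies for a fleet refresh. Only the 7:1 ratio comes from the sources; the fleet size and per-socket wattages are illustrative assumptions.

```python
# Back-of-envelope socket-consolidation math.
# Only the 7:1 ratio is sourced; every other input is an assumption.
old_sockets = 7_000            # assumed aging Cascade Lake fleet
consolidation_ratio = 7        # 7:1 per the Turin claim
old_watts_per_socket = 205     # assumed TDP of an older Xeon
new_watts_per_socket = 400     # assumed TDP of a dense Turin part

new_sockets = old_sockets // consolidation_ratio
old_power_kw = old_sockets * old_watts_per_socket / 1_000
new_power_kw = new_sockets * new_watts_per_socket / 1_000

print(f"sockets: {old_sockets} -> {new_sockets}")
print(f"power:   {old_power_kw:.0f} kW -> {new_power_kw:.0f} kW "
      f"({1 - new_power_kw / old_power_kw:.0%} lower)")
```

The takeaway: even if each new socket draws roughly twice the power, a 7:1 consolidation still cuts fleet power by well over half, which is the economic pull behind the refresh cycle independent of the AI-demand story.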

Risks

  • Software composability deficit. FP4 + disaggregated inference + wide expert parallelism — the combination all top labs run — performs poorly together on AMD: the MI355X “gets absolutely mogged” by the B200 in FP4 disagg scenarios, and AMD’s ATOM inference engine has zero production customers. SemiAnalysis puts AMD 6+ months behind Nvidia here.1
  • MI450 timing gap. The Helios inflection doesn’t arrive until Q4 2026, and investors don’t see revenue until early 2027 — a long wait in a market where Nvidia ships Rubin in 2026.2
  • No rack-scale system today. Nvidia’s GB200 NVL72 is deployed at rack scale now; the MI455X UALoE72 reaches engineering samples in H2 2026 and production in Q2 2027 — a multi-quarter lag in the highest-value training tier.14
  • China uncertainty. The Q4 2025 beat included ~$390M of MI308 sales to China, a line regulators could restrict. Strip that out and the beat looks different.2
  • N2 process exposure. The MI400 XCD tile moves to TSMC N2; the N2 yield ramp is still early and is a schedule risk if defect density doesn’t improve on plan.5

Recent catalysts

  • 2026-03-23 — Chipstrat: AMD still lacks competitive scale-up until MI450/Helios; disaggregation opens the market to AMD in theory, but the scale-up gap is the binding constraint.4
  • 2026-03-12 — Great AI Silicon Shortage: AMD MI350X on TSMC N3, MI400 XCD on N2; memory capacity +50% from MI350 to MI400.5
  • 2026-02-16 — InferenceX v2: MI355X competitive on FP8 disagg SGLang; composability fail on FP4+disagg+wideEP; ATOM engine zero customers; MI455X UALoE72 production tokens Q2 2027.1
  • 2026-02-09 — CPUs are Back: Venice CPU paired with MI455X in 2026 compute trays; AMD server CPU TAM growing strong double digits; Turin 7:1 socket consolidation ratios.3
  • 2026-02-06 — AMD hits $10B quarterly revenue; Lisa Su reaffirms 35% revenue CAGR; MI450 inflection not until Q4 2026; stock sold off ~15% on lack of near-term GPU catalyst.2

Second-order reads

  • 2026-02-09 — SemiAnalysis, CPUs are Back — RL/agentic workloads driving CPU demand; Intel raising prices as AMD gains share → negative for INTC, positive for AMD server revenue and for MU (server DRAM demand).
  • 2026-02-16 — SemiAnalysis, InferenceX v2 — AMD software improving rapidly but still behind; Marvell lost Trainium3 → negative for MRVL; AMD inference progress = potential incremental demand for CRDO and ALAB (PCIe/CXL AECs in AMD systems).

Valuation & positioning

Pending. No specific multiple target in corpus — gather from Irrational Analysis and FundaAI as earnings previews land.

Related tickers

  • NVDA — primary accelerator competitor; NVL72 is the scale-up bar AMD must match
  • INTC — x86 server CPU competitor; AMD gaining share on Intel weakness
  • AVGO — builds Google TPU (N3) and sits in the Trainium3 supply chain; AMD’s other competitive threat
  • MRVL — lost the Trainium3 socket to Alchip; MI450 interconnect ecosystem
  • CRDO, ALAB — PCIe/CXL AEC pull-through in AMD-based systems
  • MU — DRAM supplier; server CPU demand + HBM3E for MI series
  • TSM — foundry; N3 for MI350X/MI400 AID; N2 for XCD tile

Footnotes

  1. SemiAnalysis — InferenceX v2: NVIDIA Blackwell vs AMD vs Hopper — 2026-02-16

  2. Chipstrat — The MI450 Waiting Game — 2026-02-06

  3. SemiAnalysis — CPUs are Back: The Datacenter CPU Landscape in 2026 — 2026-02-09

  4. Chipstrat — The Multi-Silicon Era Is Here — 2026-03-23

  5. SemiAnalysis — The Great AI Silicon Shortage — 2026-03-12