META — Meta Platforms

TL;DR

Meta’s investment thesis in the corpus is less about its core ads business (not covered) and more about its silicon strategy: MTIA (Meta Training and Inference Accelerator) is a chiplet-based, inference-only custom ASIC roadmap running on a rapid ~6-month cadence, with four chips (300/400/450/500) shipped or planned within ~two years. The training chip was scrapped; training stays on Nvidia/AMD. The ROIC logic is straightforward: recommendations and GenAI inference serve $150B+ of annual ad revenue at scale, and every dollar of custom silicon that displaces merchant GPU spend drops to margin. The wild card from the corpus: Meta is reportedly evaluating Google TPUs for training Llama models if Google can offer better price/performance, a signal that even custom-silicon companies stay pragmatic.1 2

Business

Social media and advertising platform (Facebook, Instagram, WhatsApp, Threads). Primary revenue: digital advertising — highly targeted, closed-loop attribution on a ~3.3B daily active user base. AI investment: heavy capex on training (Nvidia/AMD GPUs) and inference (custom MTIA ASICs, Nvidia/AMD mix). Reality Labs (VR/AR) is a sustained capex drag. Meta is a hyperscaler-scale infrastructure operator building its own silicon for inference workloads alongside merchant chips.1 2

Thesis

  • MTIA: chiplet-based inference ASIC on a rapid cadence. Four MTIA generations (300/400/450/500) shipped or planned within ~two years, enabled by chiplets. MTIA 450 and 500 are optimized for GenAI inference first. The cadence — roughly 6-month intervals — is faster than traditional ASIC development cycles and made possible by chiplet reuse. Meta acquired Rivos (a RISC-V AI startup, Sep 2025), which had helped design MTIA 1i/2i, bringing 100+ engineers in-house.1
  • Scrapping training chip was rational. Meta publicly scrapped its custom training chip (addressed on Broadcom’s earnings call). Irrational Analysis frames this as “a nothing burger” — training compute is commoditizing across Nvidia and AMD; inference is where Meta has workload-specific advantages (recommendation algorithms, GenAI serving at scale). Custom silicon for inference = targeted capex with clear ROIC.1
  • Multi-vendor inference stack. Meta’s Andromeda ads retrieval engine runs on Nvidia, AMD, and MTIA simultaneously — a confirmed multi-vendor stack that gives Meta optionality and bargaining power across silicon vendors while MTIA scales up.
  • CUDA-compatible software. MTIA runs a PyTorch/Triton/vLLM-compatible stack with a claimed CUDA-compatible API layer, reducing migration friction.
  • TPU optionality as leverage. Citrini (Nov 2025): “META reportedly wants TPUs for training, not just inference.” If Google offers TPUs at better price/performance for training Llama models, Meta is “rational enough to use them.” This creates a bidding dynamic that pressures Nvidia pricing for Meta’s workloads.2
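The price/performance bidding dynamic above can be sketched with back-of-envelope arithmetic. Everything in the snippet is a hypothetical placeholder — the per-query prices, query volumes, and function names are illustrative assumptions, not figures from the corpus. The point is only that small per-query price deltas compound into large annual dollar swings at hyperscaler volume, which is why even a custom-silicon owner keeps soliciting outside bids.

```python
# Illustrative only: all prices and volumes below are hypothetical
# placeholders, not corpus figures.

def annual_inference_cost(queries_per_day: float, cost_per_1k_queries: float) -> float:
    """Annualized serving cost in dollars for a given per-1k-query price."""
    return queries_per_day * 365 * cost_per_1k_queries / 1_000

# Hypothetical volume: 10B ad-ranking/GenAI queries per day.
QUERIES_PER_DAY = 10e9

merchant_gpu = annual_inference_cost(QUERIES_PER_DAY, cost_per_1k_queries=0.20)
custom_mtia  = annual_inference_cost(QUERIES_PER_DAY, cost_per_1k_queries=0.12)
savings = merchant_gpu - custom_mtia

print(f"merchant GPU: ${merchant_gpu / 1e9:.2f}B/yr")  # → $0.73B/yr
print(f"MTIA:         ${custom_mtia / 1e9:.2f}B/yr")   # → $0.44B/yr
print(f"savings:      ${savings / 1e9:.2f}B/yr")       # → $0.29B/yr
```

With these made-up inputs, a $0.08 gap per thousand queries swings roughly $0.3B per year — the same mechanism that makes a credible TPU bid valuable to Meta even if it never deploys one.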

Risks

  • Core ads business not covered in corpus. The primary revenue stream (digital advertising) is not analyzed in the research corpus used here. All dossier content reflects the silicon/AI infrastructure angle.
  • Reality Labs capex drag. VR/AR losses are material and ongoing; no corpus content on trajectory.
  • Merchant silicon dependency for training. Meta’s training capability remains entirely dependent on Nvidia (H100/H200/B200) and AMD (MI300X/MI350X). Any Nvidia supply constraints or AI training compute shortages affect Meta first.
  • TPU evaluation signals training compute dissatisfaction. If Meta is exploring Google TPUs for Llama training, it implies current Nvidia/AMD solutions are not competitive enough on price/performance for large-scale model training — a sign of compute cost pressure, not just strategic optionality.2

Recent catalysts

  • 2026-04-13 — FundaAI weekly: Meta’s Muse Spark model “re-entered the top tier” in multimodal/reasoning benchmarks. Meta framed as entering “a new acceleration phase (org reset, stepped-up capex, next-gen models in flight).”3
  • 2026-03-12 — Chipstrat, Meta’s MTIA Roadmap: MTIA 300/400/450/500 cadence disclosed; chiplet architecture, Rivos acquisition (Sep 2025) detailed; Andromeda multi-vendor stack confirmed; training chip cancellation framed as rational.1
  • 2025-11-27 — Citrini Research, Carving Up the TPU: Meta reportedly evaluating Google TPUs for Llama training; frames Meta as a potential TPU customer willing to defect from Nvidia for better economics.2

Second-order reads

  • 2026-03-12 — Chipstrat, Meta’s MTIA Roadmap — MTIA inference ASICs reduce Meta’s share of Nvidia merchant GPU spend → negative for NVDA at the margin at Meta; positive for AVGO if MTIA uses Broadcom-designed elements (not confirmed in corpus).
  • 2025-11-27 — Citrini Research, Carving Up the TPU — Meta’s TPU interest is the same dynamic (hyperscalers building/sourcing non-Nvidia silicon) covered in NVDA and AMD dossiers; if Google can externalize TPU capacity to Meta for Llama training, it validates Google’s silicon-as-a-service model.

Valuation & positioning

Pending. Core advertising and earnings metrics not covered in the corpus. The corpus frames Meta’s silicon strategy as ROIC-positive (inference margin expansion) with a training-chip mistake avoided — but provides no multiple framework.

Sources

  • NVDA — primary training GPU supplier; MTIA inference ramp reduces Meta’s Nvidia dependence at the margin
  • AMD — MI300X/MI350X in Meta’s multi-vendor inference stack alongside MTIA
  • AVGO — custom ASIC design partner (not confirmed in corpus for MTIA specifically)
  • APP — AppLovin; competing ads platform and direct benchmark for AI-driven ad efficiency

Footnotes

  1. Chipstrat — Meta’s MTIA Roadmap — 2026-03-12

  2. Citrini Research — Carving Up the TPU — 2025-11-27

  3. FundaAI — Weekly: Collyer Bridge Joins FundaAI — 2026-04-13