COMPANY
Amazon
Overview
Amazon — through Amazon Web Services (AWS) — is one of the three hyperscalers driving the custom-silicon shift in AI compute. AWS designs its own ARM-based general-purpose CPUs (the Graviton line) and its own AI training and inference accelerators (the Trainium and Inferentia lines). As of 2026, AWS positions Graviton4 and Trainium3 as production-ready alternatives to NVIDIA's merchant GPUs for the largest enterprise AI workloads.
Timeline
- 2026-04-08-AI-Digest — AWS is included as a launch partner in Anthropic’s Project Glasswing security-research consortium, which grants partners restricted access to Claude Mythos Preview.
- 2026-04-09-AI-Digest — Amazon announces that Uber is expanding its AWS contract to migrate Trip Serving Zones to AWS Graviton4 and to begin a pilot training AI models on AWS Trainium3 (see the Graviton sketch after this timeline). Uber joins Anthropic, OpenAI, and Apple as anchor customers AWS cites for its custom-chip lineup. The deal is treated as one of the strongest enterprise validations to date that AWS custom silicon can handle latency-critical and training-grade AI workloads at scale.
- 2026-04-10-AI-Digest — CEO Andy Jassy discloses in the Q1 2026 shareholder letter that AWS’s AI revenue run rate has crossed $15B (~10% of AWS’s $142B total run rate) and that the custom-chip portfolio (Graviton, Trainium, Nitro) exceeds a $20B annual run rate. Jassy defends the projected $200B in 2026 capex as “not investing on a hunch.” The $15B figure is the clearest signal yet that hyperscaler AI spending is translating into measurable top-line growth.
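
As context for what a Graviton4 migration looks like at the API level, here is a minimal sketch using boto3. It assumes default credentials, the us-east-1 region, and the r8g instance family (a Graviton4-backed family available today; the exact instance families in the Uber deal are not disclosed). Operationally, moving a service to Graviton is largely a matter of selecting arm64 instance types and shipping arm64 builds of the workload.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Page through all current-generation arm64 (Graviton) instance types
# offered in the region.
paginator = ec2.get_paginator("describe_instance_types")
arm64_types = []
for page in paginator.paginate(
    Filters=[
        {"Name": "processor-info.supported-architecture", "Values": ["arm64"]},
        {"Name": "current-generation", "Values": ["true"]},
    ]
):
    arm64_types += [t["InstanceType"] for t in page["InstanceTypes"]]

# Narrow to the (assumed) Graviton4-backed r8g family as migration targets.
graviton4_candidates = sorted(t for t in arm64_types if t.startswith("r8g."))
print(graviton4_candidates)
```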
Key Developments
- Graviton4 in Production: Uber’s migration of latency-critical rider–driver matching onto Graviton4 demonstrates the maturity of AWS’s ARM-based general-purpose CPUs for production AI infrastructure.
- Trainium3 Training Pilots: AWS’s third-generation training accelerator is now seeing pilot adoption from major enterprises (Uber) for training AI models — a workload class where NVIDIA has historically faced little credible competition. (A minimal training sketch follows this list.)
- Anchor Customer Roster: AWS now publicly cites Anthropic, OpenAI, Apple, and Uber as anchor customers for its custom AI silicon — a roster that materially changes the “everyone uses H100s” narrative of the 2024–2025 era.
- Project Glasswing Security Partner: AWS is also a launch partner in Anthropic’s gated Claude Mythos Preview security-research program, reflecting the breadth of the AWS–Anthropic relationship across compute, security, and platform integration.
- $15B AI Revenue Milestone: The Q1 2026 disclosure of a $15B AI revenue run rate makes AWS the most quantified proof point that hyperscaler AI capex is generating real top-line return, not just infrastructure burn.
- $20B Custom-Chip Revenue: With custom-silicon portfolio revenue exceeding $20B, Amazon is positioned as the largest vertically integrated chip-to-cloud AI provider by revenue.
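
As a rough illustration of what a Trainium training pilot involves in code, the sketch below uses the AWS Neuron SDK's PyTorch/XLA integration, which is how Trn1 and Trn2 instances are programmed today; we assume Trainium3 keeps the same programming model. The model, batch shapes, and hyperparameters are purely illustrative.

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # shipped with the Neuron SDK's torch-xla build

device = xm.xla_device()  # resolves to a NeuronCore on a Trainium (trn*) instance

# Illustrative model: a tiny MLP classifier; real pilots train far larger models.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):
    x = torch.randn(32, 512).to(device)        # synthetic batch
    y = torch.randint(0, 10, (32,)).to(device)  # synthetic labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    xm.optimizer_step(optimizer)  # runs optimizer.step() through the XLA runtime
    print(step, loss.item())     # .item() forces the lazy XLA graph to execute
```

The key difference from an equivalent GPU script is that tensors live on an XLA device and each step is closed via xm.optimizer_step, which hands the traced graph to the Neuron compiler rather than launching CUDA kernels.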
Related
See also: Anthropic, Uber, NVIDIA, Apple, OpenAI, Broadcom, Google, MOC - AI Infrastructure, MOC - Major Companies.