COMPANY

DeepSeek

company topic-note

Overview

DeepSeek is a Chinese AI research lab known for producing frontier-class models at extremely low training costs. The company has established a pattern of cost-efficient large-scale model development and is notable for being among the first to deploy frontier AI on non-NVIDIA hardware, specifically Huawei’s Ascend chips.

Timeline

  • 2026-04-10-AI-Digest — DeepSeek V4 enters final pre-release validation as a 1 trillion parameter mixture-of-experts model with ~37B active parameters per response, handling text, image, and video natively. Reuters confirms it will be the first frontier AI model trained and deployed on Huawei Ascend 950PR chips. DeepSeek introduces “Fast Mode” and “Expert Mode” product tiers, formalizing a paid service for the first time. Estimated training cost of ~$5.2M. Release expected in the last two weeks of April 2026.

Key Developments

  1. Extreme Cost Efficiency: DeepSeek’s estimated ~$5.2M training cost for a 1T-parameter frontier model is among the cheapest ever reported, consistent with the lab’s established pattern of doing more with less.

  2. Huawei Silicon Deployment: V4 as the first frontier model on Huawei Ascend 950PR chips represents a geopolitically significant proof point — if competitive, it demonstrates that US export controls on NVIDIA have shifted the supply chain rather than blocked Chinese frontier AI development.

  3. Business Model Evolution: The introduction of paid “Fast Mode” and “Expert Mode” tiers marks DeepSeek’s transition from a fully free research lab to a commercial entity, likely driven by rising inference costs.

  4. First Outside Capital Round (April 2026): The $300M raise at a $10B+ valuation — DeepSeek’s first ever external fundraise — is a structural concession that frontier training/talent costs have outgrown High-Flyer’s solo backing. It also sets the commercial baseline for Chinese open-weights labs more broadly: at current economics, even cost-efficient frontier labs need external capital to keep training.
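The cost-efficiency point above rests on the mixture-of-experts design: with ~1T total parameters but only ~37B active per forward pass, a router fires a small subset of experts per token, so per-token compute tracks active parameters rather than total model size. A minimal back-of-the-envelope sketch (all numbers are illustrative, taken from the reported specs, using the common rough rule of ~2 FLOPs per parameter per token for a forward pass):

```python
# Illustrative sketch: why a 1T-parameter MoE can be cheap to serve.
# In a mixture-of-experts layer, the router activates only the top-k
# experts per token, so per-token FLOPs scale with *active* parameters,
# not the total parameter count. Figures below are the reported specs.

TOTAL_PARAMS = 1_000e9   # ~1T total parameters (reported)
ACTIVE_PARAMS = 37e9     # ~37B active per forward pass (reported)

# Rough transformer rule of thumb: ~2 FLOPs per parameter per token
# (one multiply-accumulate) for a forward pass.
flops_per_token_dense = 2 * TOTAL_PARAMS   # if every parameter fired
flops_per_token_moe = 2 * ACTIVE_PARAMS    # only routed experts fire

ratio = flops_per_token_dense / flops_per_token_moe
print(f"MoE forward pass is ~{ratio:.0f}x cheaper per token "
      f"than a dense model of the same total size")
```

Under these assumptions the per-token compute gap is roughly 27x, which is the arithmetic behind a frontier-scale model being trainable and servable at a fraction of dense-model cost.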

Timeline (continued)

  • 2026-04-11-AI-Digest — DeepSeek formally launches “Fast Mode” and “Expert Mode” in the chat service, formalizing its first paid product tier as part of V4 launch preparation. V4 remains in final pre-release validation on Huawei Ascend 950PR chips, with release expected in the last two weeks of April. The r/LocalLLaMA community runs speculative performance comparisons against Gemma 4 and Qwen 3.5 at the 37B active-parameter tier.
  • 2026-04-12-AI-Digest — DeepSeek V4 nears late-April launch with 1M-token context window powered by “Engram” conditional memory system; test interface reveals three product tiers (Fast, Expert, Vision). DeepSeek reportedly gave Huawei exclusive early hardware access while denying NVIDIA early access — a deliberate geopolitical signal.
  • 2026-04-13-AI-Digest — DeepSeek V4 pre-release tracking continues on r/LocalLLaMA; debate centers on whether 37B active parameters on Huawei Ascend 950PR can match NVIDIA-optimized inference latency. Community skeptics cite historical Ascend throughput issues while optimists note the $5.2M training cost makes V4 the most cost-efficient frontier model ever trained.
  • 2026-04-14-AI-Digest — Final-stretch V4 speculation dominates r/LocalLLaMA: prevailing specs are ~1T total / 32–37B active MoE, 1M-token context, and a tiered Fast/Expert/Vision product surface with Expert likely the first paid SKU. Bulk orders placed by Alibaba, ByteDance, and Tencent pushed Ascend 950PR spot prices up ~20% in a matter of weeks — the community reads this as a leading indicator of launch imminence. Launch window: last two weeks of April.
  • 2026-04-15-AI-Digest — Founder Liang Wenfeng reconfirms late-April V4 window via internal communication; Reuters’ April 4 report that V4 runs on Huawei Ascend 950PR silicon continues to hold. Community logistics debate shifts from speculation to which quantizations will drop day-one (Q4_K_M and Q8_0 likely) and whether Huawei’s Ascend inference stack will be open-sourced alongside the weights. The strategic framing: if V4 hits 80% of Claude Opus 4.6 / GPT-5.4 at competitive latency on Ascend, the “export controls as capability cap” premise of US policy collapses.
  • 2026-04-16-AI-Digest — Final-stretch V4 watch continues on r/LocalLLaMA: consensus on 1T total / 32–37B active MoE, 1M-token context, Fast/Expert/Vision tiers with Expert as the first paid SKU. Alibaba/ByteDance/Tencent bulk Ascend 950PR orders and the ~20% spot-price jump remain the most credible leading indicator of launch imminence. Open questions unchanged: Ascend inference latency parity with NVIDIA, and whether 1M context is a real deployment spec or marketing. V4 would be the first frontier model released with explicit product-tier price discrimination built into launch day.
  • 2026-04-18-AI-Digest — DeepSeek opens to outside capital for the first time — in talks to raise at least $300M at a $10B+ valuation (per The Information, reported April 17), its first external fundraise since founding. Until now fully funded by High-Flyer Capital Management, DeepSeek had publicly rejected outside investors through 2024–2025. Domestic Chinese investors are the most likely participants; US venture firms face regulatory pressure and national-security review risk that effectively bars meaningful participation. The $10B valuation sits an order of magnitude below Anthropic/OpenAI/Cursor-class pricing and reflects DeepSeek’s deliberate under-pricing more than a market constraint. Strategic read: a concession that frontier-training compute and talent costs have moved past what High-Flyer alone can sustain — the clearest sign yet that the “you don’t need $10B to build a frontier model” narrative DeepSeek embodied in early 2025 has reverted closer to the cohort median. Lands two days after Stanford’s 2026 AI Index report showed the Arena-leaderboard US-China gap down to 2.7 points.