MODEL
Gemma 4
Google DeepMind’s most capable open-weight model family, released April 2, 2026 under the Apache 2.0 license. Latest generation of the Gemma series, distinct from the proprietary Gemini model line.
Key Specs
- Four sizes: E2B, E4B, 26B MoE, and 31B Dense
- 31B Dense ranks #3 on Arena AI text leaderboard among open models
- All variants natively process video and images at variable resolutions
- E2B and E4B add native audio input for speech recognition
- Apache 2.0 license — fully permissive for commercial use
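
The sizes above line up with the community reports of single-GPU deployment at 4-bit quantization. A minimal sketch of the back-of-the-envelope VRAM arithmetic (the 2 GB overhead allowance for KV cache and activations is an illustrative assumption, not a measured figure):

```python
# Rough VRAM estimate for serving a dense model at a given weight precision.
# All figures are back-of-the-envelope assumptions, not benchmarks.

def vram_estimate_gb(n_params_b: float, bits_per_weight: float,
                     overhead_gb: float = 2.0) -> float:
    """Weight memory plus a flat allowance for KV cache and activations."""
    weight_gb = n_params_b * 1e9 * bits_per_weight / 8 / 1e9
    return weight_gb + overhead_gb

# A 31B dense model at 4 bits: ~15.5 GB of weights plus overhead,
# which leaves headroom on a 24 GB RTX 4090.
print(round(vram_estimate_gb(31, 4), 1))  # 17.5
```

The same arithmetic explains why the 31B model does not fit on a 24 GB card at 16-bit precision (~62 GB of weights alone).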
Timeline
- 2026-04-04-AI-Digest — Google DeepMind launches Gemma 4 under Apache 2.0 in four sizes; 31B Dense ranks #3 among open models on the Arena AI leaderboard. Community benchmarks on r/LocalLLaMA show performance competitive with Qwen 3.6 and Llama 4 at similar parameter counts; successful local deployment via Ollama on consumer hardware is confirmed.
- 2026-04-05-AI-Digest — Apache 2.0 licensing confirmed as a strategic shift away from restricted-use terms; 400M+ cumulative downloads across generations; community reports successful LoRA fine-tuning on consumer hardware (RTX 4090 for the 26B variant); benchmarked against Llama 4 and Qwen 3.6.
- 2026-04-06-AI-Digest — Android AICore Developer Preview announced; Apache 2.0 driving adoption past 400M downloads; #3 on Arena AI leaderboard.
- 2026-04-07-AI-Digest — Gemma 4 referenced in community discussions of the open-model landscape alongside DeepSeek V4.
- 2026-04-07-AI-Digest — Gemma 4 mentioned in community context as researchers design peer preservation replication experiments.
- 2026-04-09-AI-Digest — With Llama effectively retired from frontier open-weights competition after Meta’s Muse Spark closed-source pivot, r/LocalLLaMA threads now treat Gemma 4 31B and Qwen 3.5 as the new top of the open-weights stack. Gemma 4 31B wins on multimodal, long-context retention, multilingual, and structured output (JSON/markdown tables); Qwen 3.5 still wins on coding and tool calling with hybrid thinking mode. Both fit on a 24 GB RTX 4090 at 4-bit quantization.
- 2026-04-12-AI-Digest — One week post-launch, r/LocalLLaMA sentiment consolidates around Gemma 4 31B Dense as the strongest all-around open model. Quantized 31B vs. Qwen 3.5 comparisons on a single RTX 4090: Gemma wins multimodal, structured output, long-context; Qwen retains the coding/tool-calling edge. Community practical consensus: run both with a router.
- 2026-04-13-AI-Digest — Gemma 4’s Apache 2.0 licensing highlighted as the key differentiator changing the open-model calculus; 31B Dense outperforms Llama 4 on AIME 2026 Math (89.2% vs 88.3%), LiveCodeBench v6 (80.0% vs 77.1%), and GPQA Diamond (84.3% vs 82.3%), delivering a model that’s both technically competitive with 20x-larger models and legally frictionless for enterprise deployment.
- 2026-04-14-AI-Digest — Gemma 4 remains a top r/LocalLLaMA community default alongside Qwen 3 Coder and Llama 4 in the April Hugging Face momentum tracker. Community consensus has stabilized into a Llama Stack + Gemma 4 + Qwen 3 Coder + DeepSeek V3 workflow across practical open-weights deployments.
- 2026-04-15-AI-Digest — Gemma 4 continues to be referenced alongside Qwen 3.5, Llama 5, and imminent DeepSeek V4 in r/LocalLLaMA’s open-weights stack; Stanford AI Index places Chinese labs within 1.70% of top-US-model performance, putting Gemma 4’s Apache 2.0 positioning in a more competitive open-weights field.
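
The LoRA fine-tuning reports in the timeline (26B variant on a single RTX 4090) can be sanity-checked with parameter arithmetic: LoRA trains only low-rank adapter pairs, so the trainable fraction is tiny relative to the frozen base model. The hidden size, layer count, adapter placement, and rank below are illustrative assumptions, not the actual Gemma 4 architecture:

```python
# Back-of-the-envelope: why LoRA fine-tuning of a ~26B model is feasible
# on a single consumer GPU. All shapes are assumed for illustration.

def lora_params(d_model: int, rank: int) -> int:
    """Trainable parameters for one LoRA adapter pair (A: r x d, B: d x r)."""
    return 2 * d_model * rank

d_model = 5120          # assumed hidden size
n_layers = 60           # assumed layer count
adapters_per_layer = 4  # assumed placement, e.g. q/k/v/o projections
rank = 16               # typical LoRA rank

trainable = n_layers * adapters_per_layer * lora_params(d_model, rank)
print(trainable)  # 39321600 — tens of millions vs billions of frozen weights
```

Under these assumptions the trainable adapters are roughly 0.15% of a 26B base model, which is why gradient and optimizer state fits alongside quantized frozen weights on a 24 GB card.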
Context
Gemma 4 enters a six-way open-weight competition among Google, Alibaba (Qwen), Meta (Llama), Mistral, OpenAI (gpt-oss-120b), and Zhipu AI (GLM). See also: Qwen, Gemini, MOC - Open Source Models.
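
The "run both with a router" consensus from the timeline can be sketched as a trivial dispatcher: coding and tool-calling prompts go to Qwen, everything else to Gemma. The model names and the keyword heuristic below are illustrative assumptions (a real setup might route on classifier output or structured task tags):

```python
# Minimal sketch of the community "run both with a router" pattern:
# send coding/tool-calling prompts to Qwen, everything else to Gemma.
# Model tags and the keyword heuristic are assumptions for illustration.

CODING_HINTS = ("code", "function", "debug", "compile", "tool call", "api")

def route(prompt: str) -> str:
    """Pick a local model tag based on a crude task heuristic."""
    text = prompt.lower()
    if any(hint in text for hint in CODING_HINTS):
        return "qwen3.5"      # community pick for coding / tool calling
    return "gemma4:31b"       # community pick for multimodal / structured output

print(route("Write a Python function to parse JSON"))  # qwen3.5
print(route("Summarize this meeting transcript"))      # gemma4:31b
```

In practice the returned tag would be passed to a local serving endpoint (e.g. an Ollama-style API); the point of the sketch is that routing logic can stay this small when only two models cover complementary strengths.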