Gemma 4 Playground: Dual-Model Demo on ZeroGPU
We just launched a Gemma 4 Playground that lets you chat with Google DeepMind's latest open models, directly on Hugging Face Spaces with ZeroGPU.
Try it now: FINAL-Bench/Gemma-4-Multi
Two Models, One Space
Switch between both Gemma 4 variants in a single interface:
Gemma 4 26B-A4B: MoE with 128 experts, only 3.8B active params. 95% of the 31B's quality at ~8x faster inference. AIME 88.3%, GPQA 82.3%.
Gemma 4 31B: dense 30.7B. Best quality in the Gemma 4 family. AIME 89.2%, GPQA 84.3%, Codeforces 2150. Top-3 open model on Arena.
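Under the hood, the variant switch boils down to mapping the dropdown choice to a Hub repo id. A minimal sketch, assuming the two repo ids listed below in Links; `MODEL_REGISTRY` and `pick_model` are illustrative names, not the Space's actual code:

```python
# Hypothetical sketch of the dual-model switch: a registry mapping the
# UI dropdown label to the corresponding Hub repo id.
MODEL_REGISTRY = {
    "Gemma 4 26B-A4B": "google/gemma-4-26B-A4B-it",  # MoE, 3.8B active params
    "Gemma 4 31B": "google/gemma-4-31B-it",          # dense 30.7B
}

def pick_model(label: str) -> str:
    """Resolve a dropdown choice to its Hub repo id."""
    if label not in MODEL_REGISTRY:
        raise ValueError(f"unknown Gemma 4 variant: {label!r}")
    return MODEL_REGISTRY[label]
```

Keeping the mapping in one place makes it trivial to add a third variant later without touching the UI code.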
Features
Vision: upload images for analysis, OCR, chart reading, and document parsing
Thinking Mode: toggle chain-of-thought reasoning with Gemma 4's native <|channel> thinking tokens
System Prompts: 6 presets (General, Code, Math, Creative, Translate, Research), or write your own
Streaming: real-time token-by-token responses via ZeroGPU
Apache 2.0: fully open, no restrictions
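The streaming feature follows the standard Gradio generator pattern: the chat function yields a progressively longer string, one chunk at a time. A minimal sketch with a stand-in token source in place of the real model stream:

```python
def stream_reply(token_source):
    """Yield the partial response after each new token arrives.

    In the real Space, token_source would be a transformers
    TextIteratorStreamer fed by model.generate on ZeroGPU; any
    iterable of string chunks demonstrates the same loop.
    """
    text = ""
    for tok in token_source:
        text += tok
        yield text  # Gradio re-renders the chat bubble on every yield
```

Each yielded value replaces the previous one in the UI, which is what makes the reply appear to type itself out in real time.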
Technical Details
Built with the dev build of transformers (5.5.0.dev0) for full Gemma 4 support, including multimodal apply_chat_template, variable-resolution image processing, and native thinking mode. Runs on HF ZeroGPU with @spaces.GPU, so no dedicated GPU is needed.
Both models support a 256K context window and 140+ languages out of the box.
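For the multimodal path, transformers' apply_chat_template takes messages whose content is a list of typed parts (image and text). A sketch of how a vision turn could be assembled; `build_messages` is an illustrative helper, not the Space's actual code:

```python
def build_messages(user_text, system_prompt="You are a helpful assistant.",
                   image_url=None):
    """Assemble a chat in the typed-content schema used by transformers'
    multimodal apply_chat_template. Illustrative helper only."""
    content = []
    if image_url is not None:
        content.append({"type": "image", "url": image_url})
    content.append({"type": "text", "text": user_text})
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": content},
    ]
```

The resulting list is passed straight to `processor.apply_chat_template(...)`, which handles image loading and resolution on the model's behalf.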
Links
- Space: [FINAL-Bench/Gemma-4-Multi](FINAL-Bench/Gemma-4-Multi)
- Gemma 4 26B-A4B: [google/gemma-4-26B-A4B-it](google/gemma-4-26B-A4B-it)
- Gemma 4 31B: [google/gemma-4-31B-it](google/gemma-4-31B-it)
- DeepMind Blog: [Gemma 4 Launch](https://deepmind.google/blog/gemma-4-byte-for-byte-the-most-capable-open-models/)