FINAL_Bench


AI & ML interests: none defined yet.

Recent Activity

SeaWolf-AI updated a Space about 9 hours ago
FINAL-Bench/Gemma-4-Multi
SeaWolf-AI new activity about 20 hours ago
FINAL-Bench/Darwin-35B-A3B-Opus: "I get error"
SeaWolf-AI updated a Space about 20 hours ago
FINAL-Bench/Darwin-35B-A3B-Opus

Articles

SeaWolf-AI posted an update about 7 hours ago
💎 Gemma 4 Playground — Dual Model Demo on ZeroGPU

We just launched a Gemma 4 Playground that lets you chat with Google DeepMind's latest open models — directly on Hugging Face Spaces with ZeroGPU.


👉 Try it now: FINAL-Bench/Gemma-4-Multi
Two Models, One Space
Switch between both Gemma 4 variants in a single interface:

⚡ Gemma 4 26B-A4B — MoE with 128 experts, only 3.8B active params. 95% of the 31B's quality at ~8x faster inference. AIME 88.3%, GPQA 82.3%.
🏆 Gemma 4 31B — Dense 30.7B params. Best quality in the Gemma 4 family. AIME 89.2%, GPQA 84.3%, Codeforces 2150. Top-3 open model on Arena.

Features

Vision — Upload images for analysis, OCR, chart reading, document parsing
Thinking Mode — Toggle chain-of-thought reasoning with Gemma 4's native <|channel> thinking tokens
System Prompts — 6 presets (General, Code, Math, Creative, Translate, Research) or write your own
Streaming — Real-time token-by-token responses via ZeroGPU
Apache 2.0 — Fully open, no restrictions
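The Thinking Mode toggle boils down to separating the model's reasoning segment from its final answer before display. A minimal sketch of that split (the `<think>`/`</think>` delimiters below are illustrative placeholders, not Gemma 4's actual token names):

```python
import re

# Hypothetical delimiters standing in for the model's real thinking tokens.
THINK_OPEN = "<think>"
THINK_CLOSE = "</think>"

def split_thinking(text: str):
    """Separate the chain-of-thought segment from the final answer.

    Returns (thinking, answer); thinking is "" when the model
    emitted no reasoning block.
    """
    pattern = re.escape(THINK_OPEN) + r"(.*?)" + re.escape(THINK_CLOSE)
    match = re.search(pattern, text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    thinking = match.group(1).strip()
    answer = (text[:match.start()] + text[match.end():]).strip()
    return thinking, answer
```

With the toggle off, the UI would show only the second element of the tuple; with it on, both segments can be rendered.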

Technical Details
Built with the dev build of transformers (5.5.0.dev0) for full Gemma 4 support, including multimodal apply_chat_template, variable-resolution image processing, and native thinking mode. Runs on HF ZeroGPU with @spaces.GPU — no dedicated GPU needed.
Both models support a 256K context window and 140+ languages out of the box.
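As a rough sketch of how a Space like this can wire text and images into the multimodal chat template: the helper below builds the standard transformers chat-message payload, and the commented stub shows the ZeroGPU pattern the post describes. The function and variable names are illustrative, not the Space's actual code.

```python
from typing import Optional

def build_messages(system_prompt: str, user_text: str,
                   image_url: Optional[str] = None):
    """Assemble a chat in the format expected by apply_chat_template."""
    content = []
    if image_url is not None:
        # Image parts come first so the model sees the picture before the question.
        content.append({"type": "image", "url": image_url})
    content.append({"type": "text", "text": user_text})
    return [
        {"role": "system", "content": [{"type": "text", "text": system_prompt}]},
        {"role": "user", "content": content},
    ]

# In the actual Space this payload would be consumed roughly as:
#
#   import spaces
#
#   @spaces.GPU            # ZeroGPU allocates a GPU only for this call
#   def generate(messages):
#       inputs = processor.apply_chat_template(
#           messages, add_generation_prompt=True,
#           tokenize=True, return_dict=True, return_tensors="pt",
#       )
#       return model.generate(**inputs, max_new_tokens=512)
```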

Links

- 🤗 Space: [FINAL-Bench/Gemma-4-Multi](FINAL-Bench/Gemma-4-Multi)
- 📄 Gemma 4 26B-A4B: [google/gemma-4-26B-A4B-it](google/gemma-4-26B-A4B-it)
- 📄 Gemma 4 31B: [google/gemma-4-31B-it](google/gemma-4-31B-it)
- 🔬 DeepMind Blog: [Gemma 4 Launch](https://deepmind.google/blog/gemma-4-byte-for-byte-the-most-capable-open-models/)
SeaWolf-AI in FINAL-Bench/Darwin-35B-A3B-Opus about 20 hours ago

I get error

👍 1
#1 opened about 22 hours ago by Yuma42
SeaWolf-AI in FINAL-Bench/Darwin-35B-A3B-Opus about 20 hours ago

gguf version

🔥 1
#1 opened 2 days ago by eramax
SeaWolf-AI updated a Space 2 days ago
SeaWolf-AI published a Space 2 days ago
SeaWolf-AI posted an update 3 days ago
🧬 Darwin-35B-A3B-Opus — The Child That Surpassed Both Parents

What if a merged model could beat both its parents? We proved it can.
Darwin-35B-A3B-Opus is a 35B MoE model (3B active) built with our Darwin V5 engine — the first evolution system that CT-scans parent models before merging them.
🤗 Model: FINAL-Bench/Darwin-35B-A3B-Opus

The result speaks for itself: GPQA Diamond 90.0%, versus Father (Qwen3.5-35B-A3B) at 84.2% and Mother (Claude 4.6 Opus Distilled) at 85.0% — a relative gain of 6.9% over Father and 5.9% over Mother. Not a tradeoff, a genuine leap. Meanwhile, MMMLU sits at 85.0% (Father: 85.2%), multimodal is fully intact, and all 201 languages are preserved.

How? Model MRI changed everything. Traditional merging is guesswork. Darwin V4 added evolution. Darwin V5 added X-ray vision. Model MRI scans each parent layer by layer and discovers: Mother's L34–L38 is the reasoning engine (peak cosine distance), 50–65% of Mother's experts are dead (killed by text-only distillation), and Father is a healthy generalist with every expert alive. The prescription: transplant Mother's reasoning brain at L38 (90% weight), replace her dead experts with Father's living ones, and let Father's router handle the output layer. Reasoning went up. Versatility stayed intact. No tradeoff — just evolution.
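Darwin V5's full algorithm is unpublished until the paper, but the two measurements the paragraph describes (per-layer cosine distance between parents, and dead-expert detection) can be sketched in a few lines. Everything below is an illustrative reconstruction, not VIDRAFT's code, and the dead-expert threshold is made up:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two flattened weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def scan_layers(parent_a, parent_b):
    """Per-layer divergence profile: peaks mark specialized regions,
    like the reasoning-engine layers described in the post."""
    return [cosine_distance(a, b) for a, b in zip(parent_a, parent_b)]

def dead_experts(expert_weights, threshold=1e-3):
    """Indices of experts whose weight norm has collapsed
    (e.g. atrophied under text-only distillation)."""
    return [i for i, w in enumerate(expert_weights)
            if math.sqrt(sum(x * x for x in w)) < threshold]
```

A real scan would run over tensors per layer (attention, MLP, router) rather than toy lists, and would likely use activation statistics as well as raw norms, but the shape of the diagnosis is the same: find where the parents diverge, and find which experts no longer contribute.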

35B total, 3B active (MoE) · GPQA Diamond 90.0% · MMMLU 85.0% (201 languages) · Multimodal Image & Video · 262K native context · 147.8 tok/s on H100 · Runs on a single RTX 4090 (Q4) · Apache 2.0
Darwin V5's full algorithm and technical details will be released alongside an upcoming paper.

🚀 Live Demo: FINAL-Bench/Darwin-35B-A3B-Opus

🏆 FINAL Bench Leaderboard: FINAL-Bench/Leaderboard

📊 ALL Bench Leaderboard: FINAL-Bench/all-bench-leaderboard

Built by VIDRAFT · Supported by the Korean Government GPU Support Program
• 8 replies
SeaWolf-AI published an article 3 days ago
"The Child That Surpassed Both Parents Through MRI-Guided Evolutionary Merge"
SeaWolf-AI posted an update 4 days ago
🌍 World Model Bench — does your world model actually think?

FID measures realism. FVD measures smoothness. But neither tells you whether the model understood the scene.

We just released WM Bench — the first benchmark for cognitive intelligence in world models. The core question: when a beast charges from 3 meters away, does the model know to sprint — not walk? Does it respond differently to a human vs. an animal? Does it remember that the left corridor was blocked two steps ago?

Those are cognitive questions. No existing benchmark asks them. So we built one.

3 Pillars · 10 Categories · 100 Scenarios · 1,000-point scale

- 👁 P1 Perception (25%) — Can it read the scene?
- 🧠 P2 Cognition (45%) — Does it predict threats, escalate emotions, utilize memory?
- 🔥 P3 Embodiment (30%) — Does the body respond with the right motion?

All evaluation is via simple JSON I/O — no 3D engine, no special hardware. Any model with an API can participate.
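The pillar weights map to a simple score computation. A hedged sketch of what a JSON-based harness could look like (the field names are illustrative, not the benchmark's actual schema):

```python
import json

# Pillar weights from the post: Perception 25%, Cognition 45%, Embodiment 30%.
WEIGHTS = {"perception": 0.25, "cognition": 0.45, "embodiment": 0.30}

def total_score(pillar_scores):
    """Combine per-pillar scores (each out of 1,000) into the weighted
    1,000-point total."""
    return sum(WEIGHTS[p] * pillar_scores[p] for p in WEIGHTS)

def evaluate(submission_json: str) -> float:
    """Score one submission given as JSON, with per-pillar results
    already graded (e.g. from the 100 scenarios)."""
    scores = json.loads(submission_json)
    return round(total_score(scores), 1)
```

For example, `evaluate('{"perception": 800, "cognition": 700, "embodiment": 680}')` yields 719.0 on the 1,000-point scale.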

We also built PROMETHEUS as a live reference implementation — it runs in your browser on a T4, no install needed. It combines FloodDiffusion motion generation with an LLM cognitive brain (Perceive → Predict → Decide → Act). It scored 726/1000 (Grade B) on Track C — the only directly verified model so far. Submissions from other teams are very welcome.
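The Perceive → Predict → Decide → Act loop can be mocked up as four small stages. This is a toy sketch of the control flow only; the rules below are invented stand-ins for PROMETHEUS's actual LLM brain and motion model:

```python
def perceive(scene: dict) -> dict:
    """Read the scene: what entity is present and how far away it is."""
    return {"entity": scene["entity"], "distance_m": scene["distance_m"]}

def predict(state: dict) -> str:
    """Threat prediction: a beast at close range is a threat (toy rule)."""
    if state["entity"] == "beast" and state["distance_m"] < 5:
        return "threat"
    return "safe"

def decide(assessment: str) -> str:
    """Pick a motion intent from the assessment."""
    return "sprint" if assessment == "threat" else "walk"

def act(action: str) -> str:
    """In PROMETHEUS this would drive motion generation; here, just report."""
    return "agent performs: " + action

def step(scene: dict) -> str:
    """One full cognitive step over the pipeline."""
    return act(decide(predict(perceive(scene))))
```

Distinguishing a charging beast from a nearby human is exactly the kind of behavior the Cognition pillar probes: the same distance, a different entity, a different action.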

---

🗂 Dataset → FINAL-Bench/World-Model
🌍 Demo → FINAL-Bench/World-Model
🏆 Leaderboard → FINAL-Bench/worldmodel-bench
📝 Article → https://huggingface.co/blog/FINAL-Bench/world-model

Part of the FINAL Bench Family — alongside FINAL Bench (Feb 2026). Feedback on rubrics and missing models always welcome!
SeaWolf-AI published an article 4 days ago
Introducing WM Bench: A Benchmark for Cognitive Intelligence in World Models