Akylbek Maxutov PRO

akylbekmaxutov

AI & ML interests

None yet

Recent Activity

reacted to mayafree's post with 🔥 about 1 hour ago
Leaderboard of Leaderboards: A Real-Time Meta-Ranking of AI Benchmarks
https://huggingface.co/spaces/MAYA-AI/all-leaderboard

Hundreds of AI leaderboards exist on Hugging Face, and knowing which ones the community actually trusts has never been easy, until now. Leaderboard of Leaderboards (LoL) ranks the leaderboards themselves, using live Hugging Face trending scores and cumulative likes as the signal. No editorial curation, no manual selection: just what the global AI research community is actually visiting and endorsing, surfaced in real time.

Sort by trending to see what is capturing attention right now, or by likes to see what has built lasting credibility over time. Nine domain filters let you zero in on what matters most to your work, and every entry shows both its rank within this collection and its real-time global rank across all Hugging Face Spaces.

The collection spans well-established standards such as the Open LLM Leaderboard, Chatbot Arena, MTEB, and BigCodeBench alongside frameworks worth watching:
→ FINAL Bench targets AGI-level evaluation across 100 tasks in 15 domains and recently reached the global top 5 in Hugging Face dataset rankings.
→ Smol AI WorldCup runs tournament-format competitions for sub-8B models, scored via FINAL Bench criteria.
→ ALL Bench aggregates results across frameworks into a unified ranking that resists the overfitting risks of any single standard.

The deeper purpose is not convenience; it is transparency. How we measure AI matters as much as the AI we measure.
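As a rough illustration of the likes signal the post describes, the sketch below pulls leaderboard-style Spaces from the Hub and orders them by cumulative likes via huggingface_hub. The search term and result limit are arbitrary choices here, and the real LoL Space also folds in live trending scores, which this sketch does not reproduce:

```python
# Minimal sketch of the "cumulative likes" signal: list Spaces matching
# "leaderboard" and sort by likes, descending. Illustrative only.
from huggingface_hub import HfApi

api = HfApi()
spaces = api.list_spaces(search="leaderboard", sort="likes", direction=-1, limit=10)

for rank, space in enumerate(spaces, start=1):
    # SpaceInfo carries the repo id and like count in the listing response.
    print(f"{rank:2d}. {space.id}  (likes: {space.likes})")
```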
reacted to SeaWolf-AI's post with 🔥 2 days ago
🏟️ Smol AI WorldCup: A 4B Model Just Beat an 8B. Here's the Data.

We evaluated 18 small language models from 12 makers on 125 questions across 7 languages. The results challenge the assumption that bigger is always better.

Community article: https://huggingface.co/blog/FINAL-Bench/smol-worldcup
Live leaderboard: https://huggingface.co/spaces/ginigen-ai/smol-worldcup
Dataset: https://huggingface.co/datasets/ginigen-ai/smol-worldcup

What we found:
→ Gemma-3n-E4B (4B, 2 GB RAM) outscores Qwen3-8B (8B, 5.5 GB). Doubling the parameter count gained only 0.4 points at 2.75x the RAM cost.
→ GPT-OSS-20B fits in 1.5 GB yet matches Champions-league dense models that require 8.5 GB. MoE architecture is the edge-AI game-changer.
→ Thinking models hurt structured output: DeepSeek-R1-7B scores 8.7 points below the same-size Qwen3-8B and runs 2.7x slower.
→ A 1.3B model fabricates confident fake content 80% of the time when prompted with nonexistent entities; the Qwen3 family hits 100% trap detection across all sizes.
→ Qwen3-1.7B (1.2 GB) outscores Mistral-7B, Llama-3.1-8B, and DeepSeek-R1-14B. The latest architecture at 1.7B beats older architectures at 14B.

What makes this benchmark different? Most benchmarks ask "how smart?"; we measure five axes simultaneously: Size, Honesty, Intelligence, Fast, Thrift (SHIFT). Our ranking metric, WCS = sqrt(SHIFT x PIR_norm), rewards models that are both high-quality and efficient; a smart but massive model ranks low, and so does a tiny but poor one (see the sketch after this post).

Top 5 by WCS:
1. GPT-OSS-20B: WCS 82.6, 1.5 GB, Raspberry Pi tier
2. Gemma-3n-E4B: WCS 81.8, 2.0 GB, Smartphone tier
3. Llama-4-Scout: WCS 79.3, 240 tok/s, fastest model
4. Qwen3-4B: WCS 76.6, 2.8 GB, Smartphone tier
5. Qwen3-1.7B: WCS 76.1, 1.2 GB, IoT tier

Built in collaboration with the FINAL Bench research team, and interoperable with the ALL Bench Leaderboard for full small-to-large model comparison. The dataset is open under Apache 2.0 (125 questions, 7 languages). We welcome new model submissions.
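The WCS metric in the post is simple enough to sanity-check in a few lines. Below is a minimal sketch, assuming SHIFT and PIR_norm are both scores on a 0-100 scale; the post does not specify the scale, so that and the example values are illustrative assumptions:

```python
import math

def wcs(shift: float, pir_norm: float) -> float:
    """WCS = sqrt(SHIFT * PIR_norm): the geometric mean of the SHIFT
    composite and the normalized PIR score (assumed 0-100 scales)."""
    return math.sqrt(shift * pir_norm)

# The geometric mean collapses whenever either axis is weak, which is
# exactly the "smart but massive? low rank; tiny but poor? also low"
# behavior the post describes. Example values are made up.
print(round(wcs(90.0, 75.0), 1))  # balanced model      -> 82.2
print(round(wcs(99.0, 40.0), 1))  # strong but wasteful -> 62.9
print(round(wcs(40.0, 99.0), 1))  # frugal but weak     -> 62.9
```

Since the dataset is released under Apache 2.0, the 125 questions can presumably be pulled with datasets.load_dataset("ginigen-ai/smol-worldcup"), though the post does not spell out the dataset's configs or splits.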

Organizations

Institute of Smart Systems and Artificial Intelligence, Nazarbayev University
Hugging Face Discord Community
gumano1d
ISSAI-OylanEvalKit
ISSAI-mllm-safe-project
ISSAI-llm
ISSAI-qolda-websearch
ISSAI-data-contamination