MixtureVitae: Open Web-Scale Pretraining Dataset With High Quality Instruction and Reasoning Data Built from Permissive-First Text Sources Paper • 2509.25531 • Published Sep 29, 2025 • 9
Balancing Speed and Stability: The Trade-offs of FP8 vs. BF16 Training in LLMs Paper • 2411.08719 • Published Nov 10, 2024 • 1
Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks Paper • 2508.18672 • Published Aug 26, 2025 • 10
Rewriting Pre-Training Data Boosts LLM Performance in Math and Code Paper • 2505.02881 • Published May 5, 2025 • 4
Building Instruction-Tuning Datasets from Human-Written Instructions with Open-Weight Large Language Models Paper • 2503.23714 • Published Mar 31, 2025 • 1
Why We Build Local Large Language Models: An Observational Analysis from 35 Japanese and Multilingual LLMs Paper • 2412.14471 • Published Dec 19, 2024
Wider or Deeper? Scaling LLM Inference-Time Compute with Adaptive Branching Tree Search Paper • 2503.04412 • Published Mar 6, 2025 • 5
Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization Paper • 2502.19261 • Published Feb 26, 2025 • 6