Article: Efficient LLM Pretraining: Packed Sequences and Masked Attention (Oct 7, 2024)
Collection: ModernBERT, bringing BERT into modernity via both architecture changes and scaling (3 items, updated Dec 19, 2024)