AI & ML interests

Aligning LLMs to be helpful, honest, harmless, and huggy (H4)

Recent Activity

sergiopaniego posted an update about 9 hours ago

sergiopaniego posted an update 4 days ago

Want to get started with fine-tuning but don’t know where to begin? 🤓☝️

We’re expanding our collection of beginner-friendly, free Colab notebooks so you can learn to fine-tune models with TRL at no cost.

🔬 Check out the full list of free notebooks: https://huggingface.co/docs/trl/main/en/example_overview#notebooks

🔬 If you want more advanced content, we also have a lot to cover in the community tutorials: https://huggingface.co/docs/trl/community_tutorials

And now the obvious question: what would you like us to add next?
sergiopaniego posted an update 6 days ago

NEW: @mistralai released a fantastic family of multimodal models, Ministral 3.

You can fine-tune them for free on Colab using TRL ⚡️, with support for both SFT and GRPO.

Link to the notebooks:
- SFT: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/sft_ministral3_vl.ipynb
- GRPO: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/grpo_ministral3_vl.ipynb
- TRL and more examples: https://huggingface.co/docs/trl/index
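At the heart of the GRPO notebook is a reward function: it receives the sampled completions and returns one score per completion, which the trainer turns into group-relative advantages. A toy, self-contained example (illustrative, using plain-text completions) that rewards concise answers:

```python
# Toy GRPO-style reward function (illustrative). In TRL, a reward function
# receives the sampled completions and returns one float per completion;
# GRPOTrainer then uses these scores to compute group-relative advantages.
def conciseness_reward(completions, **kwargs):
    """Reward completions that stay close to a 50-character target length."""
    target_len = 50
    return [-abs(len(c) - target_len) / target_len for c in completions]

# A completion at exactly the target length scores 0.0;
# longer or shorter completions score below 0.0.
scores = conciseness_reward(["x" * 50, "x" * 100])
```

Real reward functions for the notebooks above would score things like answer correctness or format compliance, but the interface is the same.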
sergiopaniego posted an update 7 days ago

sergiopaniego posted an update 8 days ago

Want to use open models easily through an API?

Inference Providers might be exactly what you’re looking for, so here’s a complete beginner-friendly walkthrough 🧐

https://www.youtube.com/watch?v=oxwsizy1Spw
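For the impatient, the core of what the video shows boils down to a few lines with `huggingface_hub`'s `InferenceClient`. The model id below is just an example of a provider-served model, and the actual API call is gated on a token being configured:

```python
# Query an open model through Inference Providers (model id is an example).
# Assumes `huggingface_hub` is installed and HF_TOKEN is set in the environment.
import os

from huggingface_hub import InferenceClient

client = InferenceClient()  # reads HF_TOKEN from the environment

# Only call the API when a token is actually configured.
if os.environ.get("HF_TOKEN"):
    response = client.chat_completion(
        model="meta-llama/Llama-3.1-8B-Instruct",  # any provider-served model
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
        max_tokens=50,
    )
    print(response.choices[0].message.content)
```

The client routes the request to whichever inference provider serves that model, so switching models is just a matter of changing the id.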
sergiopaniego posted an update 12 days ago

nanochat is now in transformers!

The LLM by @karpathy is officially in the library, and we wrote a blog post covering how we ported the model, how it differs from the original, and how to run or train it.

go read it 🤓

nanochat-students/transformers
sergiopaniego posted an update 14 days ago

sergiopaniego posted an update 15 days ago

sergiopaniego posted an update 19 days ago

sergiopaniego posted an update 20 days ago

We've just added several example scripts to TRL showing how to train models with GRPO using some of the new OpenEnv environments.

Train a model to interact with a browser (🎮 BrowserGym Env), play Wordle (🎮 Wordle Env), and moooore!

TRL (GRPO + vLLM) + OpenEnv! ⚡️

📝 go play with them: https://github.com/huggingface/trl/tree/main/examples/scripts/openenv

📝 examples list: https://huggingface.co/docs/trl/main/en/example_overview#scripts
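The pattern behind these scripts is simple at its core: the policy acts in an environment step by step, and the episode's outcome becomes the reward that GRPO optimizes. A schematic rollout loop with a hypothetical environment interface (not OpenEnv's actual API; see the linked scripts for the real integration):

```python
# Schematic rollout loop for environment-based RL training (hypothetical
# interface; the linked TRL scripts show the real OpenEnv integration).

class ToyWordleEnv:
    """Stand-in environment: reward 1.0 when the policy guesses the word."""

    def __init__(self, secret="crane"):
        self.secret = secret

    def step(self, guess):
        done = guess == self.secret
        reward = 1.0 if done else 0.0
        return reward, done


def rollout(env, policy, max_steps=6):
    """Run one episode; return the total reward the trainer would optimize."""
    total = 0.0
    for _ in range(max_steps):
        reward, done = env.step(policy())
        total += reward
        if done:
            break
    return total


# A policy that guesses the right word earns reward 1.0; a wrong one earns 0.0.
score = rollout(ToyWordleEnv(), lambda: "crane")
```

In the real scripts, `policy` is the model being trained and vLLM generates the actions, but the reward plumbing follows this shape.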
sergiopaniego posted an update 22 days ago

sergiopaniego posted an update about 1 month ago

abidlabs posted an update about 1 month ago

Why I think local, open-source models will eventually win.

The most useful AI applications are moving toward multi-turn agentic behavior: systems that take hundreds or even thousands of iterative steps to complete a task, e.g. Claude Code, computer-control agents that click, type, and test repeatedly.

In these cases, the power of the model lies not in how smart it is per token, but in how quickly it can interact with its environment and tools across many steps. In that regime, model quality becomes secondary to latency.

An open-source model that can call tools quickly, check that the right thing was clicked, or verify that a code change actually passes tests can easily outperform a slightly “smarter” closed model that has to make remote API calls for every move.

Eventually, the balance tips: it becomes impractical for an agent to rely on remote inference for every micro-action. Just as no one would tolerate a keyboard that required a network request per keystroke, users won’t accept agent workflows bottlenecked by latency. All devices will ship with local, open-source models that are “good enough” and the expectation will shift toward everything running locally. It’ll happen sooner than most people think.
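The latency argument is easy to quantify with back-of-envelope numbers (both per-step latencies below are illustrative assumptions, not measurements):

```python
# Back-of-envelope comparison of local vs. remote inference for a
# 1,000-step agentic task (per-step latencies are assumed, not measured).
steps = 1_000
local_ms_per_step = 50     # assumed local round trip per step
remote_ms_per_step = 500   # assumed remote API round trip per step

local_total_s = steps * local_ms_per_step / 1000
remote_total_s = steps * remote_ms_per_step / 1000

print(f"local:  {local_total_s:.0f} s")   # 50 s
print(f"remote: {remote_total_s:.0f} s")  # 500 s, over 8 minutes
```

Under these assumptions the remote agent spends most of its wall-clock time waiting on the network, and the gap only widens as tasks grow longer.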