It’s time to evaluate your model and submit it to the leaderboard! This unit will use the same leaderboard-based submission format as Unit 1. Here’s the plan:
- Evaluate your model with `hf jobs` on 5 benchmarks
- Push the results to the Hugging Face Hub
- Open a pull request on the leaderboard to submit your scores

On this page we will go through each step.
For Unit 2’s submission, you should read all the materials in the unit and choose a model to evaluate. You can use a model you trained in Unit 1, or select any other model that interests you. While we won’t ask you to submit scores from custom tasks for this assignment, you can re-use those skills to evaluate models in the future.
The evaluation will test the model’s performance on standard benchmarks to assess its general capabilities.
Now, we will evaluate your chosen model using hf jobs combined with lighteval and vLLM. We will run evaluation on 5 different benchmarks and push the results to the Hugging Face Hub.
```bash
hf jobs uv run \
    --flavor a10g-large \
    --with "lighteval[vllm]" \
    --secrets HF_TOKEN \
    lighteval vllm \
    "model_name=<your-username>/<your-model-name>" \
    "leaderboard|mmlu:abstract_algebra|5|1,leaderboard|mmlu:college_biology|5|1,leaderboard|mmlu:college_computer_science|5|1,leaderboard|truthfulqa:mc|0|0,leaderboard|gsm8k|5|1" \
    --push-to-hub \
    --results-org <your-username>
```

This command will:

- Use `hf jobs` to run the evaluation on cloud infrastructure
- Run `lighteval` with the vLLM backend on an `a10g-large` GPU
- Evaluate the model on the 5 benchmarks listed above
- Push the results to the Hugging Face Hub under your username

Make sure to replace `<your-username>` with your actual Hugging Face username and `<your-model-name>` with the name of the model you want to evaluate.
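Each benchmark in the command is written as a lighteval task string of the form `suite|task|num_fewshot|truncate` — for example, `leaderboard|gsm8k|5|1` runs GSM8K with 5 few-shot examples, with the final flag controlling whether few-shot examples may be automatically reduced when the prompt gets too long. As a sketch of how these comma-separated specs break down (the `TaskSpec` helper here is hypothetical, for illustration only — it is not part of lighteval):

```python
from dataclasses import dataclass

@dataclass
class TaskSpec:
    """Hypothetical helper mirroring the `suite|task|num_fewshot|truncate` format."""
    suite: str
    task: str
    num_fewshot: int
    truncate_fewshot: bool

def parse_tasks(task_string: str) -> list[TaskSpec]:
    """Split a comma-separated lighteval task string into structured specs."""
    specs = []
    for entry in task_string.split(","):
        suite, task, fewshot, truncate = entry.strip().split("|")
        specs.append(TaskSpec(suite, task, int(fewshot), truncate == "1"))
    return specs

tasks = parse_tasks(
    "leaderboard|mmlu:abstract_algebra|5|1,"
    "leaderboard|truthfulqa:mc|0|0,"
    "leaderboard|gsm8k|5|1"
)
for t in tasks:
    print(f"{t.suite}/{t.task}: {t.num_fewshot}-shot")
```

This makes it easy to see, for instance, that TruthfulQA runs zero-shot while the MMLU subsets and GSM8K use 5-shot prompts.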
You are now ready to submit your model evaluation to the leaderboard! You need to do two things:
1. Add your model info to `submissions.json`
2. Share your evaluation command (`hf jobs`) in the PR text

Open a pull request on the leaderboard space to submit your model evaluation. You just need to add your model info and a reference to the results dataset you created in the previous step.
```json
{
  "submissions": [
    ... // existing submissions
    {
      "username": "<your-username>",
      "model_name": "<your-model-name>",
      "chapter": "2",
      "submission_date": "<your-submission-date>",
      "results-dataset": "<your-results-dataset>"
    }
  ]
}
```

Within the PR text, share your evaluation command. For example:
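If you want to sanity-check your entry before opening the PR, you can build and validate it programmatically. Here is a minimal sketch, with field names taken from the JSON above (the `make_submission` and `validate` helpers are illustrative, not part of any library):

```python
import json

# Keys every entry in submissions.json is expected to carry.
REQUIRED_KEYS = {"username", "model_name", "chapter", "submission_date", "results-dataset"}

def make_submission(username, model_name, submission_date, results_dataset):
    """Build one entry in the shape shown in the JSON snippet above."""
    return {
        "username": username,
        "model_name": model_name,
        "chapter": "2",
        "submission_date": submission_date,
        "results-dataset": results_dataset,
    }

def validate(entry):
    """Raise if any required key is missing; return the entry otherwise."""
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return entry

entry = validate(
    make_submission("myusername", "my-model", "2025-01-01", "myusername/my-results")
)
print(json.dumps({"submissions": [entry]}, indent=2))
```

Copy the printed object into the `submissions` array rather than replacing the existing entries.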
```bash
hf jobs uv run \
    --flavor a10g-large \
    --with "lighteval[vllm]" \
    --secrets HF_TOKEN \
    lighteval vllm \
    "model_name=myusername/my-model" \
    "leaderboard|gsm8k|5|1" \
    --push-to-hub \
    --results-org myusername
```

This will help us reproduce your model evaluation before we add it to the leaderboard.
Once the PR is merged, your model will be added to the leaderboard! You can check the leaderboard here.
By completing this unit, you’ve learned how to:

- Evaluate a model on standard benchmarks with `lighteval`, vLLM, and `hf jobs`
- Push evaluation results to the Hugging Face Hub
- Submit your evaluation to the course leaderboard via a pull request