Update README.md

#5
by sergiopaniego (HF Staff), opened
Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -134,7 +134,7 @@ We recommend using this model with [vLLM](https://github.com/vllm-project/vllm).
 
 #### Installation
 
-Make sure to install most recent vllm:
+Make sure to install the most recent vllm:
 
 ```
 uv pip install -U vllm \
@@ -176,7 +176,7 @@ Additional flags:
 
 #### Usage of the model
 
-Here we asumme that the model `mistralai/Ministral-3-3B-Instruct-2512` is served and you can ping it to the domain `localhost` with the port `8000` which is the default for vLLM.
+Here we assume that the model `mistralai/Ministral-3-3B-Instruct-2512` is served and you can ping it to the domain `localhost` with the port `8000` which is the default for vLLM.
 
 <details>
 <summary>Vision Reasoning</summary>
@@ -509,7 +509,7 @@ print(decoded_output)
 
 **Note:**
 
-Transformers allows you to automatically convert the checkpoint to Bfloat16. To so simple load the model as follows:
+Transformers allows you to automatically convert the checkpoint to Bfloat16. To do so, simply load the model as follows:
 
 ```py
 from transformers import Mistral3ForConditionalGeneration, FineGrainedFP8Config
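For context on the usage hunk above: once the model is served, the endpoint speaks vLLM's OpenAI-compatible API. A minimal sketch of pinging it, assuming the server is on `localhost:8000` as the README states (this client code is illustrative, not part of the diff):

```py
# Minimal sketch: query the served model through vLLM's OpenAI-compatible API.
# Assumes the server is running on localhost:8000 as described in the README.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="mistralai/Ministral-3-3B-Instruct-2512",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

Any OpenAI-compatible client works here; the `api_key` value is arbitrary, since vLLM only enforces a key when the server is started with one.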
 
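On the Bfloat16 note in the last hunk: the diff context cuts off at the import line, so the full snippet isn't visible here. A minimal sketch of the conversion it describes, assuming the standard `from_pretrained` dtype argument (the actual README snippet, which also imports `FineGrainedFP8Config`, may pass additional options):

```py
# Minimal sketch: load the checkpoint while converting weights to bfloat16.
# Assumes the standard torch_dtype argument; the full README snippet
# (truncated in the hunk above) may pass additional options.
import torch
from transformers import Mistral3ForConditionalGeneration

model = Mistral3ForConditionalGeneration.from_pretrained(
    "mistralai/Ministral-3-3B-Instruct-2512",
    torch_dtype=torch.bfloat16,
)
```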