# Ministral-3-14B-Instruct-2512-TextOnly
This model is the text-only component extracted from the Vision-Language Model [mistralai/Ministral-3-14B-Instruct-2512](https://huggingface.co/mistralai/Ministral-3-14B-Instruct-2512).
## Usage
You can load this model with `AutoModelForCausalLM`, as shown below:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aratako/Ministral-3-14B-Instruct-2512-TextOnly"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
)

messages = [
    {
        "role": "user",
        "content": "Tell me a joke about computers.",
    },
]

# Build the prompt; add_generation_prompt=True appends the assistant turn header
# so the model starts a reply instead of continuing the user message.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

output = model.generate(
    input_ids, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id
)

# Decode only the newly generated tokens, skipping the prompt.
decoded_output = tokenizer.decode(
    output[0][len(input_ids[0]):], skip_special_tokens=True
)
print(decoded_output)
```
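For non-deterministic output, you can enable sampling in `generate`. A minimal sketch; the temperature and top-p values below are illustrative placeholders, not recommended settings from the original model card:

```python
# Sampling instead of greedy decoding; parameter values are illustrative only.
output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
```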
## Original Model Information
This is a weight extraction of the original VLM. For detailed benchmarks, licensing details, and architectural information, please refer to the original model card: [mistralai/Ministral-3-14B-Instruct-2512](https://huggingface.co/mistralai/Ministral-3-14B-Instruct-2512)
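For reference, a minimal sketch of how such a text-only extraction can be performed. This is not the author's actual procedure; it assumes the VLM loads via `AutoModelForImageTextToText` and exposes its text decoder as a `language_model` attribute (common in Llava-style architectures, but an assumption here):

```python
from transformers import AutoModelForImageTextToText, AutoTokenizer

# Hypothetical extraction sketch -- not the author's actual script.
vlm_id = "mistralai/Ministral-3-14B-Instruct-2512"
vlm = AutoModelForImageTextToText.from_pretrained(vlm_id)

# Assumption: the VLM wraps its text decoder as `language_model`;
# the attribute name varies by architecture.
text_model = vlm.language_model
text_model.save_pretrained("Ministral-3-14B-Instruct-2512-TextOnly")

# The tokenizer is reused unchanged from the VLM checkpoint.
tokenizer = AutoTokenizer.from_pretrained(vlm_id)
tokenizer.save_pretrained("Ministral-3-14B-Instruct-2512-TextOnly")
```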
## Model tree
- Base model: mistralai/Ministral-3-14B-Base-2512
- Extracted from: mistralai/Ministral-3-14B-Instruct-2512