(Important warning) If you are using the GGUF version of the model, you need to configure the prompt template. Here is the recommended template, shown as an Ollama-style Modelfile:

FROM (YOUR MODEL NAME)

TEMPLATE """### Instruction: {{ .Prompt }}

### Response: """

PARAMETER stop "### Instruction:"
PARAMETER stop "### Response:"
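
If you are serving the GGUF file through Ollama (an assumption based on the Modelfile syntax above, not stated explicitly in this card), you can build and run it with, for example, `ollama create llama31-turkish -f Modelfile` followed by `ollama run llama31-turkish`; the model name here is only an illustrative placeholder.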

Thanks to mradermacher for converting the model to GGUF format.

Uploaded fine-tuned model

  • Developed by: Ali-Yaser
  • License: llama 3.1
  • Finetuned from model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit

Info

This model is a fine-tune of Llama 3.1 8B trained on a 1M-token dataset; this release is version v0.2. The model is still new and may produce incorrect responses.
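
For the regular (safetensors) weights, a loading sketch along the following lines should work. This is an assumption based on the standard Hugging Face transformers API rather than usage documented by the author, and the example prompt simply reuses the Instruction/Response format from the template above.

```python
# Minimal loading sketch, assuming the standard transformers API;
# generation settings are illustrative, not the author's recommended values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ali-Yaser/LLaMA-3.1-turkis-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Same Instruction/Response format as the GGUF prompt template above.
prompt = "### Instruction: Türkiye'nin başkenti neresidir?\n\n### Response: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```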

License: Llama 3.1 Community License

Fine-tuned by Ali-Yaser
