Important: if you are using the model in GGUF format, you need to configure the prompt template. The recommended prompt template (as an Ollama Modelfile) is:
FROM (YOUR MODEL NAME)
TEMPLATE """### Instruction: {{ .Prompt }}
Response: """
PARAMETER stop "### Instruction:" PARAMETER stop "### Response:"
- GGUF models: https://huggingface.co/mradermacher/LLaMA-3.1-turkis-8b-GGUF and https://huggingface.co/mradermacher/LLaMA-3.1-turkis-8b-i1-GGUF
Thanks to mradermacher for converting the model to GGUF format.
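If you prefer to run a GGUF file directly rather than through Ollama, the same Instruction/Response template and stop strings can be applied by hand. Below is a minimal sketch using llama-cpp-python; the model filename is an assumption, so substitute whichever quantization you download from the repositories above.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename below is an assumption; use the quantization you
# actually downloaded from one of the repositories linked above.
from llama_cpp import Llama

llm = Llama(
    model_path="LLaMA-3.1-turkis-8b.Q4_K_M.gguf",  # assumed local filename
    n_ctx=4096,
)

# Apply the same prompt template as in the Ollama Modelfile above.
prompt = "### Instruction: {}\n### Response: ".format("Merhaba, kendini tanıtır mısın?")

out = llm(
    prompt,
    max_tokens=256,
    stop=["### Instruction:", "### Response:"],  # same stop strings as the Modelfile
)
print(out["choices"][0]["text"])
```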
Uploaded finetuned model
- Developed by: Ali-Yaser
- License: Llama 3.1 Community License
- Fine-tuned from model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
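For reference, here is a hedged sketch of loading the fine-tuned weights with transformers and prompting them with the Instruction/Response template described above. The repository id is a hypothetical placeholder, not confirmed by this card; replace it with the actual model repository.

```python
# Hedged sketch: loading the fine-tuned model with transformers.
# "Ali-Yaser/LLaMA-3.1-turkis-8b" is a hypothetical repository id used
# only for illustration; replace it with the actual model repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ali-Yaser/LLaMA-3.1-turkis-8b"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "### Instruction: Türkiye'nin başkenti neresidir?\n### Response: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```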
Info
This model is a fine-tune of Llama 3.1 8B on a dataset of roughly 1M tokens; the current version is v0.2. The model is still new and may produce incorrect responses.
License: Llama 3.1 Community License
Fine-tuned by Ali-Yaser