Instructions for using kromcomp/L3-ChaosRP-r256-LoRA with the PEFT library.

How to load kromcomp/L3-ChaosRP-r256-LoRA with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")
model = PeftModel.from_pretrained(base_model, "kromcomp/L3-ChaosRP-r256-LoRA")
```
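Conceptually, `PeftModel` wraps each targeted linear layer so its output becomes `x @ (W + (alpha/r) * B @ A).T`, where `A` and `B` are the low-rank adapter factors. The following is a minimal numpy sketch of that arithmetic, not PEFT's actual internals; the dimensions, scaling, and zero-initialized `B` are illustrative assumptions (the real adapter uses rank 256 on an 8B-parameter model).

```python
import numpy as np

# Illustrative sketch (NOT PEFT's internals) of how a LoRA adapter
# modifies a linear layer: y = x @ (W + (alpha/r) * B @ A).T
rng = np.random.default_rng(1)
d_in, d_out, r = 16, 16, 4   # toy sizes; the real adapter uses r=256
alpha = r                    # scaling alpha/r = 1 here (assumed value)

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # lora_A factor
B = np.zeros((d_out, r))                 # lora_B; zero-init makes the adapter a no-op initially

x = rng.standard_normal((1, d_in))
base_out = x @ W.T
lora_out = x @ (W + (alpha / r) * B @ A).T

print(np.allclose(base_out, lora_out))  # True: with B = 0 the adapter changes nothing
```

Because `B @ A` has rank at most `r`, the adapter stores only `r * (d_in + d_out)` extra parameters per layer instead of a full `d_out * d_in` weight delta.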
ChaosRP-r256-LoRA
This is a LoRA adapter extracted from a language model using mergekit.
LoRA Details
This LoRA adapter was extracted from jeiku/Chaos_RP_l3_8B and uses NousResearch/Meta-Llama-3-8B-Instruct as a base.
Parameters
The following command was used to extract this LoRA adapter:

```sh
/usr/local/bin/mergekit-extract-lora \
  --out-path=loras/ChaosRP-r256-LoRA \
  --model=jeiku/Chaos_RP_l3_8B \
  --base-model=NousResearch/Meta-Llama-3-8B-Instruct \
  --no-lazy-unpickle --max-rank=256 --gpu-rich -v --embed-lora
```
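The core idea behind LoRA extraction is to take the difference between the fine-tuned and base weights and compress it into two low-rank factors, typically via truncated SVD, capped at `--max-rank` (256 here). The sketch below illustrates that idea in numpy on a toy matrix; it is not mergekit's actual implementation, and the sizes and rank are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (NOT mergekit's actual code): compress a weight
# delta into rank-r LoRA factors via truncated SVD.
rng = np.random.default_rng(0)
d, r = 64, 8   # toy sizes; the real extraction above used --max-rank=256

base = rng.standard_normal((d, d))
# Construct a fine-tuned weight whose delta is exactly rank r:
tuned = base + rng.standard_normal((d, r)) @ rng.standard_normal((r, d))

delta = tuned - base
U, S, Vt = np.linalg.svd(delta, full_matrices=False)
B = U[:, :r] * S[:r]   # lora_B analogue, shape (d, r)
A = Vt[:r, :]          # lora_A analogue, shape (r, d)

# Here the rank-r factors recover the delta almost exactly,
# because the true delta was rank r by construction.
print(np.allclose(base + B @ A, tuned))  # True (up to float tolerance)
```

For real checkpoints the delta is generally full rank, so truncating at `--max-rank` keeps only the largest singular directions and the reconstruction is approximate rather than exact.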