mitkox/Starling-LM-7B-beta-RLAIF-4bit-MLX
This model was converted to MLX format from Nexusflow/Starling-LM-7B-beta.
Refer to the original model card for more details on the model.
Use with mlx
pip install mlx-lm
from mlx_lm import load, generate

# Load the 4-bit MLX weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("mitkox/Starling-LM-7B-beta-RLAIF-4bit-MLX")
response = generate(model, tokenizer, prompt="hello", verbose=True)
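Starling-LM-7B-beta was trained with a chat prompt format, so a raw string prompt may give weaker results. Below is a minimal sketch, assuming the converted tokenizer still carries the upstream chat template, that wraps the prompt in that template before generating:

from mlx_lm import load, generate

model, tokenizer = load("mitkox/Starling-LM-7B-beta-RLAIF-4bit-MLX")

prompt = "hello"

# Assumption: the chat template was preserved during conversion.
# If present, format the prompt as a chat turn before generation.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)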
Model size: 1B params
Tensor types: F16 · U32
Quantized: 4-bit