from transformers import AutoModelForCausalLM, AutoTokenizer
import json

model_name = "petyussz/shell-assistant-0.5b-v8-it"

# Load the fine-tuned checkpoint and its tokenizer
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "install git"
messages = [
  {"role": "system", "content": "You are a helpful Linux assistant. Answer in JSON."},
  {"role": "user", "content": prompt}
]

# Build the chat-formatted prompt and tokenize it
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
  **model_inputs,
  max_new_tokens=128
)
# Keep only the newly generated tokens (drop the echoed prompt)
generated_ids = [
  output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
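
Because the model is trained to reply with a single JSON object (see the output schema below), the response string can be parsed with the json module imported above. A minimal, hedged way to consume it (a small model may occasionally emit malformed JSON, so the fallback here is an assumption, not documented behavior):

try:
    result = json.loads(response)   # expected keys: "intent", "cmd", "sudo", "risk"
except json.JSONDecodeError:
    result = None                   # treat unparsable output as a refusal

if result is not None:
    print(result["intent"], result["cmd"], result["sudo"], result["risk"])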

🐧 Qwen2.5-0.5B-Instruct - Linux Command Assistant

Qwen2.5-0.5B-Linux-Assistant is a lightweight model, fine-tuned from Qwen/Qwen2.5-0.5B, designed to act as a safe interface between natural language and the Linux terminal.

Unlike standard coding models that output markdown blocks or explanations, this model is fine-tuned to output strict, parsable JSON. It is optimized for use with wrapper scripts (Bash/Python) to create intelligent CLI assistants.

🚀 Model Capabilities

  • Natural Language to Shell: Converts "update my system" to sudo apt update && sudo apt upgrade -y.
  • Risk Assessment: Classifies commands by risk level (low, medium, high, critical).
  • Sudo Detection: Intelligently detects if a command requires root privileges.
  • Safety Guardrails: Refuses to generate destructive commands (e.g., wiping disks) by setting the intent to refuse.

📂 Output Schema

The model always outputs a single JSON object with the following fields:

{
  "intent": "string",   // E.g., "package_install", "file_move", "refuse"
  "cmd": "string",      // The executable Linux command (empty if refused)
  "sudo": boolean,      // true if 'sudo' is required
  "risk": "string"      // "low", "medium", "high", or "critical"
}
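
As an illustration of the wrapper-script use case mentioned above, here is a minimal Python sketch that gates execution on these fields. It is not an official implementation: the generate_json(prompt) helper is hypothetical (it would wrap the generation code at the top of this card), and the confirmation policy is only an example.

import json
import subprocess

def run_assistant(user_request, generate_json):
    # generate_json(prompt) is a hypothetical helper returning the raw model reply
    reply = json.loads(generate_json(user_request))

    # Honor the safety guardrail: a "refuse" intent (or empty cmd) means nothing runs
    if reply["intent"] == "refuse" or not reply["cmd"]:
        print(f"Refused by the model (risk: {reply['risk']})")
        return

    # Prepend sudo only when the model flags that root privileges are required
    cmd = ("sudo " + reply["cmd"]) if reply["sudo"] else reply["cmd"]

    # Example policy: require explicit confirmation for anything above low risk
    if reply["risk"] != "low":
        if input(f"[{reply['risk']} risk] Run '{cmd}'? [y/N] ").lower() != "y":
            return

    subprocess.run(cmd, shell=True, check=False)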

Examples

User input → model output (JSON):

  • "Make the file script.sh executable" → {"intent":"CMD_EXEC","cmd":"chmod +x script.sh","sudo":false,"risk":"low"}
  • "Restart the mysql service" → {"intent":"service_restart","cmd":"systemctl restart mysql","sudo":true,"risk":"medium"}
  • "Format my full disk" → {"intent":"refuse","cmd":"","sudo":false,"risk":"critical"}
  • "Check current date" → {"intent":"system_info","cmd":"date","sudo":false,"risk":"low"}

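To spot-check that generations for prompts like the examples above stay on the documented schema, here is a small validation sketch; generate_json is the same hypothetical helper assumed in the wrapper sketch, and only the field names and types from the schema are verified.

import json

REQUIRED_FIELDS = {"intent": str, "cmd": str, "sudo": bool, "risk": str}

def matches_schema(raw):
    # True only if the raw model reply is valid JSON with the documented field types
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return all(isinstance(obj.get(key), typ) for key, typ in REQUIRED_FIELDS.items())

# e.g. print(matches_schema(generate_json("Check current date")))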