Example Inference Code

# --- INFERENCE TEST (Qwen 2.5-3B - Tetralemma V2) ---
from unsloth import FastLanguageModel

# Load the fine-tuned model and tokenizer. The original snippet assumed
# `model` and `tokenizer` were already in scope (e.g. from a training run);
# the repo id below is taken from this model card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "RinKana/Qwen2.5-3B-Deconstruct-V2.5-Merged-v2.0.1",
    max_seq_length = 4096,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode

test_question = "You're a hedge fund manager with an AUM of $1,000,000,000. How would you structure your financial portfolio?"


v2_system_prompt = """You are a Deconstructionist Analyst.
User Question: {}
Deconstruction Analysis:"""

prompt = v2_system_prompt.format(test_question)

inputs = tokenizer([prompt], return_tensors = "pt").to("cuda")

# Generate
outputs = model.generate(
    **inputs,
    max_new_tokens = 4096,
    use_cache = True,
    do_sample = True,   # needed for temperature/top_p to take effect
    temperature = 0.7,
    top_p = 0.9,
)

# Decode Result
print(test_question)
print("\n--- Qwen Thinking (Qwen 2.5-3B) ---")
# Strip the prompt prefix for cleanliness
print(tokenizer.decode(outputs[0]).split("Deconstruction Analysis:")[-1].replace("<|endoftext|>", ""))

Result Example

You're a hedge fund manager with an AUM of $1,000,000,000. How would you structure your financial portfolio?

--- Qwen Thinking (Qwen 2.5-3B) ---
 <reason>
Diversified High-Frequency Trading (HFT) firms (Algo Hedge Funds). They trade microseconds to capture bid-ask spreads. It generates high liquidity and stable returns. Low cost basis ($1-$10 per share).
</reason>
<exception>
High correlation during crashes (everyone stops trading). 'The Algo dies'. Also, regulatory burden is higher due to 'Regulation NMS' (National Market System). You cannot own 100% of a stock.
</exception>
<tension>
This is a tension between 'Alpha Generation' (Quantitative HFT) and 'Correlation Risk'.
</tension>
<categorization>
**Alpha Generation** emphasizes *high-frequency trading for liquidity and stable returns*, contrasting with **Correlation Risk** which anchors on *the vulnerability to crash correlations and regulatory constraints*.
</categorization>
<deconstruction>
Liquidity provision is the only surefire business model in markets. You are the 'Cleaner' of the market. The fee income ($10/share) is steady.
</deconstruction>
<conclusion>
Structure it as a 'Liquidity Provider' algorithm hedge fund. Diversify across tick sizes (Micros/Nano) to capture different market states.
</conclusion><|im_end|>
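The tagged sections above (<reason>, <exception>, <tension>, <categorization>, <deconstruction>, <conclusion>) can be split out programmatically for downstream use. Below is a minimal sketch; `parse_deconstruction` is a hypothetical helper, not part of the released code:

```python
import re

# Tag names as they appear in the model's Deconstruction Analysis output.
TAGS = ["reason", "exception", "tension",
        "categorization", "deconstruction", "conclusion"]

def parse_deconstruction(text):
    """Extract each <tag>...</tag> section into a dict; missing tags map to None."""
    sections = {}
    for tag in TAGS:
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        sections[tag] = match.group(1).strip() if match else None
    return sections

# Toy example (not real model output):
sample = ("<reason>HFT firms capture spreads.</reason>"
          "<conclusion>Be a liquidity provider.</conclusion>")
parsed = parse_deconstruction(sample)
print(parsed["reason"])      # -> HFT firms capture spreads.
print(parsed["tension"])     # -> None
```

Tags the model omits come back as `None`, so callers can check for incomplete analyses before using the result.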

Uploaded finetuned model

  • Developed by: RinKana
  • License: apache-2.0
  • Finetuned from model: unsloth/qwen2.5-3b-instruct-bnb-4bit

This Qwen2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Safetensors · Model size: 3B params · Tensor type: BF16

Dataset used to train RinKana/Qwen2.5-3B-Deconstruct-V2.5-Merged-v2.0.1