Qwen3-14B Chemical Synthesis Classifier - LoRA Adapter

Model Overview

This repository contains the LoRA adapter for a Qwen3-14B model fine-tuned to classify chemical synthesizability (P = synthesizable, N = unsynthesizable). Training uses a focal loss over the binary P/N labels. Prompts follow this template:

System prompt: "You are a materials science assistant. Given a chemical composition, answer only with 'P' (synthesizable/positive) or 'N' (non-synthesizable/negative)."

User query: "Is the material {composition} likely synthesizable? Answer with P (positive) or N (negative)."

The base checkpoint is Unsloth’s bitsandbytes 4-bit build (unsloth/Qwen3-14B-unsloth-bnb-4bit). Attaching this adapter reproduces the best validation performance among the evaluated epochs (Epoch 3).
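The snippet below is a minimal loading sketch, assuming the adapter repo id shown on this page and current Unsloth/PEFT APIs; adjust versions and paths to your environment:

```python
from unsloth import FastLanguageModel
from peft import PeftModel

# Load the 4-bit (bitsandbytes) base checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-14B-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach this LoRA adapter on top of the quantized base.
model = PeftModel.from_pretrained(
    model, "evenfarther/Qwen3-14b-chemical-synthesis-PN-adapter"
)
FastLanguageModel.for_inference(model)  # switch to the fast generation path
```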

  • Task: Binary classification (P = synthesizable, N = unsynthesizable)
  • Training Objective: QLoRA with focal loss (gamma = 2.0, alpha_P = 7.5, alpha_N = 1.0); a loss sketch follows this list
  • Max Sequence Length (train): 2048 tokens; Evaluation: 180 tokens
  • Dataset: train (train_llm_pn.jsonl) / validation (train_llm_pn.jsonl)
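The exact training loss implementation is not included in this repo; a minimal sketch with the hyperparameters above might look like the following, where the function name `pn_focal_loss` and the assumption that `logits` are the model's scores for the 'N' and 'P' answer tokens are mine:

```python
import torch
import torch.nn.functional as F

def pn_focal_loss(logits, targets, gamma=2.0, alpha_p=7.5, alpha_n=1.0):
    """Focal loss over the two answer classes (0 = 'N', 1 = 'P').

    logits:  (batch, 2) scores for the 'N' and 'P' answer tokens.
    targets: (batch,) long tensor, 1 for synthesizable, 0 otherwise.
    """
    log_probs = F.log_softmax(logits, dim=-1)                      # (batch, 2)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p of true class
    pt = log_pt.exp()
    alpha = torch.full_like(pt, alpha_n)
    alpha[targets == 1] = alpha_p  # up-weight the rarer P class (alpha_P = 7.5)
    # (1 - pt)^gamma down-weights easy, confidently classified examples.
    return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()
```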

Validation Metrics (Epoch 3 - Best)

| Metric | Value |
| --- | --- |
| TPR (P recall) | 0.9595 |
| TNR (N specificity) | 0.9622 |
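For reference, both metrics can be recomputed from saved predictions with a small helper (a sketch; the name `tpr_tnr` and the 'P'/'N' label strings are assumptions):

```python
def tpr_tnr(y_true, y_pred):
    """TPR = P recall; TNR = N specificity, from lists of 'P'/'N' labels."""
    tp = sum(t == "P" and p == "P" for t, p in zip(y_true, y_pred))
    fn = sum(t == "P" and p == "N" for t, p in zip(y_true, y_pred))
    tn = sum(t == "N" and p == "N" for t, p in zip(y_true, y_pred))
    fp = sum(t == "N" and p == "P" for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```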

Dataset Sources

The training and validation splits combine multiple public sources and internal curation:

  • P/U labelled data from J. Am. Chem. Soc. 2024, 146, 29, 19654-19659 (doi:10.1021/jacs.4c05840).
  • High-entropy materials data from Data in Brief 2018, 21, 2664-2678 (doi:10.1016/j.dib.2018.11.111).
  • Additional candidates via literature queries and manual screening of high-entropy materials.

VRAM & System Requirements

  • GPU VRAM: >=16 GB recommended for loading the 4-bit base with this adapter.
  • RAM: >=16 GB recommended for tokenization and batching.
  • Libraries: unsloth, transformers, peft, bitsandbytes.
  • CPU-only inference is not supported with bitsandbytes 4-bit weights (a quick check is sketched below).
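As a quick sanity check before loading (a sketch; nothing here is model-specific):

```python
import torch

# bitsandbytes 4-bit weights require a CUDA device; fail early on CPU-only hosts.
assert torch.cuda.is_available(), "A CUDA GPU is required for 4-bit inference."
total_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
print(f"GPU 0: {total_gb:.1f} GB VRAM (>=16 GB recommended)")
```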

Limitations & Notes

  • This adapter targets chemical synthesizability judgments; generalization outside this domain is not guaranteed.
  • For consistent results, use a chat template aligned with training (a chat_template.jinja is included in this checkpoint); see the usage sketch below.
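Building on the loading snippet above, a classification call might look like this (a sketch; SrTiO3 is a placeholder composition, and the prompt strings match the training template):

```python
messages = [
    {"role": "system", "content": (
        "You are a materials science assistant. Given a chemical composition, "
        "answer only with 'P' (synthesizable/positive) or 'N' (non-synthesizable/negative)."
    )},
    {"role": "user", "content": (
        "Is the material SrTiO3 likely synthesizable? "
        "Answer with P (positive) or N (negative)."
    )},
]

# The bundled chat_template.jinja governs formatting; depending on it, a Qwen3
# flag such as enable_thinking=False may also apply (assumption, not verified).
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True))  # 'P' or 'N'
```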