Built with Axolotl

Axolotl config (axolotl version 0.9.1.post1):

```yaml
base_model: meta-llama/Llama-3.1-8B-Instruct
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer
gradient_accumulation_steps: 2
micro_batch_size: 8
num_epochs: 4
learning_rate: 0.0001
optimizer: adamw_torch_fused
lr_scheduler: cosine
load_in_8bit: false
load_in_4bit: false
adapter: lora
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
- q_proj
- k_proj
- v_proj
datasets:
- path: /workspace/FinLoRA/data/train/headline_train.jsonl
  type:
    field_instruction: context
    field_output: target
    format: '[INST] {instruction} [/INST]'
    no_input_format: '[INST] {instruction} [/INST]'
val_set_size: 0.02
output_dir: /workspace/FinLoRA/lora/axolotl-output/headline_llama_3_1_8b_fp16_r8
sequence_len: 4096
gradient_checkpointing: true
logging_steps: 500
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
  pad_token: <|end_of_text|>
deepspeed: deepspeed_configs/zero1.json
bf16: auto
tf32: false
chat_template: llama3
wandb_name: headline_llama_3_1_8b_fp16_r8
```

workspace/FinLoRA/lora/axolotl-output/headline_llama_3_1_8b_fp16_r8

This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct on the /workspace/FinLoRA/data/train/headline_train.jsonl dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0465

Model description

This is a LoRA adapter (r=8, alpha=16, dropout 0.05, applied to the q_proj, k_proj, and v_proj projections) for meta-llama/Llama-3.1-8B-Instruct, trained with Axolotl on the FinLoRA headline dataset referenced in the config above.
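
Below is a minimal usage sketch for loading the adapter on top of the base model with PEFT and generating a response. It assumes the adapter weights saved to the output_dir above are available at some local path or Hub repo id (adapter_path below is a placeholder), follows the chat_template: llama3 setting from the config, and uses a made-up prompt in the style of a headline-classification task.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_path = "path/to/headline_llama_3_1_8b_fp16_r8"  # placeholder: local output_dir or a Hub repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_path)  # attach the LoRA adapter
model.eval()

# Hypothetical prompt in the style of the headline task; real prompts should match the training data.
messages = [{"role": "user", "content": "Does the news headline talk about price going down? Headline: Crude oil slips on demand worries. Answer:"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

Note that training formatted examples with the raw [INST] {instruction} [/INST] template, so prompting with that template directly may match the training distribution more closely than the chat template.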

Intended uses & limitations

More information needed

Training and evaluation data

Training used /workspace/FinLoRA/data/train/headline_train.jsonl, with val_set_size: 0.02, i.e. roughly 2% of the data held out as the evaluation split. Each record's context field is used as the instruction and its target field as the output, wrapped in the [INST] {instruction} [/INST] template from the config, as sketched below.
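
The following sketch shows that mapping for a single JSONL record. It is not Axolotl's internal code, and the example record is hypothetical; the only assumptions taken from the config are the context and target field names and the prompt format.

```python
import json

# Prompt template and field names taken from the axolotl config above.
PROMPT_FORMAT = "[INST] {instruction} [/INST]"

def format_example(record: dict) -> dict:
    """Build a prompt/completion pair roughly as configured (a sketch, not Axolotl's implementation)."""
    prompt = PROMPT_FORMAT.format(instruction=record["context"])  # field_instruction: context
    completion = record["target"]                                 # field_output: target
    return {"prompt": prompt, "completion": completion}

# Hypothetical record in the style of headline_train.jsonl (the real records' content may differ).
line = json.dumps({
    "context": "Does the news headline talk about price going up? Headline: Gold futures edge higher. Answer:",
    "target": "Yes",
})
print(format_example(json.loads(line)))
```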

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 3
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 48
  • total_eval_batch_size: 24
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • num_epochs: 4.0
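
The total batch sizes above follow from the per-device settings; a quick sanity check of the arithmetic:

```python
# Effective batch sizes implied by the hyperparameters above.
micro_batch_size = 8              # per-device train/eval batch size
gradient_accumulation_steps = 2
num_devices = 3

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = micro_batch_size * num_devices  # no gradient accumulation at eval time

print(total_train_batch_size)  # 48
print(total_eval_batch_size)   # 24
```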

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0006 | 1    | 6.7673          |
| No log        | 0.2504 | 420  | 0.0517          |
| 0.2063        | 0.5007 | 840  | 0.0496          |
| 0.0491        | 0.7511 | 1260 | 0.0389          |
| 0.0424        | 1.0012 | 1680 | 0.0421          |
| 0.0363        | 1.2516 | 2100 | 0.0386          |
| 0.035         | 1.5019 | 2520 | 0.0441          |
| 0.035         | 1.7523 | 2940 | 0.0452          |
| 0.0321        | 2.0024 | 3360 | 0.0408          |
| 0.031         | 2.2528 | 3780 | 0.0442          |
| 0.0249        | 2.5031 | 4200 | 0.0414          |
| 0.0263        | 2.7535 | 4620 | 0.0404          |
| 0.0245        | 3.0036 | 5040 | 0.0420          |
| 0.0245        | 3.2539 | 5460 | 0.0484          |
| 0.0183        | 3.5043 | 5880 | 0.0483          |
| 0.0164        | 3.7547 | 6300 | 0.0465          |

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • PyTorch 2.8.0.dev20250319+cu128
  • Datasets 3.5.1
  • Tokenizers 0.21.1