---
library_name: transformers
language:
  - nan
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ZH
tags:
  - generated_from_trainer
datasets:
  - sarahwei/Taiwanese-Minnan-Sutiau
metrics:
  - bleu
model-index:
  - name: helsinki_new_ver5.2
    results: []
---

# helsinki_new_ver5.2

This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-ZH on the sarahwei/Taiwanese-Minnan-Sutiau dataset. It achieves the following results on the evaluation set:

- Loss: 0.2017
- Bleu: 2.0589
- Ter: 93.3806
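A minimal inference sketch is shown below. It assumes the model is published under the repository id `Curiousfox/helsinki_new_ver5.2` (inferred from this card, not confirmed) and that the standard `transformers` translation pipeline applies; adjust the id if it differs.

```python
# Hedged example: repository id "Curiousfox/helsinki_new_ver5.2" is an
# assumption based on this card's name -- replace it with the actual repo id.
from transformers import pipeline

# Load the fine-tuned MarianMT checkpoint as a translation pipeline.
translator = pipeline("translation", model="Curiousfox/helsinki_new_ver5.2")

# Translate an English sentence; the model targets Taiwanese Minnan (nan).
result = translator("Hello, how are you?", max_length=64)
print(result[0]["translation_text"])
```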

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 14000
- mixed_precision_training: Native AMP
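For reference, a linear scheduler with 1,000 warmup steps over 14,000 total steps ramps the learning rate from 0 up to 3e-06, then decays it linearly back to 0. A minimal sketch of that schedule, mirroring the semantics of `transformers`' `get_linear_schedule_with_warmup` (the function name here is illustrative, not part of this repo):

```python
def linear_schedule_with_warmup(step, warmup_steps=1000, total_steps=14000, base_lr=3e-06):
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        # Warmup phase: lr grows linearly from 0 to base_lr.
        return base_lr * step / warmup_steps
    # Decay phase: lr shrinks linearly from base_lr to 0.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Under this schedule the learning rate peaks at 3e-06 at step 1000 and reaches 0 at step 14000.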

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Bleu   | Ter      |
|:-------------:|:------:|:-----:|:---------------:|:------:|:--------:|
| 0.2448        | 0.4230 | 1000  | 0.2484          | 0.2369 | 95.6659  |
| 0.22          | 0.8460 | 2000  | 0.2238          | 0.4532 | 93.8534  |
| 0.1998        | 1.2690 | 3000  | 0.2154          | 0.5837 | 92.9078  |
| 0.1903        | 1.6920 | 4000  | 0.2099          | 0.8498 | 92.0410  |
| 0.175         | 2.1151 | 5000  | 0.2070          | 1.1658 | 93.2230  |
| 0.1675        | 2.5381 | 6000  | 0.2051          | 1.6545 | 93.2230  |
| 0.1684        | 2.9611 | 7000  | 0.2039          | 1.3913 | 92.1986  |
| 0.1573        | 3.3841 | 8000  | 0.2032          | 1.5844 | 92.0410  |
| 0.1564        | 3.8071 | 9000  | 0.2027          | 1.0087 | 104.3341 |
| 0.1532        | 4.2301 | 10000 | 0.2018          | 2.1594 | 92.9866  |
| 0.1486        | 4.6531 | 11000 | 0.2019          | 2.2898 | 93.8534  |
| 0.1576        | 5.0761 | 12000 | 0.2018          | 2.0100 | 92.4350  |
| 0.1439        | 5.4992 | 13000 | 0.2016          | 2.0117 | 93.3806  |
| 0.1486        | 5.9222 | 14000 | 0.2017          | 2.0589 | 93.3806  |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1