whisper-medium-en-cv-6.1

This model is a fine-tuned version of openai/whisper-medium.en on the Common Voice 17.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1564
  • WER: 35.3644
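
For quick experimentation, the checkpoint can be loaded with the transformers ASR pipeline. A minimal usage sketch (assuming the model is published as xbilek25/whisper-medium-en-cv-6.1 and that a local sample.wav exists):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="xbilek25/whisper-medium-en-cv-6.1",  # repo id assumed from this card
)

# Transcribe a local audio file (hypothetical path for illustration).
print(asr("sample.wav")["text"])
```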

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
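
The exact train/eval splits are not documented here, but the base dataset named above is available on the Hub. A hedged loading sketch, assuming the mozilla-foundation/common_voice_17_0 repo and its English configuration (the dataset is gated, so accepting its terms and authenticating may be required):

```python
from datasets import load_dataset

# Assumed repo id and config; the card only names "Common Voice 17.0".
cv_test = load_dataset(
    "mozilla-foundation/common_voice_17_0",
    "en",
    split="test",
)
print(cv_test[0]["sentence"])
```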

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 48
  • eval_batch_size: 4
  • seed: 42
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 210
  • training_steps: 2100
  • mixed_precision_training: Native AMP
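
The card does not include the training script, but the values above map directly onto transformers training arguments. A hypothetical reconstruction, assuming the standard Seq2SeqTrainingArguments-based Whisper fine-tuning setup:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the configuration implied by the hyperparameters listed above;
# output_dir is an assumption, the rest mirror the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-en-cv-6.1",
    learning_rate=3e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=210,
    max_steps=2100,
    fp16=True,  # Native AMP mixed-precision training
)
```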

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| No log        | 0      | 0    | 2.4185          | 46.5401 |
| 0.8149        | 0.1429 | 300  | 1.0591          | 38.1506 |
| 0.2115        | 1.1429 | 600  | 1.0779          | 40.8757 |
| 0.0598        | 2.1429 | 900  | 1.1087          | 36.4666 |
| 0.0216        | 3.1429 | 1200 | 1.1280          | 35.9155 |
| 0.0089        | 4.1429 | 1500 | 1.1617          | 35.1806 |
| 0.0024        | 5.1429 | 1800 | 1.1517          | 34.9357 |
| 0.0012        | 6.1429 | 2100 | 1.1564          | 35.3644 |
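
The WER column above is reported as a percentage. A minimal sketch of computing it with the evaluate library (an assumption; the card does not state which WER implementation was used):

```python
import evaluate

# Standard word error rate; compute() returns a fraction, so scale by 100
# to match the percentage-style values reported in the table.
wer_metric = evaluate.load("wer")
wer = 100 * wer_metric.compute(
    predictions=["hello world"],          # model transcriptions
    references=["hello there world"],     # ground-truth transcripts
)
print(f"WER: {wer:.2f}")
```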

Framework versions

  • Transformers 4.51.3
  • PyTorch 2.6.0+cu124
  • Datasets 3.5.1
  • Tokenizers 0.21.1