# Whisper Large-v3 CT2 Optimized for Code-Switching

A CTranslate2-converted version of this model for faster inference with the faster-whisper library.
## Performance

- Roughly 3-5x faster than the standard Transformers implementation
- Optimized for GPU inference
- Lower memory footprint
## Usage

Load the model with the faster-whisper library. See the inference examples in the repository.
## Model tree

- This model: Lokesh2482/whisper-large-v3-cs-lora-ct2
- Base model: openai/whisper-large-v3