Pruned NLLB ExecuTorch Model

This is a pruned version of the NLLB-200 model exported to the ExecuTorch (.pte) format for mobile deployment.

Model Information

  • Base Model: NLLB-200-distilled-600M
  • Format: ExecuTorch (.pte)
  • Retained Languages (after pruning): eng_Latn, deu_Latn, als_Latn, ell_Grek, ita_Latn, tur_Latn
  • Quantization: None (FP32 weights)
  • Purpose: On-device translation for mobile applications

Files

  • nllb_model.pte - ExecuTorch model file
  • tokenizer.json - Tokenizer configuration
  • Other supporting files

Usage

This model is designed for use with react-native-executorch in mobile applications.

import { useExecutorchModule } from 'react-native-executorch';

const model = useExecutorchModule({
  modelSource: require('./nllb_model.pte'),
});
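
The .pte file is loaded asynchronously, so the app should wait until the module is ready before requesting a translation. Below is a minimal sketch of gating the UI on load state; it assumes the hook exposes isReady and error fields, so verify the exact return shape against the react-native-executorch version you are using.

import React from 'react';
import { Text } from 'react-native';
import { useExecutorchModule } from 'react-native-executorch';

// Minimal sketch: show load progress before allowing translation requests.
// NOTE: `isReady` and `error` are assumed field names on the hook's return
// value; check the react-native-executorch docs for your installed version.
export function TranslatorStatus() {
  const model = useExecutorchModule({
    modelSource: require('./nllb_model.pte'),
  });

  if (model.error) {
    return <Text>Failed to load model: {String(model.error)}</Text>;
  }
  return <Text>{model.isReady ? 'Model ready' : 'Loading model…'}</Text>;
}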

Supported Languages

The model supports translation between the following languages:

  • eng_Latn (English)
  • deu_Latn (German)
  • als_Latn (Tosk Albanian)
  • ell_Grek (Greek)
  • ita_Latn (Italian)
  • tur_Latn (Turkish)
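
NLLB-200 selects the translation direction with FLORES-200 language tokens: the source text is tagged with its language code, and the target language code is forced as the first token the decoder generates. Because this checkpoint was pruned down to the six languages above, it can help to validate requested directions against the retained set. The helper below is purely illustrative and not part of the model or library:

// FLORES-200 codes retained in this pruned checkpoint.
export const SUPPORTED_LANGUAGES = [
  'eng_Latn', // English
  'deu_Latn', // German
  'als_Latn', // Tosk Albanian
  'ell_Grek', // Greek
  'ita_Latn', // Italian
  'tur_Latn', // Turkish
] as const;

export type NllbLanguage = (typeof SUPPORTED_LANGUAGES)[number];

// Guard translation requests: directions outside this set are not
// supported by this pruned checkpoint.
export function isSupported(code: string): code is NllbLanguage {
  return (SUPPORTED_LANGUAGES as readonly string[]).includes(code);
}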

License

This model follows the license of the base NLLB-200 model (CC-BY-NC-4.0).

Citation

If you use this model, please cite the original NLLB paper:

@article{nllb2022,
  title={No Language Left Behind: Scaling Human-Centered Machine Translation},
  author={Costa-jussà, Marta R. and Cross, James and Çelebi, Onur and others},
  journal={arXiv preprint arXiv:2207.04672},
  year={2022}
}