How to use Andrija/SRoBERTa-F with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="Andrija/SRoBERTa-F")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Andrija/SRoBERTa-F")
model = AutoModelForMaskedLM.from_pretrained("Andrija/SRoBERTa-F")
```

Trained for 3 epochs (9.6 million steps) on 43 GB of Croatian and Serbian text drawn from the Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr datasets.
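The scores that the fill-mask pipeline returns are softmax probabilities over the vocabulary at the masked position. A minimal, self-contained sketch of that step (the vocabulary and logits below are made up for illustration, not taken from the model):

```python
import math

# Toy illustration of what fill-mask does at the <mask> position:
# the MLM head produces one logit per vocabulary token, and softmax
# turns those logits into the "score" values the pipeline reports.
vocab = ["grad", "jezik", "dan"]   # hypothetical toy vocabulary
logits = [2.0, 0.5, -1.0]          # pretend MLM-head output at the mask

exps = [math.exp(x) for x in logits]
scores = [e / sum(exps) for e in exps]

# Prints candidates ranked by score: grad 0.786, jezik 0.175, dan 0.039
for token, score in sorted(zip(vocab, scores), key=lambda p: -p[1]):
    print(f"{token}: {score:.3f}")
```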
Validation: perplexity computed over 1,620,487 sentences. Perplexity: 6.02. Start loss: 8.6. Final loss: 2.0. Thoughts: the model could be trained further; training had not stagnated.
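Language-model perplexity is conventionally the exponential of the mean cross-entropy loss, so the reported losses and perplexity can be sanity-checked against each other (the reported final loss of 2.0 appears to be a rounded figure; the exact value behind the 6.02 perplexity is not given):

```python
import math

start_loss = 8.6
final_loss = 2.0   # as reported; likely rounded

# Perplexity = exp(mean cross-entropy loss)
print(math.exp(start_loss))   # ≈ 5432, the starting perplexity
print(math.exp(final_loss))   # ≈ 7.39

# The reported perplexity of 6.02 corresponds to a loss of:
print(math.log(6.02))         # ≈ 1.80, i.e. roughly the final loss of 2.0
```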
| Model | #params | Arch. | Training data |
|---|---|---|---|
| Andrija/SRoBERTa-F | 80M | Fifth | Leipzig Corpus, OSCAR, srWac, hrWac, cc100-hr and cc100-sr (43 GB of text) |