https://huggingface.co/vanta-research/apollo-astralis-2
#1619
by
oraculus541
- opened
We already tried this model on the day of its release, but it unfortunately failed due to a missing tokenizer.model:
apollo-astralis-2 INFO:hf-to-gguf:Set meta model
apollo-astralis-2 INFO:hf-to-gguf:Set model parameters
apollo-astralis-2 INFO:hf-to-gguf:gguf: context length = 262144
apollo-astralis-2 INFO:hf-to-gguf:gguf: embedding length = 4096
apollo-astralis-2 INFO:hf-to-gguf:gguf: feed forward length = 14336
apollo-astralis-2 INFO:hf-to-gguf:gguf: head count = 32
apollo-astralis-2 INFO:hf-to-gguf:gguf: key-value head count = 8
apollo-astralis-2 INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-05
apollo-astralis-2 INFO:hf-to-gguf:gguf: file type = 1025
apollo-astralis-2 INFO:hf-to-gguf:Set model quantization version
apollo-astralis-2 INFO:hf-to-gguf:Set model tokenizer
apollo-astralis-2 You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
apollo-astralis-2 Traceback (most recent call last):
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 2361, in set_vocab
apollo-astralis-2 self._set_vocab_sentencepiece()
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1249, in _set_vocab_sentencepiece
apollo-astralis-2 tokens, scores, toktypes = self._create_vocab_sentencepiece()
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1266, in _create_vocab_sentencepiece
apollo-astralis-2 raise FileNotFoundError(f"File not found: {tokenizer_path}")
apollo-astralis-2 FileNotFoundError: File not found: apollo-astralis-2/tokenizer.model
apollo-astralis-2
apollo-astralis-2 During handling of the above exception, another exception occurred:
apollo-astralis-2
apollo-astralis-2 Traceback (most recent call last):
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 2364, in set_vocab
apollo-astralis-2 self._set_vocab_llama_hf()
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1351, in _set_vocab_llama_hf
apollo-astralis-2 vocab = gguf.LlamaHfVocab(self.dir_model)
apollo-astralis-2 File "/llmjob/llama.cpp/gguf-py/gguf/vocab.py", line 515, in __init__
apollo-astralis-2 raise TypeError('Llama 3 must be converted with BpeVocab')
apollo-astralis-2 TypeError: Llama 3 must be converted with BpeVocab
apollo-astralis-2
apollo-astralis-2 During handling of the above exception, another exception occurred:
apollo-astralis-2
apollo-astralis-2 Traceback (most recent call last):
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 10418, in <module>
apollo-astralis-2 main()
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 10412, in main
apollo-astralis-2 model_instance.write()
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 671, in write
apollo-astralis-2 self.prepare_metadata(vocab_only=False)
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 792, in prepare_metadata
apollo-astralis-2 self.set_vocab()
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 2367, in set_vocab
apollo-astralis-2 self._set_vocab_gpt2()
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1185, in _set_vocab_gpt2
apollo-astralis-2 tokens, toktypes, tokpre = self.get_vocab_base()
apollo-astralis-2 File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 893, in get_vocab_base
apollo-astralis-2 tokenizer = AutoTokenizer.from_pretrained(self.dir_model)
apollo-astralis-2 File "/llmjob/share/python/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 1125, in from_pretrained
apollo-astralis-2 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
apollo-astralis-2 File "/llmjob/share/python/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2070, in from_pretrained
apollo-astralis-2 return cls._from_pretrained(
apollo-astralis-2 File "/llmjob/share/python/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2316, in _from_pretrained
apollo-astralis-2 tokenizer = cls(*init_inputs, **init_kwargs)
apollo-astralis-2 File "/llmjob/share/python/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 154, in __init__
apollo-astralis-2 super().__init__(
apollo-astralis-2 File "/llmjob/share/python/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 178, in __init__
apollo-astralis-2 super().__init__(**kwargs)
apollo-astralis-2 File "/llmjob/share/python/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1443, in __init__
apollo-astralis-2 self._set_model_specific_special_tokens(special_tokens=self.extra_special_tokens)
apollo-astralis-2 File "/llmjob/share/python/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1181, in _set_model_specific_special_tokens
apollo-astralis-2 self.SPECIAL_TOKENS_ATTRIBUTES = self.SPECIAL_TOKENS_ATTRIBUTES + list(special_tokens.keys())
apollo-astralis-2 AttributeError: 'list' object has no attribute 'keys'
apollo-astralis-2 yes: standard output: Broken pipe
apollo-astralis-2 job finished, status 1
apollo-astralis-2 job-done<0 apollo-astralis-2 noquant 1>
apollo-astralis-2
apollo-astralis-2 NAME: apollo-astralis-2
apollo-astralis-2 TIME: Fri Dec 12 23:26:54 2025
apollo-astralis-2 WORKER: nico1
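For context, the traceback shows convert_hf_to_gguf.py trying three vocab paths in order: SentencePiece (which needs tokenizer.model), the Llama HF vocab (rejected with "Llama 3 must be converted with BpeVocab"), and finally the GPT-2/BPE path via AutoTokenizer, which then crashed. A minimal pre-flight sketch for checking which path a local model directory can take, assuming the standard Hugging Face file names (the function name is hypothetical):

```python
# Sketch: guess which vocab path convert_hf_to_gguf.py could take for a
# local model directory, mirroring the fallback order in the traceback above.
from pathlib import Path

def tokenizer_backend(model_dir: str) -> str:
    """Return which tokenizer backend the directory appears to support."""
    d = Path(model_dir)
    if (d / "tokenizer.model").is_file():
        return "sentencepiece"   # _set_vocab_sentencepiece()
    if (d / "tokenizer.json").is_file():
        return "bpe"             # _set_vocab_gpt2() via AutoTokenizer
    return "missing"             # conversion will fail like the log above
```

A Llama-3-style repo ships tokenizer.json but no tokenizer.model, so only the BPE path can succeed for it.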
Hey Nico!
Oraculus let me know that the repo was missing a tokenizer file. Sorry about that! I just got it uploaded. It should work now.
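If it helps, the upload can be sanity-checked before re-queuing. A minimal sketch, assuming the files a BPE/Llama-3-style tokenizer needs are tokenizer.json and tokenizer_config.json (the helper name and the expected set are assumptions, not part of the conversion tooling):

```python
# Sketch: check a repo's file listing for the tokenizer files that the
# BPE conversion path needs. The expected set is an assumption for a
# Llama-3-style (BPE) tokenizer.
def missing_tokenizer_files(repo_files: list[str]) -> list[str]:
    """Return the expected tokenizer files absent from a repo file listing."""
    expected = {"tokenizer.json", "tokenizer_config.json"}
    return sorted(expected - set(repo_files))

# Usage (requires network and the huggingface_hub package):
#   from huggingface_hub import list_repo_files
#   print(missing_tokenizer_files(list_repo_files("vanta-research/apollo-astralis-2")))
```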
Thanks!
It's queued!
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#apollo-astralis-2-GGUF for quants to appear.