Runtime error

Exit code: 1. Reason:
er.json: 100%|██████████| 6.72M/6.72M [00:00<00:00, 28.4MB/s]
special_tokens_map.json:   0%|          | 0.00/968 [00:00<?, ?B/s]
special_tokens_map.json: 100%|██████████| 968/968 [00:00<00:00, 7.30MB/s]
Traceback (most recent call last):
  File "/app/app.py", line 6, in <module>
    model = SentenceTransformer("sbintuitions/sarashina-embedding-v2-1b")
  File "/usr/local/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 327, in __init__
    modules, self.module_kwargs = self._load_sbert_model(
  File "/usr/local/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 2305, in _load_sbert_model
    module = module_class.load(
  File "/usr/local/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 365, in load
    return cls(model_name_or_path=model_name_or_path, **init_kwargs)
  File "/usr/local/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 102, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 1156, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2113, in from_pretrained
    return cls._from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2359, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/llama/tokenization_llama_fast.py", line 154, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 108, in __init__
    raise ValueError(
ValueError: Cannot instantiate this tokenizer from a slow version. If it's based on sentencepiece, make sure you have sentencepiece installed.

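A minimal sketch of the likely fix, assuming the failure is exactly what the ValueError names: the model's fast Llama tokenizer must be converted from slow, sentencepiece-based tokenizer files, and the sentencepiece package is not installed in the container. Adding sentencepiece (and, if conversion still fails, protobuf) to the environment should let the load in /app/app.py succeed. The model ID is taken from the traceback; the test sentence and the package list are illustrative assumptions, not a verified requirements file.

# Assumed fix: install the missing dependencies first, e.g. via
# requirements.txt or `pip install sentencepiece protobuf`.

from sentence_transformers import SentenceTransformer

# With sentencepiece available, transformers can build the fast
# LlamaTokenizerFast from the repo's slow tokenizer files instead of
# raising "Cannot instantiate this tokenizer from a slow version".
model = SentenceTransformer("sbintuitions/sarashina-embedding-v2-1b")

# Quick sanity check: encode one sentence and print the embedding shape.
embeddings = model.encode(["動作確認用のテスト文です。"])  # "A test sentence for a sanity check."
print(embeddings.shape)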