runtime error

Exit code: 1. Reason:

model-00004-of-00005.safetensors:  90%|████████▉ | 3.46G/3.86G [00:18<00:01, 401MB/s]
model-00004-of-00005.safetensors: 100%|██████████| 3.86G/3.86G [00:19<00:00, 201MB/s]
model-00005-of-00005.safetensors:   0%|          | 0.00/1.09G [00:00<?, ?B/s]
model-00005-of-00005.safetensors:   2%|▏         | 17.3M/1.09G [00:01<01:30, 11.9MB/s]
model-00005-of-00005.safetensors:  14%|█▍        | 151M/1.09G [00:02<00:14, 63.5MB/s]
model-00005-of-00005.safetensors:  51%|█████     | 554M/1.09G [00:04<00:03, 151MB/s]
model-00005-of-00005.safetensors: 100%|██████████| 1.09G/1.09G [00:05<00:00, 201MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 1, in <module>
    from inference import infer_single_image, model, processor
  File "/home/user/app/inference.py", line 26, in <module>
    base_model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 311, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4814, in from_pretrained
    device_map = _get_device_map(model, device_map, max_memory, hf_quantizer, torch_dtype, keep_in_fp32_regex)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1460, in _get_device_map
    hf_quantizer.validate_environment(device_map=device_map)
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_4bit.py", line 117, in validate_environment
    raise ValueError(
ValueError: Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set `llm_int8_enable_fp32_cpu_offload=True` and pass a custom `device_map` to `from_pretrained`. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.
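What the error means: during startup, inference.py loads a Qwen2.5-VL checkpoint with bitsandbytes 4-bit quantization, and accelerate's device map placed some modules on the CPU or disk because the GPU did not have enough free memory for the whole quantized model. Either run on hardware with enough GPU RAM, or explicitly opt in to fp32 CPU offload as the error message suggests. Below is a minimal sketch of the offload route; the model id and the max_memory caps are assumptions, not values from the original app:

```python
# Hedged sketch for inference.py: MODEL_ID and max_memory are placeholders.
import torch
from transformers import (
    AutoProcessor,
    BitsAndBytesConfig,
    Qwen2_5_VLForConditionalGeneration,
)

MODEL_ID = "Qwen/Qwen2.5-VL-7B-Instruct"  # assumed checkpoint; use your own

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    # Allow modules that do not fit on the GPU to stay on the CPU in
    # full precision instead of raising the ValueError above.
    llm_int8_enable_fp32_cpu_offload=True,
)

base_model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate split layers across GPU and CPU
    # max_memory={0: "14GiB", "cpu": "30GiB"},  # optional per-device caps
)
processor = AutoProcessor.from_pretrained(MODEL_ID)
```

Note that CPU-offloaded modules run unquantized and markedly slower, so this is a workaround rather than a fix; upgrading the Space to GPU hardware with enough VRAM for the full 4-bit model avoids offload entirely.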
