How Much GPU Memory (VRAM) is Needed to Run This Model?

#3 opened by Vishva007

I'm deploying the datalab-to/chandra OCR model for document processing and need to understand GPU memory requirements for inference.

Key Questions:

What's the minimum GPU VRAM needed for FP16/BF16 inference?

Does the model support quantization (AWQ, GPTQ, 8-bit)? (rough loading sketch below)

How does memory scale with variable document resolution?

Any real-world experience running on RTX 4090, A100, or consumer GPUs?

Context:

Python, using the Hugging Face transformers library

Docker with NVIDIA CUDA support

Batch document processing optimization

Any deployment insights appreciated!
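For reference, this is roughly the loading path I have in mind — a minimal sketch that assumes chandra loads through the standard transformers `trust_remote_code` route; the exact model/processor classes (and whether AWQ/GPTQ/8-bit checkpoints exist at all) are exactly what I'm unsure about:

```python
# Minimal loading sketch -- assumes datalab-to/chandra loads via the standard
# transformers path with trust_remote_code; the exact classes may differ.
import torch
from transformers import AutoModel, AutoProcessor, BitsAndBytesConfig

MODEL_ID = "datalab-to/chandra"

# BF16 baseline (the FP16/BF16 VRAM question above).
model = AutoModel.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",            # requires accelerate
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)

# Alternative: 8-bit weights via bitsandbytes, if the architecture supports it.
# model = AutoModel.from_pretrained(
#     MODEL_ID,
#     quantization_config=BitsAndBytesConfig(load_in_8bit=True),
#     device_map="auto",
#     trust_remote_code=True,
# )
```

The `device_map="auto"` path needs accelerate installed, and the 8-bit variant additionally needs bitsandbytes.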

Here are my test results:
Chandra method: HF
GPU: RTX 5090 32GB
Memory usage: around 18.9 GB (utilization 30–50%)
PDF: 40 MB, 340 pages
Speed: roughly 2–3 minutes per page
Just for reference.
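At 2–3 minutes per page, the 340-page PDF works out to roughly 11–17 hours end to end.

In case anyone wants to reproduce or compare the peak-memory number, here is a small sketch of how it can be measured around an inference call — standard torch.cuda utilities only; `run_ocr_on_page` is just a placeholder for whatever inference call you use:

```python
# Log peak VRAM around a single inference call.
import torch

torch.cuda.reset_peak_memory_stats()

result = run_ocr_on_page(page_image)  # hypothetical: your chandra inference call

peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"Peak allocated VRAM: {peak_gb:.1f} GB")
```

Note that nvidia-smi will usually read somewhat higher than `max_memory_allocated`, since it also counts the CUDA context and the caching allocator's reserved-but-unused memory.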

Thanks a lot @LastXuanShen42! That's exactly the data I needed. Appreciate you sharing the benchmarks!

Can anyone help me with this? I'm trying to run chandra on Kaggle, but running the OCR model on even a single image demands about 36 GB of GPU memory. I can't get around this on Kaggle's hardware, and I really need this model. Any suggestions?
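Not verified on chandra specifically, but two standard levers for squeezing a transformers vision model onto a smaller card (Kaggle's free GPUs are typically 16 GB T4s or P100s) are loading the weights in 8-bit, as in the sketch above, and capping the page image resolution before it reaches the processor, since activation memory grows with input size. A rough sketch — the 1536-pixel cap is just an example value:

```python
# Rough sketch, not verified against chandra's exact preprocessing:
# cap the longest side of each page scan before handing it to the processor.
from PIL import Image

def load_downscaled(path: str, max_side: int = 1536) -> Image.Image:
    """Open a page scan and limit its longest side to max_side pixels."""
    img = Image.open(path).convert("RGB")
    scale = max_side / max(img.size)
    if scale < 1.0:
        img = img.resize((round(img.width * scale), round(img.height * scale)))
    return img

# page = load_downscaled("page_001.png")  # hypothetical input file
# ...then run the model on `page` as usual
```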
