Any code demonstration to run the GGUF model?

#2
by tcapitss24 - opened

Would you provide a demo to run the model on llama.cpp or vLLM?
I thought I could serve it via Ollama. However, Ollama doesn't seem to support it yet.

Correct, Ollama does not support it yet, but you can use either of the others you mentioned.
For example, with llama.cpp:
llama-server --port 9090 --n-gpu-layers 99 --ctx-size 65536 --model Chandra-OCR-Q8_0.gguf --mmproj mmproj-F32.gguf
You can then connect to it from your programs through the endpoint http://localhost:9090/v1
It also serves a small web UI for testing, which you can try out at http://localhost:9090
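To call it from a script instead of the web UI, the /v1 endpoint speaks the OpenAI chat-completions format, with the image inlined as a base64 data URL. A rough Python sketch (the model field is just a label here, and the image type is assumed to be PNG; adjust to your setup):

```python
import base64
import json

def build_ocr_request(image_bytes: bytes, prompt: str,
                      model: str = "Chandra-OCR-Q8_0") -> dict:
    """Build an OpenAI-style chat-completion payload with one inline image.

    llama-server serves whatever GGUF it was started with, so the
    "model" field is effectively a label.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    # Inline the image as a base64 data URL.
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }
        ],
    }

# To actually send it (requires the llama-server above to be running):
# import urllib.request
# body = json.dumps(build_ocr_request(open("page.png", "rb").read(),
#                                     "Transcribe this page")).encode()
# req = urllib.request.Request(
#     "http://localhost:9090/v1/chat/completions",
#     data=body, headers={"Content-Type": "application/json"})
# resp = json.load(urllib.request.urlopen(req))
# print(resp["choices"][0]["message"]["content"])
```

The actual POST is left commented out since it needs a live server; the payload builder itself is self-contained.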


Thanks for this.
This worked for me.
llama-server --port 9090 --n-gpu-layers 99 --ctx-size 65536 --model Chandra-OCR-Q8_0.gguf --mmproj mmproj-F32.gguf


I have the model running on a VM, but when I try to send a request with a prompt and an image, it just returns a response based on my prompt that lists the parameters the model is using. It seems like it doesn't even consider the image I'm sending. Can anyone provide a valid request with an image and a prompt? I've tried many options.

What are you running it with? I use llama.cpp, started with llama-server as in the post above. I just paste an image into the web GUI, tell it "describe this", and it answers. Even with just an image and no prompt it will answer, since this is an OCR model.

Thank you! I was trying to send multiple requests with batches of images, but I got that working now, and the GUI is working fine too. The problem is that I can't replicate the responses I get from the GUI when I send requests from a script. I know there is a lot of internal preprocessing going on in the GUI, both for the image and for the prompt itself, but the GUI's responses are perfect and I need them to be for my project. Any idea how I can check exactly which parameters and preprocessing are used, so I can replicate them?

Try sending it one image at a time, without a prompt, and see what it returns.
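Also, the web GUI talks to the server over HTTP, so your browser's dev-tools network tab will show the exact request body it sends, including the sampling parameters. You can copy those into your script so your requests no longer depend on server-side defaults. A minimal sketch (the values below are placeholders, not the GUI's actual settings):

```python
def with_sampling(payload: dict, sampling: dict) -> dict:
    """Return a copy of a chat-completion payload with explicit
    top-level sampling parameters merged in."""
    merged = dict(payload)
    merged.update(sampling)
    return merged

base = {"messages": [{"role": "user", "content": "transcribe the image"}]}

# Placeholder values: replace with whatever the GUI actually sends.
pinned = with_sampling(base, {"temperature": 0.0,
                              "top_p": 0.95,
                              "max_tokens": 2048})
```

Pinning temperature low (or to 0) also makes the OCR output more repeatable between runs.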
