Hugging Face
qnixsynapse/Gemma-4-26B-MXFP4-BF16-GGUF
Likes: 5
Tags: GGUF, conversational
This repository has no model card.
Downloads last month: 2,788

GGUF
Model size: 25B params
Architecture: gemma4
Chat template: included in the GGUF metadata

Hardware compatibility
BF16 (16-bit): 17.5 GB
Inference Providers
This model isn't deployed by any Inference Provider.
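Since no Inference Provider hosts this model, the GGUF file has to be downloaded and run locally (e.g. with llama.cpp). The sketch below only shows how a Hugging Face Hub download URL for a file in this repository is constructed; the filename used here is a placeholder assumption, since the actual file listing is not shown on this page — check the repo's Files tab for the real name.

```python
# Hedged sketch: building the Hub "resolve" URL for a file in this repo.
# FILENAME is hypothetical -- verify the actual GGUF filename in the
# repository's file listing before downloading.
REPO_ID = "qnixsynapse/Gemma-4-26B-MXFP4-BF16-GGUF"
FILENAME = "gemma-4-26b-bf16.gguf"  # assumption, not confirmed by this page

def resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Return the direct-download URL the Hub serves for a repo file."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

print(resolve_url(REPO_ID, FILENAME))
# Once downloaded (~17.5 GB for the BF16 file), the model can be run
# locally with a GGUF-capable runtime such as llama.cpp, e.g.:
#   llama-cli -m gemma-4-26b-bf16.gguf -p "Hello"
```

In practice, `huggingface_hub.hf_hub_download(repo_id=REPO_ID, filename=FILENAME)` performs the same fetch with caching and resume support.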