Text Generation
Transformers
PyTorch
TensorBoard
Safetensors
bloom
Eval Results (legacy)
text-generation-inference
Instructions to use bigscience/bloom with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use bigscience/bloom with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="bigscience/bloom")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use bigscience/bloom with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "bigscience/bloom"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker

```shell
docker model run hf.co/bigscience/bloom
```
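The curl call above can equally be made from Python. Below is a minimal sketch using only the standard library; the endpoint URL and request fields mirror the vLLM command above, while the helper names (`build_payload`, `complete`) are illustrative, not part of any library:

```python
import json
import urllib.request

VLLM_URL = "http://localhost:8000/v1/completions"  # default vLLM port


def build_payload(model: str, prompt: str, max_tokens: int = 512,
                  temperature: float = 0.5) -> dict:
    """Assemble an OpenAI-compatible /v1/completions request body."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def complete(payload: dict) -> dict:
    """POST the payload to a running vLLM server and return the JSON reply."""
    req = urllib.request.Request(
        VLLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_payload("bigscience/bloom", "Once upon a time,")
print(json.dumps(payload, indent=2))
# complete(payload) would send it to a server started with `vllm serve`.
```

Because the endpoint is OpenAI-compatible, the official `openai` client can also be pointed at `http://localhost:8000/v1` instead of hand-rolling the request.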
- SGLang
How to use bigscience/bloom with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "bigscience/bloom" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "bigscience/bloom" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "bigscience/bloom",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use bigscience/bloom with Docker Model Runner:
```shell
docker model run hf.co/bigscience/bloom
```
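The vLLM and SGLang servers above both return OpenAI-style completion responses, so extracting the generated text is the same in either case. A minimal stdlib sketch against a mocked reply (the `choices[0].text` shape is assumed from the OpenAI completions format that both servers follow):

```python
def extract_text(response: dict) -> str:
    """Pull the generated text out of an OpenAI-style /v1/completions reply."""
    return response["choices"][0]["text"]


# Mocked reply in the shape both vLLM and SGLang return:
mock = {
    "model": "bigscience/bloom",
    "choices": [
        {"index": 0, "text": " there was a tiny kingdom.", "finish_reason": "length"}
    ],
}
print(extract_text(mock))  # → " there was a tiny kingdom."
```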
- Multi turn support (3) · #282 opened over 1 year ago by ansumanbehera
- training BLOOM/BLOOMZ for text summarization (3) · #277 opened about 2 years ago by almonzer
- Doesn't work. (1) · #276 opened about 2 years ago by PacmanGraphics
- Is it feasible to use this checkpoint for multi node inference via deepspeed Zero stage-3 (3) · #275 opened about 2 years ago by YuTian8328
- Do bloom models use `<s>` and `</s>` tokens? (1) · #274 opened over 2 years ago by abuelnasr
- [AUTOMATED] Model Memory Requirements · #273 opened over 2 years ago by model-sizer-bot
- Adding Evaluation Results · #272 opened over 2 years ago by leaderboard-pr-bot
- Bloom Tokenization · #269 opened over 2 years ago by Niazi
- How to quantize bloom with 4-bit (2) · #268 opened over 2 years ago by char-1ee
- Please, check the intermediate models you uploaded seriously!!! · #267 opened over 2 years ago by ErikaaWang
- AttributeError: 'dict' object has no attribute 'full_determinism' when trying to train the model (3) · #266 opened over 2 years ago by anuruddhak
- base_model_prefix = "transformer" (6) · #265 opened over 2 years ago by Cyrile
- How do I increase or decrease size of response in API call? (1) · #263 opened over 2 years ago by Iluvmelons
- Tokenization Vocabulary · #262 opened over 2 years ago by kargaranamir
- getting this bug on the 560m model · #260 opened almost 3 years ago by surya-narayanan
- Does Bloom adhere to the EU responsibly sourced data initiative (👍 2) · #258 opened almost 3 years ago by HunleyExpress
- Can I use Bloom on an A5000 GPU? · #257 opened almost 3 years ago by smcg1579
- Please add Persian language (👍 2, 2) · #256 opened almost 3 years ago by AliEdalat
- Marshallese Language has no AI model or documentation and will go extinct in 10 Years (1) · #255 opened almost 3 years ago by SwiftyDust
- How to design an inference performance benchmark for text model? (👍 1) · #254 opened almost 3 years ago by ricardoooo
- bloom for both generation and validation (good choice?) · #249 opened almost 3 years ago by lars2030
- Output getting truncated while using Langchain (1) · #247 opened about 3 years ago by hepbc
- Extending BLOOM to Dutch - tips for hyperparameters · #246 opened about 3 years ago by matthieumeeus
- 0423 · #241 opened about 3 years ago by FarrellZhang
- Choosing sampling or greedy. (1) · #240 opened about 3 years ago by Tristo
- Data Collator class to use for BLOOM · #238 opened about 3 years ago by monta
- Update README.md (1) · #235 opened about 3 years ago by Petiinoue
- Fine-tuning BLOOM for Summarization with Trainer API (👍 2) · #234 opened about 3 years ago by monta
- Here's all my bloom prompts for free (👍 2, 1) · #222 opened about 3 years ago by NigelTheMaker
- Why does the ROOTS Corpus not include German language? (2) · #221 opened about 3 years ago by akratz
- Is Bloom still being improved and updated or was it just released and abandoned? (2) · #219 opened about 3 years ago by Alysialr
- What are the best prompt engineering for bloom? (1) · #218 opened about 3 years ago by Imran1
- The bloom7b model does not support contrastive search nor do_sample with peft and just repeats the output (5) · #217 opened about 3 years ago by Imran1
- Which is the maximum Rate Limit for free HF accounts? (2) · #213 opened about 3 years ago by otrujillo
- Batch Processing / Parallelism (1) · #203 opened about 3 years ago by ymoslem
- Is the 14 programming language dataset uploaded on Hugging Face? Any other option to download the data (1) · #201 opened about 3 years ago by MukeshSharma
- How to use BLOOM for error detection in a sentence? · #198 opened about 3 years ago by AliAsif
- Hardware Requirements for Fine Tuning. · #197 opened about 3 years ago by Haziqsayyed
- Infrastructure required to fine tune bloom 176B model on downstream tasks (2) · #196 opened over 3 years ago by rajabmondal7
- Minimum requirements for running inference on 176B model (7) · #195 opened over 3 years ago by gsmoon97
- Bloom Setup (👍 1, 2) · #194 opened over 3 years ago by Mrdrifter
- Fine tuning? (❤️ 3) · #192 opened over 3 years ago by imwide
- Training or Fine-tuning the Bloom AI Model on my own Dataset (❤️ 1, 2) · #187 opened over 3 years ago by NicolasExo
- Fine tuning for text generation (1) · #185 opened over 3 years ago by nonamenora
- Training dataset shuffling/mixing · #180 opened over 3 years ago by wish
- Bloom Inference API has been reporting as overloaded all day (1/29/23) (1) · #179 opened over 3 years ago by bicx
- Ideas to improve Fine Tuned BLOOM 560 for dialogue using LIGHT dataset (1) · #176 opened over 3 years ago by andrewnoel
- How to use BLOOM for text summarization? (5) · #172 opened over 3 years ago by ankit5678
- BloomTokenizerFast does not exist (8) · #170 opened over 3 years ago by hiddenchamp
- Something between BLOOM-176B and BLOOM-7B1? (1) · #169 opened over 3 years ago by gameveloster