Instructions for using DancingIguana/music-generation with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use DancingIguana/music-generation with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DancingIguana/music-generation")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DancingIguana/music-generation")
model = AutoModelForCausalLM.from_pretrained("DancingIguana/music-generation")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use DancingIguana/music-generation with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "DancingIguana/music-generation"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DancingIguana/music-generation",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/DancingIguana/music-generation
```
- SGLang
How to use DancingIguana/music-generation with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "DancingIguana/music-generation" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DancingIguana/music-generation",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "DancingIguana/music-generation" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "DancingIguana/music-generation",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use DancingIguana/music-generation with Docker Model Runner:
```shell
docker model run hf.co/DancingIguana/music-generation
```
music-generation
This model is a version of distilgpt2 trained from scratch on a dataset where the text represents musical notes. The dataset consists of one stream of notes per MIDI file (the stream with the most notes), with every melody transposed to either C major or A minor. The BPM of each song is ignored; the duration of each note is based on its quarter length.
Each element in the melody is represented by a series of letters and numbers with the following structure:
- For a note: ns[pitch of the note as a string]s[duration]
  - Examples: nsC4s0p25, nsF7s1p0
- For a rest: rs[duration]
  - Examples: rs0p5, rs1q6
- For a chord: cs[number of notes in chord]s[pitches of the chord separated by "s"]s[duration]
  - Examples: cs2sE7sF7s1q3, cs2sG3sGw3s0p25
Special symbols are replaced in the strings as follows:
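To make the encoding concrete, here is a minimal sketch of a token parser. It assumes, based on the examples above, that `p` encodes a decimal point (so `0p25` is 0.25 quarter lengths) and `q` a fraction bar (so `1q6` is 1/6 of a quarter length); the function names are illustrative, not part of the model or dataset tooling.

```python
from fractions import Fraction

def parse_duration(tok: str) -> Fraction:
    """Decode a duration token; assumes 'p' encodes '.' and 'q' a fraction bar."""
    if "q" in tok:
        num, den = tok.split("q")
        return Fraction(int(num), int(den))
    return Fraction(tok.replace("p", "."))

def parse_token(tok: str):
    """Split one melody token into (kind, pitches, quarter_length)."""
    parts = tok.split("s")
    if parts[0] == "n":   # note: ns<pitch>s<duration>
        return ("note", [parts[1]], parse_duration(parts[2]))
    if parts[0] == "r":   # rest: rs<duration>
        return ("rest", [], parse_duration(parts[1]))
    if parts[0] == "c":   # chord: cs<n>s<pitch>...s<duration>
        n = int(parts[1])
        return ("chord", parts[2:2 + n], parse_duration(parts[2 + n]))
    raise ValueError(f"unrecognized token: {tok}")
```

For example, `parse_token("cs2sE7sF7s1q3")` yields a two-note chord of E7 and F7 lasting 1/3 of a quarter length.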
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
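For reference, the hyperparameters above map onto a Transformers `TrainingArguments` configuration roughly as follows. This is a hedged reconstruction, not the published training script; it assumes a single-device run, so the total train batch size of 256 is the product of the per-device batch size (32) and the gradient accumulation steps (8).

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir is an arbitrary placeholder.
args = TrainingArguments(
    output_dir="music-generation",
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,  # 32 * 8 = 256 effective batch size
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=100,
    fp16=True,  # "Native AMP" mixed precision
)
```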
Training results
Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1