Commit c5d4b32 (verified) · mazesmazes · Parent: 3586b8d

Training in progress, step 500
README.md CHANGED
@@ -1,73 +1,207 @@
  ---
- license: mit
- language:
- - en
- datasets:
- - speechbrain/LoquaciousSet
- base_model:
- - openai/whisper-large-v3-turbo
- - HuggingFaceTB/SmolLM3-3B
- pipeline_tag: automatic-speech-recognition
  tags:
- - asr
- - speech-recognition
- - audio
- - smollm
- - whisper
- - mlp
  ---

- # Tiny Audio

- A speech recognition model trained in 24 hours on a single GPU for ~$12. Built with the [Tiny Audio](https://github.com/alexkroman/tiny-audio) codebase—a minimal, hackable framework for training ASR models.

- ## Architecture

- ```
- Audio (16kHz) → Whisper Encoder (frozen) → MLP Projector (trained) → SmolLM3-3B (frozen) → Text
- ```

- **MLP Projector:**
- - Convolutional downsampling: 4x sequence compression via two stride-2 conv layers
- - Linear (1280 → 2048) → GELU → Linear (2048 → 2048)
- - Output normalization: RMSNorm

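A rough PyTorch sketch of the projector described in the bullets above. This is an illustration only: class and argument names are invented, and the conv kernel sizes and the activation between the two stride-2 convolutions are assumptions; the layer types and dimensions follow the README.

```python
import torch
import torch.nn as nn


class RMSNorm(nn.Module):
    """Minimal RMSNorm, written out so the sketch has no version requirements."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps) * self.weight


class AudioProjector(nn.Module):
    """Maps frozen Whisper encoder features (dim 1280) to the LLM embedding size (2048)."""

    def __init__(self, enc_dim: int = 1280, llm_dim: int = 2048):
        super().__init__()
        # Two stride-2 convolutions -> 4x downsampling along the time axis.
        # Kernel size 3 and the GELU between them are assumptions.
        self.downsample = nn.Sequential(
            nn.Conv1d(enc_dim, enc_dim, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv1d(enc_dim, enc_dim, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
        )
        # Linear (1280 -> 2048) -> GELU -> Linear (2048 -> 2048), per the README.
        self.mlp = nn.Sequential(
            nn.Linear(enc_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )
        self.norm = RMSNorm(llm_dim)  # output normalization

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, enc_dim) from the Whisper encoder
        x = self.downsample(feats.transpose(1, 2)).transpose(1, 2)
        return self.norm(self.mlp(x))
```
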
  ## Training Details

- | | |
- |---|---|
- | **Dataset** | LoquaciousSet (25,000 hours) |
- | **Hardware** | Single NVIDIA A40 40GB |
- | **Training Time** | ~24 hours |
- | **Cost** | ~$12 |
- | **Trainable Parameters** | ~12M (projector only) |

- ## Performance

- **Word Error Rate (WER): 12.14%** on LoquaciousSet test set.

- See the [community leaderboard](https://github.com/alexkroman/tiny-audio#leaderboard) for comparisons.

- ## Usage

- ```python
- from transformers import pipeline

- pipe = pipeline("automatic-speech-recognition", model="mazesmazes/tiny-audio", trust_remote_code=True)

- result = pipe("path/to/audio.wav")
- print(result["text"])
- ```

- ## Limitations

- - English only
- - Optimized for 16kHz audio; other sample rates are resampled automatically
- - Performance may degrade on heavily accented speech, noisy environments, or domain-specific jargon
- - Maximum audio length limited by context window

- ## Learn More

- - **[Train your own model](https://github.com/alexkroman/tiny-audio)** — The full codebase with training scripts
- - **[Free 3-hour course](https://github.com/alexkroman/tiny-audio/blob/main/docs/course/0-course-overview.md)** — Build your own ASR system from scratch
- - **[Submit to leaderboard](https://github.com/alexkroman/tiny-audio#leaderboard)** — Share your trained model
 
  ---
+ base_model: Qwen/Qwen3-1.7B
+ library_name: peft
+ pipeline_tag: text-generation
  tags:
+ - base_model:adapter:Qwen/Qwen3-1.7B
+ - lora
+ - transformers
  ---

+ # Model Card for Model ID

+ <!-- Provide a quick summary of what the model is/does. -->

+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]

  ## Training Details

+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**

+ [More Information Needed]

+ ## Glossary [optional]

+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

+ [More Information Needed]

+ ## More Information [optional]

+ [More Information Needed]

+ ## Model Card Authors [optional]

+ [More Information Needed]

+ ## Model Card Contact

+ [More Information Needed]
+ ### Framework versions

+ - PEFT 0.18.0
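
The replacement card is the auto-generated PEFT template and does not yet include a usage snippet. Purely as a generic sketch of how a LoRA adapter on the declared base model (Qwen/Qwen3-1.7B) is normally loaded with the peft library: the adapter repo id below is assumed from this repository's name, and it is not confirmed that this adapter, which belongs to a custom asr_model, loads standalone this way; the pipeline call in the removed README is more likely the intended entry point.

```python
# Generic LoRA-adapter loading pattern with peft; the adapter repo id is an assumption,
# and this repo's custom asr_model may not support standalone loading like this.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
model = PeftModel.from_pretrained(base, "mazesmazes/tiny-audio")
model.eval()
```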
 
 
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5550ad48c356b76f9eece47dbaaf86d45a2ca589ed276825c6a7c310cbc58fca
  size 51395296

  version https://git-lfs.github.com/spec/v1
+ oid sha256:ee4ffed957351d0628c89aa69b443fc4ef0650f75af66d1ed30e7205f2e3f940
  size 51395296
config.json CHANGED
@@ -156,7 +156,6 @@
  ]
  ],
  "encoder_dim": 1280,
- "encoder_stride": 2,
  "inference_warmup_tokens": 10,
  "label_smoothing": 0.0,
  "length_penalty": 1.0,
@@ -168,7 +167,7 @@
  "v_proj",
  "q_proj"
  ],
- "max_new_tokens": 96,
  "model_dtype": "bfloat16",
  "model_type": "asr_model",
  "no_repeat_ngram_size": 0,
@@ -176,6 +175,7 @@
  "num_experts": 4,
  "num_experts_per_tok": 2,
  "pipeline_tag": "automatic-speech-recognition",
  "projector_dropout": 0.0,
  "projector_hidden_dim": null,
  "projector_init_std": 0.02,

  ]
  ],
  "encoder_dim": 1280,
  "inference_warmup_tokens": 10,
  "label_smoothing": 0.0,
  "length_penalty": 1.0,

  "v_proj",
  "q_proj"
  ],
+ "max_new_tokens": 256,
  "model_dtype": "bfloat16",
  "model_type": "asr_model",
  "no_repeat_ngram_size": 0,

  "num_experts": 4,
  "num_experts_per_tok": 2,
  "pipeline_tag": "automatic-speech-recognition",
+ "pretrained_model_path": "mazesmazes/tiny-audio-glm",
  "projector_dropout": 0.0,
  "projector_hidden_dim": null,
  "projector_init_std": 0.02,
generation_config.json CHANGED
@@ -1,8 +1,11 @@
  {
  "bos_token_id": 151643,
- "eos_token_id": 151645,
  "length_penalty": 1.0,
- "max_new_tokens": 96,
  "no_repeat_ngram_size": 0,
  "num_beams": 1,
  "pad_token_id": 151643,

  {
  "bos_token_id": 151643,
+ "eos_token_id": [
+ 151645,
+ 151643
+ ],
  "length_penalty": 1.0,
+ "max_new_tokens": 256,
  "no_repeat_ngram_size": 0,
  "num_beams": 1,
  "pad_token_id": 151643,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cf57c633a2a96c5d6ea7fdaf2bc24a8732cf974f6604c6abfb403ae26b21ed63
  size 58732960

  version https://git-lfs.github.com/spec/v1
+ oid sha256:b8eefb8b3c929d87a0d41b09945960274bd8951711ad3fadecb3d6e47285943c
  size 58732960
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ea3da59bc817432d83d2d2bdafd7f484c02b1fa6db73ed2c7e254e07cc312657
  size 5265

  version https://git-lfs.github.com/spec/v1
+ oid sha256:7c93f8fb35c19525a996efb0f6edf879be72c92154ebd755aa65986d270a2e30
  size 5265