Update README.md
README.md CHANGED

@@ -26,7 +26,7 @@ FANformer-1B is a 1.1-billion-parameter autoregressive language model pre-traine
 ---
 
 ### **Training Details**
-- **Hardware:**
+- **Hardware:** 80 A100 40G GPUs
 - **Training Data:** Subset of Dolma Dataset (OLMo-1B’s training corpus)
 - **Maximum Context Length:** 2,048 tokens
 
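The diff above fills in the checkpoint's training configuration, including the 2,048-token context window. As an illustration of how that limit constrains inference, here is a minimal Python sketch using Hugging Face transformers; the MODEL_ID path, the trust_remote_code flag, and the generation settings are assumptions for the sketch, not details confirmed by this README.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/FANformer-1B"  # hypothetical Hub path; substitute the real repo id
MAX_CONTEXT = 2048                  # maximum context length stated in Training Details
MAX_NEW_TOKENS = 128

# trust_remote_code is an assumption: custom architectures often ship
# their modeling code alongside the checkpoint.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

prompt = "FANformer-1B is an autoregressive language model that"
# Truncate the prompt so prompt plus generated tokens stay inside the 2,048-token window.
inputs = tokenizer(
    prompt,
    return_tensors="pt",
    truncation=True,
    max_length=MAX_CONTEXT - MAX_NEW_TOKENS,
)
outputs = model.generate(**inputs, max_new_tokens=MAX_NEW_TOKENS)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```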