# Model Tools by Naphula

Tools to enhance LLM quantization and merging

# [graph_v18.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/graph_v18.py)
- Merge models in minutes instead of hours on low VRAM. For a 3060/3060 Ti user, this script enables merges that are otherwise impossible without OOM (merging 70B models, or large 7B merges with `--cuda`). [More details here](https://huggingface.co/spaces/Naphula/model_tools/blob/main/mergekit_low-VRAM-graph_patch.md)
- Update: v18 is much faster than v4 and replaces the trial-and-error loop with an adaptive, math-based calculator (using GrimJim's measure.py logic); the sketch below illustrates the idea
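
As a rough illustration of what a math-based calculator buys over an OOM retry loop, here is a minimal sketch: estimate a tensor's memory cost from its shape and dtype, and only place work on the GPU when the estimate fits in free VRAM. All names here are hypothetical; graph_v18.py's actual logic (adapted from GrimJim's measure.py) is more involved.

```python
# Hypothetical sketch, not graph_v18.py itself: predict memory cost
# up front instead of probing for out-of-memory errors.
import torch

DTYPE_BYTES = {torch.float32: 4, torch.float16: 2, torch.bfloat16: 2}

def tensor_bytes(shape, dtype=torch.float16) -> int:
    """Exact size in bytes of a dense tensor with this shape and dtype."""
    n = 1
    for dim in shape:
        n *= dim
    return n * DTYPE_BYTES[dtype]

def pick_device(shape, dtype=torch.float16, safety=0.9) -> str:
    """Use the GPU only when the estimated cost fits in free VRAM."""
    if torch.cuda.is_available():
        free, _total = torch.cuda.mem_get_info()
        if tensor_bytes(shape, dtype) < free * safety:
            return "cuda"
    return "cpu"

print(pick_device((8192, 8192)))  # ~128 MiB in fp16: "cuda" if it fits
```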
# [fp32_to_fp16.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/fp32_to_fp16.py)
- Converts FP32 safetensors to FP16; a minimal sketch of the conversion follows
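
A minimal sketch of the conversion, assuming a single unsharded .safetensors file; the actual fp32_to_fp16.py may also handle sharded checkpoints and index files:

```python
# Hypothetical sketch: cast every FP32 tensor in a safetensors file to FP16.
import torch
from safetensors.torch import load_file, save_file

def fp32_to_fp16(src: str, dst: str) -> None:
    tensors = load_file(src)  # name -> torch.Tensor, loaded on CPU
    halved = {
        name: t.to(torch.float16) if t.dtype == torch.float32 else t
        for name, t in tensors.items()
    }
    save_file(halved, dst)

fp32_to_fp16("model.safetensors", "model-fp16.safetensors")
```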