Naphula committed
Commit 458fd98 · verified · 1 Parent(s): 1c269be

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -10,8 +10,9 @@ pinned: false
 # Model Tools by Naphula
 Tools to enhance LLM quantizations and merging
 
-# [graph_v4.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/graph_v4.py)
+# [graph_v18.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/graph_v18.py)
 - Merge models in minutes instead of hours on low VRAM. For a 3060/3060 Ti user, this script enables merges that are otherwise impossible without OOM (70B models, or large 7B merges with `--cuda`). [More details here](https://huggingface.co/spaces/Naphula/model_tools/blob/main/mergekit_low-VRAM-graph_patch.md)
+- Update: v18 is much faster than v4 and replaces the trial-and-error loop with an adaptive, math-based calculator (using GrimJim's measure.py logic)
 
 # [fp32_to_fp16.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/fp32_to_fp16.py)
 - Converts FP32 to FP16 safetensors
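
The fp32_to_fp16.py script itself is not shown in this diff; as a rough illustration of the conversion the last bullet describes, a minimal sketch using safetensors and torch might look like the following (file names and the function name are placeholders, not the repository's actual code):

```python
# Minimal sketch of an FP32 -> FP16 safetensors conversion.
# Illustration only, not the repository's fp32_to_fp16.py; paths are placeholders.
import torch
from safetensors.torch import load_file, save_file


def convert_fp32_to_fp16(src_path: str, dst_path: str) -> None:
    tensors = load_file(src_path)  # load all tensors from the FP32 checkpoint
    halved = {
        # cast FP32 tensors to FP16, leave other dtypes untouched
        name: t.to(torch.float16) if t.dtype == torch.float32 else t
        for name, t in tensors.items()
    }
    save_file(halved, dst_path)  # write the FP16 checkpoint


if __name__ == "__main__":
    convert_fp32_to_fp16("model-fp32.safetensors", "model-fp16.safetensors")
```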