---
title: Model Tools
emoji: 📚
colorFrom: pink
colorTo: yellow
sdk: static
pinned: false
---

# Model Tools by Naphula

Tools to enhance LLM quantizations and merging

# [graph_v18.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/graph_v18.py)

- Merge models in minutes instead of hours on low VRAM. For a 3060/3060 Ti user, this script enables merges that would otherwise OOM (70B models, or large 7B merges with `--cuda`). [More details here](https://huggingface.co/spaces/Naphula/model_tools/blob/main/mergekit_low-VRAM-graph_patch.md)
- Update: v18 is much faster than v4 and replaces the trial-and-error loop with an adaptive, math-based calculator (using GrimJim's measure.py logic)

# [metadata_audit.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/metadata_audit.py)

- Checks multiple models within subdirectories for vocab or RoPE mismatches (useful for large merges). Calibrated for Mistral Nemo 12B by default.

# [fp32_to_fp16.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/fp32_to_fp16.py)

- Converts FP32 safetensors to FP16 (a minimal conversion sketch appears at the end of this index)

# [textonly_ripper_v2.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/textonly_ripper_v2.py)

- Converts a sharded, multimodal (text and vision) model into a text-only version. Readme at [textonly_ripper.md](https://huggingface.co/spaces/Naphula/model_tools/blob/main/textonly_ripper.md)

# [vocab_resizer.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/vocab_resizer.py)

- Converts models with larger vocab_sizes to a standard size (default 131072, Mistral 24B) for use with mergekit. Note that `tokenizer.model` must be manually copied into the `/fixed/` folder.

# [lm_head_remover.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/lm_head_remover.py)

- Loads a "fat" 18.9 GB model (default Gemma 9B), forces it to tie the weights (deduplicating the `lm_head`), and re-saves it. This drops the file size to ~17.2 GB and makes the model compatible with the others in a merge.

# [model_index_json_generator.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/model_index_json_generator.py)

- Generates a missing `model.safetensors.index.json` file. Useful for cases where safetensors may have been sharded at the wrong size.

# [folder_content_combiner_anyfiles.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/folder_content_combiner_anyfiles.py)

- Combines all files in the script's current directory into a single output file, sorted alphabetically.

# [GGUF Repo Suite](https://huggingface.co/spaces/Naphula/gguf-repo-suite)

- Create and quantize Hugging Face models

# [Failed Experiment gguf_to_safetensors_v2.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/gguf_to_safetensors_v2.py)

- Unsuccessful attempt by Gemini to patch the gguf_to_safetensors script. Missing JSON files are hard to reconstruct. Also see [safetensors_meta_ripper_v1.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/safetensors_meta_ripper_v1.py) and [tokenizer_ripper_v1.py](https://huggingface.co/spaces/Naphula/model_tools/blob/main/tokenizer_ripper_v1.py)

# [Markdown Viewer](https://huggingface.co/spaces/Naphula/Portable_Offline_Markdown_Viewer)

- Portable Offline Markdown Viewer

# [Markdown to SMF](https://huggingface.co/spaces/Naphula/model_tools/blob/main/md_to_smf.py)

- Converts a Markdown string to an SMF-compatible BBCode string. Not perfect; it sometimes misses double bold tags. A sketch of the regex approach follows below.
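For illustration, here is a minimal sketch of the regex-substitution approach such a converter can take. This is a hypothetical example, not the actual md_to_smf.py code; the `md_to_bbcode` name is made up, and only a few common constructs are covered.

```python
# Hypothetical sketch of Markdown -> SMF BBCode conversion via regex
# substitution; not the actual md_to_smf.py implementation.
import re

def md_to_bbcode(md: str) -> str:
    out = md
    # Bold: **text** -> [b]text[/b]; non-greedy matching is one reason
    # adjacent or doubled bold spans are easy to miss
    out = re.sub(r"\*\*(.+?)\*\*", r"[b]\1[/b]", out)
    # Italic: *text* -> [i]text[/i] (runs after bold, so ** is already gone)
    out = re.sub(r"\*(.+?)\*", r"[i]\1[/i]", out)
    # Inline code: `text` -> [tt]text[/tt]
    out = re.sub(r"`(.+?)`", r"[tt]\1[/tt]", out)
    # Links: [label](url) -> [url=url]label[/url]
    out = re.sub(r"\[(.+?)\]\((.+?)\)", r"[url=\2]\1[/url]", out)
    # Headings: # Title -> bold line (SMF BBCode has no heading tag)
    out = re.sub(r"^#+\s*(.+)$", r"[b]\1[/b]", out, flags=re.MULTILINE)
    return out

if __name__ == "__main__":
    print(md_to_bbcode("# Title\nSome **bold**, *italic*, and `code` text."))
```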
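The core of an FP32 to FP16 conversion like fp32_to_fp16.py's can be sketched with the safetensors API. This is a minimal sketch assuming a single, unsharded file; the real script may also handle shards, the index file, and metadata. The `convert` function name and filenames are illustrative.

```python
# Minimal sketch of FP32 -> FP16 safetensors conversion (single-file model).
import torch
from safetensors.torch import load_file, save_file

def convert(src: str, dst: str) -> None:
    tensors = load_file(src)  # dict of tensor name -> torch.Tensor
    halved = {
        # Downcast only FP32 tensors; leave other dtypes untouched
        name: t.half() if t.dtype == torch.float32 else t
        for name, t in tensors.items()
    }
    save_file(halved, dst)

if __name__ == "__main__":
    convert("model.safetensors", "model-fp16.safetensors")
```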
# [Quant Clone](https://github.com/electroglyph/quant_clone)

- A tool that recreates UD quants such as Q8_K_XL. Examples: [Mistral 24B](https://huggingface.co/spaces/Naphula/model_tools/raw/main/Mistral-Small-3.2-24B-Instruct-2506-UD-Q8_K_XL_UD.txt), [Mistral 7B](https://huggingface.co/spaces/Naphula/model_tools/raw/main/Warlock-7B-v2-Q8_K_XL.txt)

# [Text Analysis Suite v1.5](https://huggingface.co/spaces/Naphula/TAS_1.5)

- Analyze text files with advanced metrics
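To give a flavor of the kind of metrics such a suite computes, here is a minimal, hypothetical sketch of basic text statistics (word count, type-token ratio, sentence length); it is not the Suite's actual code.

```python
# Hypothetical sketch of basic text metrics; not the actual TAS 1.5 code.
import re
from collections import Counter

def analyze(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    return {
        "word_count": len(words),
        "unique_words": len(counts),
        # Type-token ratio: lexical diversity (unique words / total words)
        "type_token_ratio": len(counts) / len(words) if words else 0.0,
        "avg_sentence_length": len(words) / len(sentences) if sentences else 0.0,
        "top_words": counts.most_common(5),
    }

if __name__ == "__main__":
    print(analyze("The quick brown fox jumps over the lazy dog. The dog sleeps."))
```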