---
base_model:
- inflatebot/MN-12B-Mag-Mell-R1
- crestf411/MN-Slush
- Retreatcost/Ollpheist-12B
- Vortex5/Poetic-Nexus-12B
library_name: transformers
tags:
- mergekit
- merge
- roleplay
---
![ComfyUI_00118_](https://cdn-uploads.huggingface.co/production/uploads/6669a3a617b838fda45637b8/74_fuBnez8kSN-RQARiJU.png)
# Vermilion-Sage-12B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Multi-SLERP](https://goddard.blog/posts/multislerp-wow-what-a-cool-idea) merge method, with [Vortex5/Poetic-Nexus-12B](https://huggingface.co/Vortex5/Poetic-Nexus-12B) as the base model.
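Roughly, multi-SLERP generalizes spherical interpolation (SLERP) from two models to several: each donor model is treated as a point on a hypersphere around the base, and the merge interpolates along the sphere rather than in flat weight space. The toy NumPy sketch below illustrates that idea only; it is not mergekit's actual implementation, and `multislerp_sketch` is a hypothetical helper written for this card.

```python
import numpy as np

def multislerp_sketch(base, models, weights):
    """Toy illustration of a spherical multi-model merge.

    Forms task vectors (model - base), averages their directions with
    the given weights, and rescales to the weighted-average magnitude so
    the result stays on a sphere of comparable radius around the base.
    Illustrative only -- not mergekit's multislerp implementation.
    """
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()              # analogue of normalize: true
    deltas = [m - base for m in models]            # task vectors
    norms = [np.linalg.norm(d) for d in deltas]    # assumes nonzero deltas
    avg_dir = sum(w * d / n for w, d, n in zip(weights, deltas, norms))
    avg_dir /= np.linalg.norm(avg_dir)
    return base + avg_dir * np.dot(weights, norms)

# Tiny demo on random vectors standing in for flattened layer weights.
rng = np.random.default_rng(0)
base = rng.normal(size=8)
donors = [base + rng.normal(size=8) for _ in range(3)]
merged = multislerp_sketch(base, donors, [0.5, 0.3, 0.2])
```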
### Models Merged
The following models were included in the merge:
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [crestf411/MN-Slush](https://huggingface.co/crestf411/MN-Slush)
* [Retreatcost/Ollpheist-12B](https://huggingface.co/Retreatcost/Ollpheist-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: crestf411/MN-Slush # Coherence
    parameters:
      weight: [1.0, 0.8, 0.6, 0.5, 0.3, 0.1, 0.0, 0.0]
  - model: Retreatcost/Ollpheist-12B # Creative
    parameters:
      weight: [0.0, 0.1, 0.3, 0.6, 0.7, 0.5, 0.5, 0.4]
  - model: inflatebot/MN-12B-Mag-Mell-R1 # Flowery prose
    parameters:
      weight: [0.2, 0.3, 0.4, 0.5, 0.8, 0.9, 0.9, 0.8]
base_model: Vortex5/Poetic-Nexus-12B
merge_method: multislerp
dtype: bfloat16
parameters:
  normalize: true
tokenizer:
  source: union
```
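Each bracketed `weight` list is a per-layer gradient: mergekit interpolates the listed values across the model's layer blocks, so MN-Slush dominates the early layers, Ollpheist the middle, and Mag-Mell-R1 the later ones. `normalize: true` rescales the weights at each layer to sum to 1, and `tokenizer: source: union` builds the merged tokenizer from the union of the input vocabularies. To reproduce the merge, save the configuration as `config.yaml` and run `mergekit-yaml config.yaml ./Vermilion-Sage-12B`.

To try the merged model, here is a minimal generation sketch with transformers; the repo id and sampling settings are illustrative assumptions, not tuned recommendations.

```python
# Minimal usage sketch; repo id and sampling settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Vortex5/Vermilion-Sage-12B"  # assumed HF repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write a short scene set in a lantern-lit harbor at dusk."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=200, do_sample=True, temperature=0.8
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```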