sinamsv0 committed
Commit ae260b6 · verified · 1 Parent(s): 51f7663

Update README.md

Files changed (1)
  1. README.md +16 -17
README.md CHANGED
@@ -1,23 +1,23 @@
  ---
  base_model: openai/gpt-oss-20b
  tags:
- - text-generation-inference
- - transformers
- - unsloth
- - gpt_oss
- - WALL-E
+ - text-generation-inference
+ - transformers
+ - unsloth
+ - gpt_oss
+ - NovaAI
  license: apache-2.0
  language:
- - en
- - de
- - fa
+ - en
+ - de
+ - fa
  ---

- # WALL•E¹Pro
+ # Nova•E¹

- **WALL•E¹Pro** is a 22B parameter Large Language Model (LLM) specialized in **coding, logical reasoning (thinking), and mathematics**. It is a highly capable model designed for local execution, with a focus on English, Persian, and German.
+ **Nova•E¹** is a 22B parameter Large Language Model (LLM) specialized in **coding, logical reasoning (thinking), and mathematics**. It is a highly capable model designed for local execution, with a focus on English, Persian, and German.

- **Model Card:** [DreamhubAI/WALL-E-1Pro](https://huggingface.co/DreamhubAI/WALL-E-1Pro)
+ **Model Card:** [DreamhubAI/Nova-E-1](https://huggingface.co/DreamhubAI/Nova-E-1)

  ---

@@ -37,11 +37,11 @@ language:
  The primary model and its smaller variant are hosted on Hugging Face.

  **Main Model (22B)**
- * **Repository (Full/8-bit)**: [`DreamhubAI/WALL-E-1Pro`](https://huggingface.co/DreamhubAI/WALL-E-1Pro)
+ * **Repository (Full/8-bit)**: [`DreamhubAI/Nova-E-1`](https://huggingface.co/DreamhubAI/Nova-E-1)
  * **Hugging Face Transformers** is the main framework for loading and using these models.

  **Smaller Variant**
- * **Lightweight Model**: [`DreamhubAI/wall-e-mini`](https://huggingface.co/DreamhubAI/wall-e-mini) - A smaller version for quick testing and less demanding tasks.
+ * **Lightweight Model**: [`DreamhubAI/nova-e-mini`](https://huggingface.co/DreamhubAI/nova-e-mini) - A smaller version for quick testing and less demanding tasks.

  ### System Requirements
  **For GPU Execution**
@@ -58,7 +58,7 @@ You can load and use the model with the `transformers` library. The following is
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

- model_name = "DreamhubAI/WALL-E-1Pro" # or "DreamhubAI/WALL-E-1Pro-8bit"
+ model_name = "DreamhubAI/Nova-E-1" # or "DreamhubAI/Nova-E-1-8bit"
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)

@@ -70,7 +70,7 @@ print(tokenizer.decode(outputs[0]))

  ## 🛠️ Projects and Ecosystem

- WALL•E User Interface (Web UI)
+ Nova•E User Interface (Web UI)

  · A dedicated interface for interacting with the model is available on GitHub.
  · Repository: DreamhubAI/WALL-E
@@ -89,5 +89,4 @@ This model is openly licensed under the Apache License 2.0. This allows for broa

  ---

- **Thank you for your interest in WALL•E¹Pro! We hope this model serves as a powerful tool for your development and research projects in coding, reasoning, and mathematics.**
-
+ **Thank you for your interest in Nova•E¹! We hope this model serves as a powerful tool for your development and research projects in coding, reasoning, and mathematics.**
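
Below is a minimal, self-contained sketch of the loading pattern that the changed hunk (README lines 58–64) shows, updated to the repository names introduced in this commit. Only part of the README's example is visible in the diff, so the prompt, `generate()` call, and decoding step here are illustrative assumptions rather than the repository's exact code; the `device_map` and `torch_dtype` settings are likewise optional additions, not something the README prescribes.

```python
# Sketch based on the README hunk above; assumes the repositories named in this
# commit ("DreamhubAI/Nova-E-1" and the "DreamhubAI/Nova-E-1-8bit" variant) are
# published as described.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "DreamhubAI/Nova-E-1"  # or "DreamhubAI/Nova-E-1-8bit"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",   # spread the large checkpoint across available devices (requires accelerate)
    torch_dtype="auto",  # load in the checkpoint's native precision
)

# Hypothetical prompt; the README's own prompt falls outside the shown hunk.
prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```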