vijayarulmuthu committed
Commit c1e6a88 · verified · 1 Parent(s): c5c35b6

vijayarulmuthu/llama381binstruct_summarize_short

README.md CHANGED
@@ -20,14 +20,14 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 from transformers import pipeline
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="VijayAgnel/llama381binstruct_summarize_short", device="cuda")
+generator = pipeline("text-generation", model="vijayarulmuthu/llama381binstruct_summarize_short", device="cuda")
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vijayarulmuthu-skyhigh-security/huggingface/runs/7dmmdy7h)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vijayarulmuthu-skyhigh-security/huggingface/runs/fm8xf6wl)
 
 
 This model was trained with SFT.
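For reference, the corrected README usage as a self-contained script. This is a minimal sketch, assuming `transformers`, `peft`, and `torch` are installed and that the pipeline resolves this adapter repo against the base model recorded in `adapter_config.json`; the CPU fallback is an addition not present in the README, which hard-codes `device="cuda"`.

```python
from transformers import pipeline
import torch

# Fall back to CPU if no GPU is available (assumption; the README uses "cuda").
device = "cuda" if torch.cuda.is_available() else "cpu"

question = (
    "If you had a time machine, but could only go to the past or the future "
    "once and never return, which would you choose and why?"
)

# Model id as introduced in this commit.
generator = pipeline(
    "text-generation",
    model="vijayarulmuthu/llama381binstruct_summarize_short",
    device=device,
)

# Chat-style input; return_full_text=False drops the prompt from the output.
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```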
adapter_config.json CHANGED
@@ -24,13 +24,13 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "q_proj",
-    "v_proj",
     "k_proj",
-    "down_proj",
     "up_proj",
+    "q_proj",
+    "v_proj",
+    "o_proj",
     "gate_proj",
-    "o_proj"
+    "down_proj"
   ],
   "task_type": "CAUSAL_LM",
   "trainable_token_indices": null,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b878a2c5a6c1256f899cc9958cb3fb09416128ea9bbc64c097db62d66cf3c530
+oid sha256:f6dcd9ef2e71b8f6455eaa086d58d13fb4d66e3e9d40b2a199cca6708fc69a93
 size 167832240
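The safetensors change is only a Git LFS pointer update (same size, new SHA-256 of the adapter weights). A small sketch for checking that a locally downloaded copy matches the pointer in this commit; the local file path is an assumption.

```python
import hashlib

# oid from the LFS pointer introduced in this commit.
expected = "f6dcd9ef2e71b8f6455eaa086d58d13fb4d66e3e9d40b2a199cca6708fc69a93"

# Hash the downloaded adapter weights in 1 MiB chunks (path is a placeholder).
sha = hashlib.sha256()
with open("adapter_model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

print("match" if sha.hexdigest() == expected else "mismatch")
```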
runs/Mar25_01-29-17_c28298cabeeb/events.out.tfevents.1742866162.c28298cabeeb.445.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:deb379360d24b179f5e2d520da26f156867bf59336732d3b0a902b942535ca35
+size 29690
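The added tfevents file is a TensorBoard log written during this run. A sketch for listing its scalar tags with TensorBoard's `EventAccumulator`; it assumes the repo has been cloned locally with LFS files pulled, and the tag names depend on what the trainer logged.

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point the accumulator at the run directory added in this commit.
acc = EventAccumulator("runs/Mar25_01-29-17_c28298cabeeb")
acc.Reload()

# Print whichever scalar tags the trainer logged (e.g. train/loss).
print(acc.Tags()["scalars"])
```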
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e203cb55a189a63d5c55efc8245e20fa43c945a64112ba95b1b18fbf3f520916
+oid sha256:90d2b16486e2b0c30480a9e23710313fba76c70913dce977a7fb5d3010aa8545
 size 5688
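`training_args.bin` is the pickled `TrainingArguments` object that the Trainer saves alongside checkpoints, so this change just reflects the new run's settings. A sketch for inspecting it locally; `weights_only=False` is needed on recent PyTorch because the file is a full pickled object, the local path is an assumption, and the printed attributes are standard `TrainingArguments` fields rather than values confirmed by this commit.

```python
import torch

# Load the pickled TrainingArguments saved by the Trainer. Unpickling
# reconstructs that class, so transformers must be installed.
args = torch.load("training_args.bin", weights_only=False)

# Inspect a few commonly checked hyperparameters.
print(args.learning_rate, args.num_train_epochs, args.per_device_train_batch_size)
```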