GreatCaptainNemo nielsr (HF Staff) committed
Commit e6398f4 · verified · 1 Parent(s): c7adc64

Add `library_name` and `pipeline_tag` to model card (#7)


- Add `library_name` and `pipeline_tag` to model card (b729aeaf0aefae2803eeae8b0844e90f08f16825)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +7 -2
README.md CHANGED
@@ -1,6 +1,9 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 ---
+
 # ProLLaMA: A Protein Large Language Model for Multi-Task Protein Language Processing
 
 [Paper on arxiv](https://arxiv.org/abs/2402.16445) for more information
@@ -106,7 +109,8 @@ if __name__ == '__main__':
 s = generation_output[0]
 output = tokenizer.decode(s,skip_special_tokens=True)
 print("Output:",output)
-print("\n")
+print("
+")
 else:
 outputs=[]
 with open(args.input_file, 'r') as f:
@@ -126,7 +130,8 @@ if __name__ == '__main__':
 output = tokenizer.decode(s,skip_special_tokens=True)
 outputs.append(output)
 with open(args.output_file,'w') as f:
-f.write("\n".join(outputs))
+f.write("
+".join(outputs))
 print("All the outputs have been saved in",args.output_file)
 ```
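
With `library_name: transformers` and `pipeline_tag: text-generation` in the front matter, the Hub lists the model under text-generation and assumes it loads through the transformers library. Below is a minimal sketch of what that metadata advertises; the repo id `GreatCaptainNemo/ProLLaMA` and the prompt string are assumptions for illustration only and are not taken from this diff.

```python
# Sketch only: the repo id and prompt below are illustrative assumptions,
# not part of this commit.
from transformers import pipeline

# pipeline_tag: text-generation means the stock text-generation pipeline applies
generator = pipeline("text-generation", model="GreatCaptainNemo/ProLLaMA")

prompt = "[Generate by superfamily] Superfamily=<Ferredoxin-like>"  # placeholder prompt
result = generator(prompt, max_new_tokens=64, do_sample=True)

# the pipeline returns a list of dicts containing the generated continuation
print(result[0]["generated_text"])
```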