Question Answering
Tags: Transformers, Safetensors, Russian, T5, text2text-generation, text-generation, text-generation-inference
How to use r1char9/ruT5_q_a with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("question-answering", model="r1char9/ruT5_q_a")
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("r1char9/ruT5_q_a")
model = AutoModelForSeq2SeqLM.from_pretrained("r1char9/ruT5_q_a")
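The snippets above are the Hub's auto-generated ones. Because the checkpoint is a seq2seq T5 that the card's own example drives through generate, the minimal sketch below assumes the text2text-generation pipeline and a single-string prompt (context followed by the question); both choices are assumptions, not documented by the card.

# Minimal sketch, assuming text2text-generation fits this seq2seq checkpoint
from transformers import pipeline

qa_pipe = pipeline("text2text-generation", model="r1char9/ruT5_q_a")
# Prompt format (context then question in one string) is an assumption
result = qa_pipe("Нарисуй изображение Томаса Шелби. Что нужно нарисовать?")
print(result[0]["generated_text"])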
The ruT5-base model was fine-tuned for the question answering task on Russian text.
Uses
from transformers import AutoTokenizer, T5ForConditionalGeneration

qa_checkpoint = 'r1char9/ruT5_q_a'
qa_model = T5ForConditionalGeneration.from_pretrained(qa_checkpoint)
qa_tokenizer = AutoTokenizer.from_pretrained(qa_checkpoint)

prompt = 'Нарисуй изображение Томаса Шелби'  # "Draw an image of Thomas Shelby"

def question_answering(prompt):
    question = "Что нужно нарисовать?"  # "What needs to be drawn?"
    # Encode the context and the question as a sentence pair
    tokenized_sentence = qa_tokenizer(prompt, question, return_tensors='pt')
    # Generate the answer and decode it back to plain text
    res = qa_model.generate(**tokenized_sentence)
    decoded_res = qa_tokenizer.decode(res[0], skip_special_tokens=True)
    return decoded_res

prompt = question_answering(prompt)
# 'изображение Томаса Шелби'
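The question in the example above is hard-coded inside the helper. Below is a minimal sketch of the same call with the question passed as a parameter, reusing qa_model and qa_tokenizer from above; the answer name and the max_new_tokens value are illustrative assumptions, not part of the card.

def answer(context: str, question: str) -> str:
    # Illustrative sketch (not from the card): same pair encoding and generate
    # call as above, with the question supplied by the caller.
    inputs = qa_tokenizer(context, question, return_tensors='pt')
    output_ids = qa_model.generate(**inputs, max_new_tokens=32)  # assumed cap on answer length
    return qa_tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(answer('Нарисуй изображение Томаса Шелби', 'Что нужно нарисовать?'))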