Question Answering

Tags: Transformers · PyTorch · JAX · Safetensors · English · bert · bert-base · Eval Results (legacy)
Instructions for using csarron/bert-base-uncased-squad-v1 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use csarron/bert-base-uncased-squad-v1 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="csarron/bert-base-uncased-squad-v1")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("csarron/bert-base-uncased-squad-v1")
model = AutoModelForQuestionAnswering.from_pretrained("csarron/bert-base-uncased-squad-v1")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
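When loading the model directly (rather than through the pipeline), the model returns start and end logits over the context tokens, and the answer is the highest-scoring valid span. A minimal sketch of that decoding step, using made-up illustrative logits rather than real model output (the `best_span` helper and `max_len` limit are assumptions for illustration, not part of the Transformers API):

```python
# Hedged sketch: decoding a QA answer span from start/end logits.
# The token list and logit values below are invented for illustration.

def best_span(start_logits, end_logits, max_len=15):
    """Pick the (start, end) pair maximizing start_logits[s] + end_logits[e],
    subject to s <= e < s + max_len (answer must not run backwards or be too long)."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_score in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_score + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best, best_score

tokens = ["the", "sky", "is", "blue", "today"]
start_logits = [0.1, 0.2, 0.1, 2.5, 0.3]
end_logits = [0.0, 0.1, 0.2, 2.8, 0.4]

(s, e), score = best_span(start_logits, end_logits)
print(" ".join(tokens[s:e + 1]))  # prints "blue"
```

The pipeline performs this decoding (plus tokenization and offset mapping back to character positions) internally; this sketch only shows the span-selection idea.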
Add evaluation results on the plain_text config and validation split of squad
#1
Opened by autoevaluator (HF Staff)
Beep boop, I am a bot from Hugging Face's automatic model evaluator 👋!
Your model has been evaluated on the plain_text config and validation split of the squad dataset by @nbroad, using the predictions stored here.
Accept this pull request to see the results displayed on the Hub leaderboard.
Evaluate your model on more datasets here.
csarron changed pull request status to merged