Tags: Text Classification, Transformers, PyTorch, multilingual, English, distilbert, sentiment-analysis, testing, unit tests
DistilBert Dummy Sentiment Model
Purpose
This is a dummy model for testing the transformers pipeline with the sentiment-analysis task. Its outputs are essentially random (e.g. {"label": "negative", "score": 0.5}) and carry no meaning.
How to use
from transformers import pipeline

classifier = pipeline("sentiment-analysis", "dhpollack/distilbert-dummy-sentiment")
results = classifier(["this is a test", "another test"])
Notes
This was created as follows:
- Create a vocab.txt file containing only the special tokens (at /tmp/vocab.txt in this example).
[UNK]
[SEP]
[PAD]
[CLS]
[MASK]
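The vocab file above can also be written from Python; a minimal sketch (the path /tmp/vocab.txt matches the example above):

```python
# Write the five special tokens to /tmp/vocab.txt, one token per line,
# exactly as listed above.
tokens = ["[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]"]
with open("/tmp/vocab.txt", "w") as f:
    f.write("\n".join(tokens) + "\n")
```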
- Open a Python shell and run:
import transformers
config = transformers.DistilBertConfig(
    vocab_size=5,
    n_layers=1,
    n_heads=1,
    dim=1,
    hidden_dim=4 * 1,
    num_labels=2,
    id2label={0: "negative", 1: "positive"},
    label2id={"negative": 0, "positive": 1},
)
model = transformers.DistilBertForSequenceClassification(config)
tokenizer = transformers.DistilBertTokenizer("/tmp/vocab.txt", model_max_length=512)
config.save_pretrained(".")
model.save_pretrained(".")
tokenizer.save_pretrained(".")
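The steps above can be sanity-checked without touching the Hub by rebuilding the same tiny model locally and passing the objects straight to the pipeline. A sketch (the vocab file is rewritten here so the snippet is self-contained):

```python
import transformers

# Recreate the five-token vocab from the steps above
with open("/tmp/vocab.txt", "w") as f:
    f.write("\n".join(["[UNK]", "[SEP]", "[PAD]", "[CLS]", "[MASK]"]) + "\n")

# Same tiny config as in the steps above
config = transformers.DistilBertConfig(
    vocab_size=5, n_layers=1, n_heads=1, dim=1, hidden_dim=4 * 1,
    num_labels=2,
    id2label={0: "negative", 1: "positive"},
    label2id={"negative": 0, "positive": 1},
)
model = transformers.DistilBertForSequenceClassification(config)
tokenizer = transformers.DistilBertTokenizer("/tmp/vocab.txt", model_max_length=512)

# The untrained model produces meaningless scores; only the output
# format ({"label": ..., "score": ...}) matters for testing
classifier = transformers.pipeline(
    "sentiment-analysis", model=model, tokenizer=tokenizer
)
results = classifier(["this is a test", "another test"])
```

Each entry of `results` is a dict with a "label" of "negative" or "positive" and a "score" between 0 and 1, which is all a unit test of the pipeline plumbing needs.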