Commit · dc3468d
1 Parent(s): 4cb78ba
Update README.md

README.md CHANGED
````diff
@@ -19,9 +19,18 @@ model-index:
       name: Silly-Machine/TuPyE-Dataset
       type: Silly-Machine/TuPyE-Dataset
     metrics:
-
-
-
+    - type: accuracy
+      value: 0.8
+      name: Accuracy
+      verified: true
+    - type: f1
+      value: 0.759
+      name: F1
+      verified: true
+    - type: roc_auc
+      value: 0.891
+      name: AUC
+      verified: true
     source:
       name: Open LLM Leaderboard
       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
````
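This hunk adds verified evaluation metrics (accuracy, F1, ROC AUC) to the card's `model-index` metadata. Purely as a hypothetical illustration of what those numbers correspond to, the sketch below computes the same three scores with scikit-learn; the arrays are placeholders, not the TuPyE evaluation data or the leaderboard's actual evaluation script.

```python
# Hypothetical sketch: how accuracy, F1 and ROC AUC of the kind reported in the
# model-index block are typically computed. The arrays below are placeholders
# standing in for real test labels and model scores.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])                    # gold labels (not hate / hate)
y_score = np.array([0.1, 0.8, 0.6, 0.3, 0.9, 0.2, 0.4, 0.7])   # predicted P(hate)
y_pred = (y_score >= 0.5).astype(int)                           # hard predictions at a 0.5 threshold

print("accuracy:", accuracy_score(y_true, y_pred))
print("f1:      ", f1_score(y_true, y_pred))
print("roc_auc: ", roc_auc_score(y_true, y_score))
```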
````diff
@@ -30,7 +39,7 @@ model-index:
 ## Introduction
 
 
-
+TuPy-BERT-Base is a fine-tuned BERT model designed specifically for binary classification of hate speech in Portuguese. Derived from the [BERTimbau base](https://huggingface.co/neuralmind/bert-base-portuguese-cased), TuPy-Base is a refined solution for addressing hate speech concerns.
 For more details or specific inquiries, please refer to the [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/).
 
 The efficacy of Language Models can exhibit notable variations when confronted with a shift in domain between training and test data. In the creation of a specialized Portuguese Language Model tailored for hate speech classification, the original BERTimbau model underwent a fine-tuning process carried out on the [TuPi Hate Speech DataSet](https://huggingface.co/datasets/FpOliveira/TuPi-Portuguese-Hate-Speech-Dataset-Binary), sourced from diverse social networks.
````
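The paragraph added in this hunk describes fine-tuning BERTimbau on the TuPi hate-speech dataset. As an illustrative sketch only (not the authors' actual training script), the snippet below shows how such a binary fine-tuning run could look with the Hugging Face `transformers` Trainer; the dataset column names (`text`, `label`), the `train` split name, and all hyperparameters are assumptions.

```python
# Illustrative sketch only: a plausible way to fine-tune BERTimbau for binary
# hate-speech classification, not the authors' actual training code.
# Assumptions: the dataset exposes "text"/"label" columns and a "train" split;
# all hyperparameters are arbitrary.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "neuralmind/bert-base-portuguese-cased"   # BERTimbau base
dataset = load_dataset("FpOliveira/TuPi-Portuguese-Hate-Speech-Dataset-Binary")

tokenizer = AutoTokenizer.from_pretrained(base_model)

def tokenize(batch):
    # Truncate to BERT's maximum sequence length.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

args = TrainingArguments(
    output_dir="tupy-bert-base-binary",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    tokenizer=tokenizer,   # lets the Trainer pad batches dynamically
)
trainer.train()
```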
````diff
@@ -44,7 +53,7 @@ The efficacy of Language Models can exhibit notable variations when confronted w
 | `Silly-Machine/TuPy-Bert-Base-Multilabel` | BERT-Base | 12 | 109M |
 | `Silly-Machine/TuPy-Bert-Large-Multilabel` | BERT-Large | 24 | 334M |
 
-## Example usage
+## Example usage
 
 ```python
 from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
````