Multi-class
Classifying text sequences into one of multiple categories of bias.
Overview of Task
Multi-class classification is very similar to binary sequence classification, except that there are multiple classes and each sequence is assigned to exactly one of them.
In the context of bias classification, this could be used for ternary classification (e.g. neutral, slightly biased, highly biased) or categorization by bias type (e.g. gender, racial, religious, etc.).
Like binary classification, multi-class classification is typically implemented with an encoder-only model, such as BERT, to create encodings (i.e. contextual representations) that capture "the meaning" of a sentence. These can be passed to a classification head with multiple output features, one per class, whose outputs are converted into probabilities that sum to 1.

Note: "Multi-class" classification is different from "multi-label" classification, where the text sequence can fall into more than one class at a time. The activation function you choose will determine the distribution of probabilities. For example, softmax activations will sum to 1, whereas sigmoid outputs will assign each class a score of 0-1 (and multiple classes may be within your threshold).
🤖 Models:
fairlyAspects
The fairlyAspects model was trained on the GUS synthetic corpus, which contains multi-label "type of bias" annotations from the generation process.
In the Chrome Extension, this model is used to categorize sentences that have already been classified as biased into bias types (e.g. gender, educational, etc.).
Though the model itself wasn't released as part of a paper, the GUS dataset was studied in the GUS-Net paper and found to have effective coverage across domains.
Base Model: bert-base-uncased
Dataset: GUS Synthetic Corpus (37.5k rows)
🤗Hugging Face Model
Use fairlyAspects
# pip install transformers
from transformers import pipeline
classifier = pipeline("text-classification", model="maximuspowers/bias-type-classifier")
result = classifier("Tall people are so clumsy.") # function_to_appy="sigmoid", top_k=11 for multilabelUnBIAS is a framework started in 2023 by Raza. et al at the Vector Institute, and a refresh of the technology proposed in Dbias. This model is the star of the models trained on the BEADs dataset, trained on 3.67M sentence fragments classified into: Neutral, Slightly Biased, and Highly Biased.
Base Model: bert-base-uncased
Dataset: BEADs (3.67M rows)
🤗Hugging Face Model
📄 Research Paper
Use UnBIAS Ternary Classifier:
# pip install transformers
from transformers import pipeline
classifier = pipeline("text-classification", model="newsmediabias/UnBIAS-classifier")
result = classifier("Tall people are so clumsy.")💾 Datasets:
Bias Evaluation Across Domains (BEADs) Dataset
3.67M rows | 2024
The BEADs corpus was gathered from the following datasets: MBIC, Hyperpartisan news, Toxic comment classification, Jigsaw Unintended Bias, Age Bias, Multi-dimensional news (Ukraine), and Social biases.
It was annotated first by humans, then with semi-supervised learning, and finally verified by humans.
It's one of the largest and most up-to-date datasets for bias and toxicity classification, though it's currently private, so you'll need to request access through Hugging Face.
🤗Hugging Face Dataset (request access)
📑 Contents
text: The sentence or sentence fragment.
dimension: Descriptive category of the text.
biased_words: A compilation of words regarded as biased.
aspect: Specific sub-topic within the main content.
label: Ternary label for the level of bias: neutral, slightly biased, or highly biased.
toxicity: Indicates the presence (True) or absence (False) of toxicity.
identity_mention: Any identity mentioned in the text, detected by word matching.
While BEADs doesn't have a binary label for bias, the ternary labels of the label field (neutral, slightly biased, and highly biased) can be collapsed into biased (1) or unbiased (0), as sketched below. Additionally, the toxicity field contains binary labels.
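A minimal sketch of that mapping, assuming the label values are the strings shown above (check the dataset itself for the exact spellings):
# collapse BEADs' ternary labels into binary ones
TERNARY_TO_BINARY = {
    "Neutral": 0,
    "Slightly Biased": 1,
    "Highly Biased": 1,
}

binary_label = TERNARY_TO_BINARY["Slightly Biased"]  # 1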
📄 Research Paper

Generalizations, Unfairness, and Stereotypes Synthetic Corpus
37.5k rows | 2024
The GUS dataset (released in the GUS-Net paper) is an entirely synthetic dataset. It was generated by Mistral 7B and later used for named-entity recognition. The results of GUS-Net showed that the synthetic corpus was effective across domains and contained less noise than authentic datasets.
🤗Hugging Face Dataset
📑 Contents
biased_text: The full text fragment where bias is detected.
racial: Binary label, presence (1) or absence (0) of racial bias.
religious: Binary label, presence (1) or absence (0) of religious bias.
gender: Binary label, presence (1) or absence (0) of gender bias.
age: Binary label, presence (1) or absence (0) of age bias.
nationality: Binary label, presence (1) or absence (0) of nationality bias.
sexuality: Binary label, presence (1) or absence (0) of sexuality bias.
socioeconomic: Binary label, presence (1) or absence (0) of socioeconomic bias.
educational: Binary label, presence (1) or absence (0) of educational bias.
disability: Binary label, presence (1) or absence (0) of disability bias.
political: Binary label, presence (1) or absence (0) of political bias.
sentiment: The sentiment given to Mistral 7B in the prompt.
target_group: The group Mistral 7B was prompted to target.
statement_type: Type of bias prompted (e.g. "stereotypes," "discriminatory language," "false assumptions," "offensive language," "unfair generalizations").
Mistral 7B was prompted to generate biased sentences using the arguments in the table above, which means all sentences are intended to be biased. If you'll be running your model on unbiased text fragments too, you may want to supplement the dataset with fair statements (with the same labels), as in the sketch below.
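A minimal sketch of that supplementation with the 🤗 datasets library; the repo id and neutral sentences are placeholders (check the dataset card for the real id and column names):
# pip install datasets
from datasets import Dataset, concatenate_datasets, load_dataset

# hypothetical repo id -- substitute the real one from the dataset card
gus = load_dataset("path/to/gus-synthetic-corpus", split="train")

label_cols = ["racial", "religious", "gender", "age", "nationality", "sexuality",
              "socioeconomic", "educational", "disability", "political"]

# neutral sentences get all-zero bias labels
neutral_sentences = ["The meeting starts at 9 a.m.", "Water boils at 100 degrees Celsius."]
neutral = Dataset.from_dict({
    "biased_text": neutral_sentences,
    **{col: [0] * len(neutral_sentences) for col in label_cols},
})

combined = concatenate_datasets([gus.select_columns(["biased_text"] + label_cols), neutral])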
📄 Research Paper


How it Works:

BERT (and other encoder models) process an input sequence into an encoding sequence, as shown in the figure above, where self-attention heads encode the meaning of the surrounding context words into each token's representation.
These encodings are the foundation of many NLP tasks, and it's common (in BERT sequence classification) to then classify the CLS encoding into the desired classes (e.g. Neutral, Slightly Biased, Highly Biased).
The CLS token (pooler_output) is a built-in pooling mechanism, but you can also use your own (e.g. averaging all the token representations for a mean-pooled representation).
bert-base-uncased has 768 output features (for each token), and we can pass the CLS encoding into a (768 -> n) dense layer for multi-class or multi-label classification (where "n" is the number of classes). The activation function used (e.g. softmax for multi-class, sigmoid for multi-label) turns the output logits for each of those classes into a probability.
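Here's a minimal sketch of that setup with 🤗 Transformers and PyTorch; the dense head is untrained, so its outputs are meaningless until fine-tuned:
# pip install transformers torch
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
head = torch.nn.Linear(768, 3)  # the (768 -> n) dense layer, here n=3 classes

inputs = tokenizer("Tall people are so clumsy.", return_tensors="pt")
with torch.no_grad():
    encodings = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)

cls_encoding = encodings[:, 0]         # the CLS token's 768-dim representation
# mean_pooled = encodings.mean(dim=1)  # alternative: mean pooling

logits = head(cls_encoding)                  # (1, 3)
probs = torch.softmax(logits, dim=-1)        # multi-class; use torch.sigmoid for multi-label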
Data engineers will usually set a threshold above which a probability is counted as a presence (either one global threshold, or a threshold calculated individually for each class).
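A toy illustration with made-up scores and thresholds:
import torch

probs = torch.tensor([0.91, 0.40, 0.12, 0.65])   # e.g. sigmoid scores for four bias types

global_preds = probs > 0.5                       # one global threshold: [True, False, False, True]

class_thresholds = torch.tensor([0.5, 0.3, 0.7, 0.6])
per_class_preds = probs > class_thresholds       # per-class thresholds: [True, True, False, True]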
Metrics:
When evaluating models' performance at classification, you should try to understand how positive (biased) and negative (neutral) examples fall into correct (true) and incorrect (false) predictions.
Your individual requirements will guide your interpretation (e.g. maybe you REALLY want to avoid false positives).
Confusion Matrix: Used to visualize the counts of correct and incorrect classifications for each class; the goal is a strong diagonal, where predicted classes match the true classes.

Precision: TP / (TP + FP), i.e. of the sequences predicted as biased, the fraction that actually are.
Recall: TP / (TP + FN), i.e. of the sequences that are actually biased, the fraction the model catches.
F1 Score: The harmonic mean of precision and recall, 2 * (precision * recall) / (precision + recall), balancing the two.
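Here's a quick sketch computing these with scikit-learn on toy binary labels (for ternary or multi-class labels, pass average="macro" or average="weighted" to the same functions):
# pip install scikit-learn
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]   # 1 = biased, 0 = neutral (toy labels)
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

print(confusion_matrix(y_true, y_pred))  # [[2 1]
                                         #  [1 4]]
print(precision_score(y_true, y_pred))   # 4 / (4 + 1) = 0.8
print(recall_score(y_true, y_pred))      # 4 / (4 + 1) = 0.8
print(f1_score(y_true, y_pred))          # 0.8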