Binary
Classifying text sequences as biased/fair.
Binary classification is the foundation of many bias detection frameworks, and in this case refers to classifying an entire text sequence as "biased" or "unbiased."
This is typically implemented with an encoder-only model, such as BERT, that creates encodings (i.e. contextual representations) capturing "the meaning" of a sentence, which can then be passed to one or more classifier layers with a single output feature (a 0-to-1 probability for the single class: "Biased").
One of the UnBIAS findings is that ternary classification (see Multi-Class) is a stronger approach, though the binary classification model performs comparably.
UnBIAS is a framework started in 2023 by Raza et al. at the Vector Institute, and a refresh of the technology proposed in Dbias.
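If you just want to try a pretrained classifier, the transformers pipeline API is the quickest route. A minimal sketch; the model id below is a placeholder, so substitute the actual UnBIAS classifier id from the HuggingFace hub:

```python
from transformers import pipeline

# Placeholder model id -- swap in the actual UnBIAS classifier from the HF hub.
classifier = pipeline("text-classification", model="your-org/unbias-binary-classifier")
print(classifier("Anyone who disagrees with this policy is a fool."))
# e.g. [{'label': 'Biased', 'score': 0.97}]  <- illustrative output only
```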
Base Model: bert-base-uncased
Dataset: BEAD (3.67M rows)
📄 Research Paper
3.67M rows | 2024
The BEADs corpus was gathered from the datasets: MBIC, Hyperpartisan news, Toxic comment classification, Jigsaw Unintended Bias, Age Bias, Multi-dimensional news (Ukraine), Social biases.
It was annotated by humans, then with semi-supervised learning, and finally human-verified.
It's one of the largest and most up-to-date datasets for bias and toxicity classification, though it's currently private, so you'll need to request access through HuggingFace.
🤗HuggingFace Dataset
📑 Contents
While BEADs doesn't have a binary label for bias, the ternary labels (i.e. neutral, slightly biased, and highly biased) of the label field can be categorized into biased (1) or unbiased (0), as sketched below. Additionally, the toxicity field contains binary labels.
📄 Research Paper
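To make that concrete, here's a sketch of the ternary-to-binary mapping with the datasets library. Both the dataset id and the exact label strings are assumptions here; check them against your copy of BEAD:

```python
from datasets import load_dataset

# BEAD is gated: request access on HuggingFace and authenticate first
# (e.g. `huggingface-cli login`). The dataset id below is an assumption.
bead = load_dataset("shainar/BEAD", split="train")

def to_binary(example):
    # Collapse the ternary label into biased (1) vs. unbiased (0).
    # Assumes label strings like "Neutral", "Slightly Biased", "Highly Biased".
    example["binary_label"] = 0 if example["label"].strip().lower() == "neutral" else 1
    return example

bead = bead.map(to_binary)
```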
Train your own binary classification model: 📝 Blog post - 💻 Training .ipynb
BERT (and other encoder models) processes an input sequence into a sequence of encodings, as shown in the figure below, where self-attention heads fold the meaning of the surrounding words into each token's representation.
These encodings are the foundation of many NLP tasks, and in BERT sequence classification it's common to then classify the CLS encoding into the desired classes (e.g. Biased, Neutral).
The CLS token (pooler_output) is a built-in pooling mechanism, but you can also use your own (e.g. averaging all the token representations for a mean-pooled representation).
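For instance, both pooling options are one line each with the transformers library (the example sentence is arbitrary):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("An example sentence to encode.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# Built-in pooling: the CLS encoding passed through BERT's tanh pooler layer.
cls_pooled = out.pooler_output                     # shape (1, 768)

# Custom pooling: average every token encoding, masking out padding.
mask = inputs["attention_mask"].unsqueeze(-1)      # shape (1, seq_len, 1)
mean_pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1)  # (1, 768)
```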
bert-base-uncased has 768 output features (for each token), and we can pass the CLS encoding into a (768 -> 1) dense layer.
The output logit of that classification head is then activated (typically with a sigmoid for a single-logit head, or a softmax when there's one logit per class), yielding a probability between 0 and 1.
A threshold is sometimes applied to the output (e.g. probability > 0.5 is "Biased").
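Putting those pieces together, here's a minimal sketch of that architecture in PyTorch. The class name is hypothetical, and the head is randomly initialized, so its outputs are meaningless until fine-tuned:

```python
import torch
from transformers import AutoModel, AutoTokenizer

class BiasClassifier(torch.nn.Module):  # hypothetical class name
    def __init__(self, base="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base)
        # The (768 -> 1) dense layer: one logit for the single "Biased" class.
        self.head = torch.nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, **inputs):
        pooled = self.encoder(**inputs).pooler_output  # CLS encoding, (batch, 768)
        return self.head(pooled).squeeze(-1)           # raw logit, (batch,)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BiasClassifier()

batch = tokenizer(["An example sentence."], return_tensors="pt")
with torch.no_grad():
    prob = torch.sigmoid(model(**batch))  # activation -> probability in [0, 1]
pred = (prob > 0.5).long()                # threshold: 1 = "Biased", 0 = not
```

During fine-tuning you'd pair this single-logit head with torch.nn.BCEWithLogitsLoss, which applies the sigmoid internally for numerical stability.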
Metrics:
When evaluating a model's performance at binary classification, you should understand how positive (biased) and negative (neutral) examples fall into correct (true) and incorrect (false) predictions: true positives, true negatives, false positives, and false negatives.
Your individual requirements will guide your interpretation (e.g. maybe you really want to avoid false positives, where neutral text gets flagged as biased).
Confusion Matrix: Visualizes the counts of correct and incorrect classifications for each class; the goal is for most predictions to land on the diagonal (true positives and true negatives).
Precision: Of the sequences predicted biased, the fraction that actually are: TP / (TP + FP). High precision means few false positives.
Recall: Of the sequences that are actually biased, the fraction the model catches: TP / (TP + FN). High recall means few false negatives.
F1 Score: The harmonic mean of precision and recall: 2 * (precision * recall) / (precision + recall).
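All of these are one-liners with scikit-learn. A minimal sketch with toy labels (the arrays below are made-up examples, not real model outputs):

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = biased, 0 = neutral (made-up labels)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # made-up model predictions

# Rows are true labels, columns are predictions: [[TN, FP], [FN, TP]].
print(confusion_matrix(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP) = 3/4
print("recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN) = 3/4
print("f1:       ", f1_score(y_true, y_pred))         # harmonic mean = 0.75
```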
HF Space to Test UnBIAS Classifier
| Field | Description |
|---|---|
| text | The sentence or sentence fragment. |
| dimension | Descriptive category of the text. |
| biased_words | A compilation of words regarded as biased. |
| aspect | Specific sub-topic within the main content. |
| label | The ternary bias label: highly biased, slightly biased, or neutral. |
| toxicity | Indicates the presence (True) or absence (False) of toxicity. |
| identity_mention | Mention of any identity, based on word match. |
| biased_text | The full text fragment where bias is detected. |
| Field | Description |
|---|---|
| racial | Binary label, presence (1) or absence (0) of racial bias. |
| religious | Binary label, presence (1) or absence (0) of religious bias. |
| gender | Binary label, presence (1) or absence (0) of gender bias. |
| age | Binary label, presence (1) or absence (0) of age bias. |
| nationality | Binary label, presence (1) or absence (0) of nationality bias. |
| sexuality | Binary label, presence (1) or absence (0) of sexuality bias. |
| socioeconomic | Binary label, presence (1) or absence (0) of socioeconomic bias. |
| educational | Binary label, presence (1) or absence (0) of educational bias. |
| disability | Binary label, presence (1) or absence (0) of disability bias. |
| political | Binary label, presence (1) or absence (0) of political bias. |
| sentiment | The sentiment given to Mistral 7B in the prompt. |
| target_group | The target group specified in the Mistral 7B prompt. |
| statement_type | Type of bias prompted (e.g. "stereotypes," "discriminatory language," "false assumptions," "offensive language," "unfair generalizations"). |
| Field | Description |
|---|---|
| text | The text fragment (a few sentences or less). |
| outlet | The source of the text fragment. |
| label | 0 or 1 (biased or unbiased). |
| topic | The subject of the text fragment. |
| news_link | URL to the original source. |
| biased_words | A list of the words contributing to bias. |
| type | Political sentiment (if applicable). |