Binary

Classifying text sequences as biased/fair.

Overview of Task:

Binary classification is the foundation of many bias detection frameworks, and in this case refers to classifying an entire text sequence as "biased" or "unbiased."

This is typically implemented with an encoder-only model, such as BERT, to create encodings (i.e. contextual representations) that capture "the meaning" of a sentence; these can then be passed to one or more classifier layers with a single output feature (a 0-to-1 probability of the single class: "Biased"). A minimal sketch of this setup is shown below.
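For illustration, here's a hypothetical sketch (not one of the models documented on this page): bert-base-uncased loaded with a single-output classification head. The head is randomly initialized here, so the probability is meaningless until the model is fine-tuned on a bias dataset.

# A minimal sketch: an encoder-only model with a one-feature classification head.
# bert-base-uncased is just an example checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=1,  # one output feature -> probability of the single class "Biased"
)

inputs = tokenizer("Anyone can excel at coding.", return_tensors="pt")
with torch.no_grad():
    logit = model(**inputs).logits         # shape (1, 1)
prob_biased = torch.sigmoid(logit).item()  # 0-1 probability (untrained head, so arbitrary)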


🤖 Models:

UnBIAS Classifier

UnBIAS is a framework started in 2023 by Raza et al. at the Vector Institute, and a refresh of the technology proposed in Dbias.

🤗 Hugging Face Model

📄 Research Paper

Use UnBIAS Classifier:

# pip install transformers
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("newsmediabias/UnBIAS-classification-bert")
model = AutoModelForSequenceClassification.from_pretrained("newsmediabias/UnBIAS-classification-bert")

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, device=0 if device.type == "cuda" else -1)
classifier("Anyone can excel at coding.")

Dbias

While later reimplementations have changed the approach, Dbias was a trailblazer, especially for binary classification (the first phase of its pipeline).

🤗 Hugging Face Model

📄 Research Paper

Use Dbias Bias Classification:

# pip install Dbias
# pip install https://huggingface.co/d4data/en_pipeline/resolve/main/en_pipeline-any-py3-none-any.whl
from Dbias.bias_classification import *

# returns classification label for a given sentence fragment.
classifier("Tall people are so clumsy.")

💾 Datasets:

Bias Evaluation Across Domains (BEADs) Dataset

3.67M rows | 2024

It was annotated by humans, then expanded with semi-supervised learning, and finally human-verified.

It's one of the largest and most up-to-date datasets for bias and toxicity classification, though it's currently gated, so you'll need to request access through Hugging Face.

🤗 Hugging Face Dataset

📑 Contents

Fields:

  • text: The sentence or sentence fragment.
  • dimension: Descriptive category of the text.
  • biased_words: A compilation of words regarded as biased.
  • aspect: Specific sub-topic within the main content.
  • label: The bias label. It is ternary: highly biased, slightly biased, or neutral.
  • toxicity: Indicates the presence (True) or absence (False) of toxicity.
  • identity_mention: Mention of any identity, based on word match.
While BEADs doesn't have a binary label for bias, the ternary labels (neutral, slightly biased, and highly biased) of the label field can be collapsed into biased (1) or unbiased (0), as sketched below. Additionally, the toxicity field contains binary labels.
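A minimal sketch of that mapping with the datasets library (the repo id comes from the dataset link on this page; the exact label strings are an assumption, so check the dataset card once you have access):

from datasets import load_dataset

# BEADs is gated; request access on Hugging Face before loading.
beads = load_dataset("newsmediabias/news-bias-full-data", split="train")

def to_binary(example):
    # Assumed string labels: "neutral" -> 0 (unbiased); any biased label -> 1 (biased).
    example["binary_label"] = 0 if str(example["label"]).strip().lower() == "neutral" else 1
    return example

beads = beads.map(to_binary)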

📄 Research Paper

Generalizations, Unfairness, and Stereotypes Dataset (Synthetic Corpus)

37.5k rows | 2024

🤗 Hugging Face Dataset

📑 Contents

Fields:

  • biased_text: The full text fragment where bias is detected.
  • racial: Binary label, presence (1) or absence (0) of racial bias.
  • religious: Binary label, presence (1) or absence (0) of religious bias.
  • gender: Binary label, presence (1) or absence (0) of gender bias.
  • age: Binary label, presence (1) or absence (0) of age bias.
  • nationality: Binary label, presence (1) or absence (0) of nationality bias.
  • sexuality: Binary label, presence (1) or absence (0) of sexuality bias.
  • socioeconomic: Binary label, presence (1) or absence (0) of socioeconomic bias.
  • educational: Binary label, presence (1) or absence (0) of educational bias.
  • disability: Binary label, presence (1) or absence (0) of disability bias.
  • political: Binary label, presence (1) or absence (0) of political bias.
  • sentiment: The sentiment Mistral 7B was given in the generation prompt.
  • target_group: The group Mistral 7B was prompted to target.
  • statement_type: Type of bias prompted (e.g. "stereotypes," "discriminatory language," "false assumptions," "offensive language," "unfair generalizations").

Mistral 7B was prompted to generate biased sentences using the arguments in the fields above, so every row should be biased. If you use it for binary classification, label every sentence from this dataset as 1 (biased) and supplement it with a dataset of fair statements, as sketched below.
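A rough sketch of that setup (the repo id comes from the dataset link on this page; the biased_text column name is taken from the field list above, and the fair/neutral examples are placeholders you'd swap for a real corpus):

from datasets import load_dataset, Dataset, concatenate_datasets

gus = load_dataset("ethical-spectacle/biased-corpus", split="train")
gus = gus.map(lambda ex: {"text": ex["biased_text"], "label": 1})  # every row is biased
gus = gus.select_columns(["text", "label"])

# Placeholder fair statements; replace with a real corpus of neutral sentences.
fair = Dataset.from_dict({
    "text": ["The meeting starts at 10 a.m.", "Water boils at 100 degrees Celsius at sea level."],
    "label": [0, 0],
})

combined = concatenate_datasets([gus, fair]).shuffle(seed=42)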

📄 Research Paper

Bias Annotations By Experts (BABE)

4.12k records | 2023

Human annotated, and all annotators must agree on a label. In its paper, BABE showed strong results with BERT for sequence classification of news articles. While smaller than some other datasets, the annotations are very reliable (highly recommended as an external dataset for model evaluation; a quick evaluation sketch follows below).

🤗 Hugging Face Dataset

📑 Contents

Fields:

  • text: The text fragment (a few sentences or less).
  • outlet: The source of the text fragment.
  • label: Binary label, unbiased (0) or biased (1).
  • topic: The subject of the text fragment.
  • news_link: URL to the original source.
  • biased_words: A list of the words contributing to bias.
  • type: Political sentiment (if applicable).
📄 Research Paper
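A sketch of using BABE for external evaluation (the BABE repo id comes from the dataset link above; the classifier name is a placeholder for your own fine-tuned model, and the column names and label strings are assumptions to verify against the dataset card and your model's config):

from datasets import load_dataset
from transformers import pipeline
from sklearn.metrics import classification_report

babe = load_dataset("mediabiasgroup/BABE", split="train")
clf = pipeline("text-classification", model="your-org/your-bias-model")  # placeholder model id

outputs = clf(babe["text"], batch_size=32, truncation=True)
# Map the pipeline's label strings to 0/1; adjust to your model's id2label config.
preds = [1 if out["label"] in ("Biased", "LABEL_1") else 0 for out in outputs]

print(classification_report(babe["label"], preds, target_names=["unbiased", "biased"]))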

Not added yet

Not added yet

Not added yet


How it Works:

  1. BERT (and other encoder models) process an input sequence into a sequence of encodings, where self-attention heads fold the meaning of the surrounding context words into each token's representation.

  2. These encodings are the foundation of many NLP tasks, and it's common (in BERT sequence classification) to then classify the [CLS] encoding into the desired classes (e.g. Biased, Neutral).

    1. The [CLS] token (pooler_output) is a built-in pooling mechanism, but you can also use your own pooling mechanism (e.g. averaging all the token representations for a mean-pooled representation).

  3. bert-base-uncased outputs 768 features for each token, so we can pass the [CLS] encoding into a (768 -> 1) dense layer.

    1. The output logit of that classification head is activated (typically with a sigmoid, or softmax for multiple classes) to produce a probability between 0 and 1.

  4. A threshold is sometimes applied to the output (e.g. probability > 0.5 is "Biased"); see the sketch below.
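Here's a minimal sketch of those steps (the dense layer is untrained here, so the output is arbitrary; in practice the encoder and head are fine-tuned together on a labeled bias dataset):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
head = torch.nn.Linear(768, 1)  # (768 -> 1) classification head

inputs = tokenizer("Anyone can excel at coding.", return_tensors="pt")
with torch.no_grad():
    outputs = encoder(**inputs)
    cls_encoding = outputs.last_hidden_state[:, 0]   # [CLS] token encoding, shape (1, 768)
    # outputs.pooler_output is BERT's built-in pooled [CLS]; mean-pooling is another option
    prob_biased = torch.sigmoid(head(cls_encoding))  # single logit -> probability in [0, 1]

label = "Biased" if prob_biased.item() > 0.5 else "Neutral"  # optional threshold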

Metrics:

When evaluating a model's performance at binary classification, you should understand how positive (biased) and negative (neutral) examples fall into correct (true) and incorrect (false) predictions: true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). A scikit-learn sketch follows the list below.

Your individual requirements will guide your interpretation (e.g. maybe you REALLY want to avoid false positives).

  • Confusion Matrix: Used to visualize how many predictions in each class were correct and incorrect.

  • Precision: $\frac{TP}{TP + FP}$

  • Recall: $\frac{TP}{TP + FN}$

  • F1 Score: $2 \times \frac{precision \times recall}{precision + recall}$
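A quick sketch of computing these with scikit-learn (the labels below are toy placeholders, 1 = biased, 0 = neutral):

from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # model predictions

print(confusion_matrix(y_true, y_pred))               # rows = actual, columns = predicted
print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall:", recall_score(y_true, y_pred))        # TP / (TP + FN)
print("f1:", f1_score(y_true, y_pred))                # harmonic mean of precision and recall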


One of the UnBIAS findings is that ternary classification (see Multi-class) is a stronger approach, but the binary classification model is just as good.

HF Space to test the UnBIAS Classifier.

Base Model: bert-base-uncased | Dataset: BEAD (3.67M rows)

Dbias proposed an architecture in 2022 for addressing news media bias, with a framework that utilized binary classification, named-entity recognition, bias masking, and word recommendation (Raza et al.).

Base Model: bert-base-uncased | Dataset: MBAD Dataset

Dbias has a PyPI package.

This is a running list of cool binary classification models we've seen and want to learn more about. If you find one that should be here, send it to us on Discord.

  • d4data/bias-detection-model: Fake/biased news binary classifier.
  • POLLCHECK/BERT-classifier
  • valurank/distilroberta-bias: Passes the "looks good to me" test with flying colors. Also has a quantized version on hf.

The BEADs corpus was gathered from these datasets: MBIC, Hyperpartisan news, Toxic comment classification, Jigsaw Unintended Bias, Age Bias, Multi-dimensional news (Ukraine), and Social biases.

The GUS dataset (released in the GUS-Net paper) is an entirely synthetic dataset. It was generated by Mistral 7B and later used for named-entity recognition. The GUS-Net results showed that the synthetic corpus was effective across domains and had less noise than authentic datasets.

Train your own binary classification model: 📝 Blog post | 💻 Training .ipynb
