Welcome to The Fair-ly Project
A directory and implementation of SOTA bias detection research papers, all in one place.
The Fair-ly Project was founded and is maintained by ML researchers from Ethical Spectacle Research and The Vector Institute, who have published groundbreaking bias detection papers such as Dbias ('22), Nbias ('23), and GUS-Net ('24).
The Fair-ly Project is an open-source collection of resources, such as:
Research papers
Blogs and videos
Datasets and models
Ethos: Bias-detection research should be accessible to users and developers of all levels.
Here are a few tools we built for using state-of-the-art models in practice; there's something for everyone :).
Our Chrome extension, Fair-ly, is a showcase of SOTA models. Anyone can run a bias analysis pipeline/dashboard on their webpage, no code required.
It was created to open the door to bias detection technology for newcomers, by demonstrating its strengths and weaknesses. The tasks it's intended to perform are (see the code sketch after this list):
Binary bias classification (sentence -> biased/fair).
Bias aspect classification (sentence -> gender bias, racial bias, ...).
Token classification of generalizations, unfairness, and stereotypes.
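If you'd rather run these models yourself than use the extension, all three tasks map onto standard Hugging Face `transformers` pipelines. Here's a minimal sketch; the model IDs are placeholders, so substitute the checkpoints linked from the docs pages below:

```python
# Minimal sketch of the extension's three-stage analysis using the
# Hugging Face `transformers` library. Model IDs are placeholders,
# not the project's actual checkpoint names.
from transformers import pipeline

binary_clf = pipeline("text-classification", model="your-org/binary-bias-model")   # placeholder ID
aspect_clf = pipeline("text-classification", model="your-org/bias-aspect-model")   # placeholder ID
token_clf = pipeline(
    "token-classification",
    model="your-org/gus-net-model",  # placeholder ID
    aggregation_strategy="simple",   # merge subword tokens into word-level spans
)

sentence = "Everyone from that city is rude."

print(binary_clf(sentence))  # e.g. [{"label": "Biased", "score": ...}]
print(aspect_clf(sentence))  # e.g. [{"label": "racial bias", "score": ...}]
print(token_clf(sentence))   # spans tagged as generalizations, unfairness, or stereotypes
```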
Try this interactive demo for a quick look:
TextAnalyzer
Docs
MultimodalAnalyzer
Docs
Binary Classification API
Types-of-bias Classification API
GUS-Net (Token Classification) API
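Each API has its own docs page with the real endpoint and payload format. As a rough sketch of what a call looks like over HTTP, assuming a hypothetical endpoint URL and response schema:

```python
# Hedged sketch of calling one of the hosted APIs. The URL and JSON
# shapes below are illustrative placeholders; check the API's docs
# page for the actual endpoint and payload format.
import requests

API_URL = "https://example.com/binary-bias"  # placeholder endpoint

resp = requests.post(API_URL, json={"text": "Everyone from that city is rude."})
resp.raise_for_status()
print(resp.json())  # e.g. {"label": "Biased", "score": 0.97} (illustrative)
```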
Recent Papers
Papers to cite ;)
Binary Classification
Classifying text sequences as "Biased" or "Fair."
Multi-Class Classification
Classifying text sequences into more specific classes.
Named-Entity Recognition
Classifying tokens (words) that contain bias.
Multimodal Classification
Classifying image and text pairs for bias.
Discord
Ask questions or share a project.