Welcome to RumorMill
A directory and implementation of SOTA bias detection research papers, all in one place.
RumorMill was founded and is maintained by ML researchers at Ethical Spectacle Research and The Vector Institute.
RumorMill is an open-source collection of resources, such as:
Research papers
Blogs and videos
Datasets and models
Ethos: Bias-detection research should be accessible to users and developers of all levels.
Here are a few tools we built for using state-of-the-art models in practice; there's something for everyone :).
Our Chrome extension, Fair-ly, is a showcase of SOTA models. Anyone can run a bias-analysis pipeline and dashboard on any webpage, no code required.
It was created to open the door to bias detection technology for newcomers by demonstrating its strengths and weaknesses. The tasks it is intended to perform are (see the sketch after this list):
Binary bias classification (sentence -> biased/fair).
Bias aspect classification (sentence -> gender bias, racial bias, ...).
Token classification of generalizations, unfairness, and stereotypes.
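To make the sentence-level stages concrete, here is a minimal sketch of the first two tasks using the Hugging Face transformers library. The model IDs are placeholders, not the exact checkpoints Fair-ly ships with; substitute the models documented below. The token-level stage is sketched further down, under Named-Entity Recognition.

```python
# Minimal sketch of a sentence-level bias pipeline (placeholder model IDs).
from transformers import pipeline

# Stage 1: binary classification (sentence -> biased/fair).
binary_clf = pipeline("text-classification", model="your-org/binary-bias-classifier")  # placeholder ID

# Stage 2: bias aspect classification (sentence -> gender bias, racial bias, ...).
aspect_clf = pipeline("text-classification", model="your-org/bias-aspect-classifier")  # placeholder ID

sentence = "Women are too emotional to lead engineering teams."

binary = binary_clf(sentence)[0]      # e.g. {'label': 'BIASED', 'score': 0.97}
print("binary:", binary)

# Only run the aspect classifier on sentences flagged as biased.
if binary["label"].lower() != "fair":
    aspect = aspect_clf(sentence)[0]  # e.g. {'label': 'gender', 'score': 0.91}
    print("aspect:", aspect)
```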
Try this interactive demo for a quick look:
TextAnalyzer
Docs
MultimodalAnalyzer
Docs
Binary Classification API
Types-of-bias Classification API
GUS-Net (Token Classification) API
Discord
Ask questions or share a project.
Recent Papers
Papers to cite ;)
Binary Classification
Classifying text sequences as "Biased" or "Fair."
Multi-Class Classification
Classifying text sequences into more specific classes.
Named-Entity Recognition
Classifying tokens (words) that contain bias.
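Inference with these token-level models typically looks like the hedged sketch below, using a Hugging Face token-classification pipeline. The model ID is a placeholder for whichever checkpoint you pick from this category; the exact entity labels (e.g. generalization, unfairness, and stereotype spans in GUS-Net-style taggers) depend on the model.

```python
# Minimal sketch of token-level bias tagging (placeholder model ID).
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens back into whole words/spans.
tagger = pipeline(
    "token-classification",
    model="your-org/bias-token-tagger",  # placeholder ID
    aggregation_strategy="simple",
)

sentence = "Teenagers are always reckless drivers."

for span in tagger(sentence):
    # Each span carries an entity group (the model's bias label), the matched text,
    # a confidence score, and character offsets useful for highlighting on a page.
    print(span["entity_group"], repr(span["word"]), round(span["score"], 3), span["start"], span["end"])
```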
Multimodal Classification
Classifying image and text pairs for bias.
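Models in this category score an image together with accompanying text. As a rough, illustrative sketch only (not one of the indexed papers' models), here is zero-shot image-text scoring with CLIP; a fine-tuned multimodal bias classifier would replace the generic checkpoint, the example file, and the label prompts.

```python
# Illustrative sketch: scoring an image against bias-related text prompts with CLIP.
# The checkpoint, image path, and prompts are stand-ins, not a model from this directory.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example_article_image.jpg")  # hypothetical local file
texts = ["a biased portrayal of a group", "a neutral news photograph"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: similarity of the image to each text prompt.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for label, p in zip(texts, probs.tolist()):
    print(f"{label}: {p:.3f}")
```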