Welcome to RumorMill
A directory of state-of-the-art (SOTA) bias-detection research and implementations, all in one place.
RumorMill is an open-source collection of resources, such as:
Research papers
Blogs and videos
Datasets and models
Ethos: Bias-detection research should be accessible to users and developers of all levels.
🛠️ RumorMill Toolkit
Here are a few tools we built for using state-of-the-art models in practice; there's something for everyone :).
Fair-ly Extension (Pending Renaming)
Our Chrome extension, Fair-ly, is a showcase of SOTA models. Anyone can run a bias-analysis pipeline and dashboard on the webpage they're viewing, no code required.
It was created to open the door to bias-detection technology for newcomers by demonstrating its strengths and weaknesses. The tasks it's intended to perform are:
Binary bias classification (sentence -> biased/fair).
Bias aspect classification (sentence -> gender bias, racial bias, ...).
Token classification of generalizations, unfairness, and stereotypes.
Try this interactive demo for a quick look:
Install Our Package:
pip install the-fairly-project
How to Use The Pipeline:
from fairly import TextAnalyzer
analyzer = TextAnalyzer(bias="ternary", classes=True, top_k_classes=3, ner="gus")
result = analyzer.analyze("Tall people are so clumsy.")
Example Response:
{
    'text': {
        'text': 'Tall people are so clumsy.',
        'label': 'Slightly Biased',
        'score': 0.6829080581665039,
        'aspects': {
            'physical': 0.9650779366493225,
            'gender': 0.024978743866086006,
            'socioeconomic': 0.023334791883826256
        }
    },
    'ner': [
        {'token': 'tall', 'labels': ['B-STEREO', 'B-GEN', 'B-UNFAIR']},
        {'token': 'people', 'labels': ['I-STEREO', 'I-GEN', 'I-UNFAIR']},
        {'token': 'are', 'labels': ['I-STEREO']},
        {'token': 'so', 'labels': ['I-STEREO']},
        {'token': 'clumsy', 'labels': ['I-STEREO', 'B-UNFAIR', 'I-UNFAIR']},
        {'token': '.', 'labels': ['I-STEREO', 'I-UNFAIR']}
    ]
}
🧠 Learn
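Once you have a response like the one above, plain dict access is all you need to pull out the sentence label, the strongest bias aspect, or the tokens flagged with a given tag. A minimal sketch (the `top_aspect` and `tokens_with` helpers are our own illustrations, not part of the package; the `result` literal mirrors the example response):

```python
# Example response from analyzer.analyze("Tall people are so clumsy."),
# reproduced as a literal so the helpers below are self-contained.
result = {
    'text': {
        'text': 'Tall people are so clumsy.',
        'label': 'Slightly Biased',
        'score': 0.6829080581665039,
        'aspects': {
            'physical': 0.9650779366493225,
            'gender': 0.024978743866086006,
            'socioeconomic': 0.023334791883826256,
        },
    },
    'ner': [
        {'token': 'tall', 'labels': ['B-STEREO', 'B-GEN', 'B-UNFAIR']},
        {'token': 'people', 'labels': ['I-STEREO', 'I-GEN', 'I-UNFAIR']},
        {'token': 'are', 'labels': ['I-STEREO']},
        {'token': 'so', 'labels': ['I-STEREO']},
        {'token': 'clumsy', 'labels': ['I-STEREO', 'B-UNFAIR', 'I-UNFAIR']},
        {'token': '.', 'labels': ['I-STEREO', 'I-UNFAIR']},
    ],
}

def top_aspect(result):
    """Return the (aspect, score) pair with the highest score."""
    aspects = result['text']['aspects']
    return max(aspects.items(), key=lambda kv: kv[1])

def tokens_with(result, tag):
    """Collect tokens whose BIO labels mention the given tag, e.g. 'STEREO'."""
    return [t['token'] for t in result['ner']
            if any(tag in label for label in t['labels'])]

print(result['text']['label'])     # Slightly Biased
print(top_aspect(result))          # ('physical', 0.9650779366493225)
print(tokens_with(result, 'GEN'))  # ['tall', 'people']
```

The BIO scheme here works the usual way: `B-` marks the first token of a span (stereotype, generalization, or unfairness) and `I-` marks its continuation, so reconstructing full spans is a matter of grouping consecutive tokens per tag.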