TextAnalyzer Pipeline
A Python module for running NLP bias analysis models.
1. Import
```python
# pip install the-fairly-project
from fairly import TextAnalyzer
```

2. Initialize the Module
```python
text_pipeline = TextAnalyzer(
    bias="ternary",      # defaults to None
    classes=True,        # defaults to False
    top_k_classes=3,     # defaults to 3
    ner="gus"
)
```

Customize your pipeline
bias
Classify bias at the sentence level (sequence classification):
- None: Default (no bias sequence classification).
- "binary": Not implemented yet (e.g. Fair, Biased).
- "ternary": Three-class bias classification (as used in the initialization example above).
When set to "ternary", this adds two fields to the "text" dictionary in the return dictionary: label and score (0-1).
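
A minimal sketch of reading these fields, assuming a pipeline with only bias enabled and the sentence from step 3 (the printed values are whatever the model returns, not shown here):

```python
from fairly import TextAnalyzer

# Sequence-level bias classification only; "ternary" adds "label" and "score"
# to the "text" dictionary of the result.
bias_pipeline = TextAnalyzer(bias="ternary")
result = bias_pipeline.analyze("Data scientists are so smart")

print(result["text"]["label"])  # predicted bias label
print(result["text"]["score"])  # confidence score between 0 and 1
```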
classes
Classify the types of bias a sentence contains:
- False: Default (no bias aspects classification).
- True: Uses fairlyAspects (11 classes).
When set to True, this adds one field to the "text" dictionary in the return dictionary: aspects (which contains top_k_classes entries in [CLASS]: [SCORE] format); see the sketch after top_k_classes below.
top_k_classes
Number of classes returned in the aspects dict.
- Int: 1 to 11 (defaults to 3).
Only relevant when classes is set to True.
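
A short sketch combining classes and top_k_classes, assuming the same sentence as step 3; the class names that come back are whatever fairlyAspects predicts and are only read here as keys of the aspects dict:

```python
from fairly import TextAnalyzer

# classes=True adds an "aspects" dict to the "text" dictionary;
# top_k_classes caps how many classes it contains (1 to 11).
aspect_pipeline = TextAnalyzer(bias="ternary", classes=True, top_k_classes=3)
result = aspect_pipeline.analyze("Data scientists are so smart")

# Each entry is in [CLASS]: [SCORE] format.
for class_name, score in result["text"]["aspects"].items():
    print(class_name, score)
```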
ner
Run named-entity recognition on the text sequence.
- None: Default (no token classification).
- "gus": Enables token classification (as used in the initialization example above).
When in use, it appends a new "ner" dictionary to the return dictionary.
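
A sketch of enabling token classification on its own; the exact structure of the "ner" dictionary is not covered on this page, so it is simply printed as returned:

```python
from fairly import TextAnalyzer

# ner="gus" appends a "ner" dictionary to the return dictionary.
ner_pipeline = TextAnalyzer(ner="gus")
result = ner_pipeline.analyze("Data scientists are so smart")

print(result["ner"])  # token-level results
```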
3. Run Bias Analysis
```python
result = text_pipeline.analyze("Data scientists are so smart")
```

4. Example Output
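
An illustrative shape of the return dictionary for the pipeline configured in step 2, based on the field descriptions above; the label string, class names, and scores are placeholders, and the exact contents of "ner" may differ:

```python
{
    "text": {
        "label": "<bias label>",    # from bias="ternary"
        "score": 0.0,               # 0-1 confidence, placeholder value
        "aspects": {                # from classes=True, top_k_classes=3
            "<class 1>": 0.0,
            "<class 2>": 0.0,
            "<class 3>": 0.0
        }
    },
    "ner": {
        # token-level results from ner="gus"
    }
}
```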