# Recent Papers Timeline

<table data-full-width="true"><thead><tr><th width="147">Date</th><th width="497">Title</th><th>Authors</th></tr></thead><tbody>
<tr><td>October 2024</td><td><a href="https://arxiv.org/abs/2410.08388">GUS-Net: Social Bias Classification in Text with Generalizations, Unfairness, and Stereotypes</a></td><td>Maximus Powers, Hua Wei, Umang Mavani, Harshitha Reddy Jonala, Ansh Tiwari</td></tr>
<tr><td>September 2024</td><td><a href="https://arxiv.org/abs/2409.12651">A Deep Dive into Fairness, Bias, Threats, and Privacy in Recommender Systems: Insights and Future Research</a></td><td>Falguni Roy, Xiaofeng Ding, K.-K. R. Choo, Pan Zhou</td></tr>
<tr><td>July 2024</td><td><a href="https://arxiv.org/abs/2407.10241">BiasAlert: A Plug-and-play Tool for Social Bias Detection in LLMs</a></td><td>Zhiting Fan, Ruizhe Chen, Ruiling Xu, Zuozhu Liu</td></tr>
<tr><td>July 2024</td><td><a href="https://arxiv.org/abs/2407.18689">The BIAS Detection Framework: Bias Detection in Word Embeddings and Language Models for European Languages</a></td><td>Alexandre Puttick, Leander Rankwiler, Catherine Ikae, Mascha Kurpicz-Briki</td></tr>
<tr><td>May 2024</td><td><a href="https://arxiv.org/abs/2405.11290">MBIAS: Mitigating Bias in Large Language Models While Retaining Context</a></td><td>Shaina Raza, Ananya Raval, Veronica Chatrath</td></tr>
<tr><td>May 2024</td><td><a href="https://doi.org/10.1162/dint_a_00255">FAIR Enough: Develop and Assess a FAIR-Compliant Dataset for Large Language Model Training?</a></td><td>Shaina Raza, Shardul Ghuge, Chen Ding, Elham Dolatabadi, Deval Pandya</td></tr>
<tr><td>May 2024</td><td><a href="https://doi.org/10.1109/tcss.2024.3392469">Unlocking Bias Detection: Leveraging Transformer-Based Models for Content Analysis</a></td><td>Shaina Raza, Oluwanifemi Bamgbose, Veronica Chatrath, Shardul Ghuge, Yan Sidyakin, Abdullah Yahya Mohammed Muaad</td></tr>
<tr><td>February 2024</td><td><a href="https://www.sciencedirect.com/science/article/pii/S2949719124000463?via%3Dihub">HarmonyNet: Navigating hate speech detection</a></td><td>Shaina Raza, Veronica Chatrath</td></tr>
<tr><td>December 2023</td><td><a href="https://aclanthology.org/2023.icon-1.33/">Bias Detection Using Textual Representation of Multimedia Contents</a></td><td>N/A</td></tr>
<tr><td>November 2023</td><td><a href="https://proceedings.neurips.cc/paper_files/paper/2023/hash/b01153e7112b347d8ed54f317840d8af-Abstract-Datasets_and_Benchmarks.html">Stable Bias: Evaluating Societal Representations in Diffusion Models</a></td><td>Sasha Luccioni, Christopher Akiki, Margaret Mitchell, Yacine Jernite</td></tr>
<tr><td>September 2023</td><td><a href="https://www.sciencedirect.com/science/article/abs/pii/S0957417423020444?via%3Dihub">Nbias: A natural language processing framework for BIAS identification in text</a></td><td>Shaina Raza, Muskan Garg, Deepak John Reji, Syed Raza Bashir, Chen Ding</td></tr>
<tr><td>September 2023</td><td><a href="https://arxiv.org/abs/2309.00770">Bias and Fairness in Large Language Models: A Survey</a></td><td>Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen K. Ahmed</td></tr>
<tr><td>August 2023</td><td><a href="https://doi.org/10.1186/s44247-023-00029-w">A framework for multi-faceted content analysis of social media chatter regarding non-medical use of prescription medications</a></td><td>Shaina Raza, Brian Schwartz, Sahithi Lakamana, Yao Ge, Abeed Sarker</td></tr>
<tr><td>June 2023</td><td><a href="https://doi.org/10.1609/icwsm.v17i1.22160">Retweet-BERT: Political Leaning Detection Using Language Features and Information Diffusion on Social Networks</a></td><td>Julie Jiang, Xiang Ren, Emilio Ferrara</td></tr>
<tr><td>December 2022</td><td><a href="https://aclanthology.org/2022.findings-emnlp.262">Towards Identifying Social Bias in Dialog Systems: Framework, Dataset, and Benchmark</a></td><td>Jingyan Zhou, Jiawen Deng, Fei Mi, Yitong Li, Yasheng Wang, Minlie Huang, Xin Jiang, Qun Liu, Helen Meng</td></tr>
<tr><td>September 2022</td><td><a href="https://arxiv.org/abs/2209.14557">Neural Media Bias Detection Using Distant Supervision With BABE -- Bias Annotations By Experts</a></td><td>Timo Spinde, Manuel Plank, Jan-David Krieger, Terry Ruas, Bela Gipp, Akiko Aizawa</td></tr>
<tr><td>August 2022</td><td><a href="https://arxiv.org/abs/2208.05777">Dbias: Detecting biases and ensuring Fairness in news articles</a></td><td>Shaina Raza, Deepak John Reji, Chen Ding</td></tr>
<tr><td>July 2022</td><td><a href="https://doi.org/10.1145/3477495.3531945">Bias Mitigation for Toxicity Detection via Sequential Decisions</a></td><td>Lu Cheng, Ahmadreza Mosallanezhad, Yasin N. Silva, Deborah L. Hall, Huan Liu</td></tr>
<tr><td>December 2021</td><td><a href="https://arxiv.org/abs/2112.07868">Few-shot Instruction Prompts for Pretrained Language Models to Detect Social Biases</a></td><td>Shrimai Prabhumoye, Rafal Kocielnik, Mohammad Shoeybi, Anima Anandkumar, Bryan Catanzaro</td></tr>
</tbody></table>

Help us keep this timeline up to date: if you know of a paper we should add, send a message in [Discord](https://discord.gg/Jn6TYxwRjy).
