Tanu Mitra is an Associate Professor at the University of Washington Information School, where she leads the Social Computing and ALgorithmic Experiences (SCALE) lab. Her research blends human-centered data science with social science principles to develop new knowledge, methods, and systems that defend against the epistemic risks of online mis(dis)information, bias, hate, and harms. To do so, she employs an interdisciplinary approach combining human-computer interaction, social computing, machine learning, and natural language processing. Tanu’s work has been supported by grants from the NSF, NIH, DoD, Social Science One, and other foundations. Her research has been recognized through multiple awards and honors, including an NSF CRII award, an early-career ONR Young Investigator Program (YIP) award, the Adamic-Glance Distinguished Young Researcher award, and the Virginia Tech College of Engineering Outstanding New Assistant Professor award, along with several best paper awards. Dr. Mitra currently serves on Spotify’s safety advisory board and has previously served on the advisory board of the Social Science Research Council’s Social Data Initiative.

Attend this talk via Zoom

Talk: Algorithmic Governance: Auditing Online Systems for Bias and Misinformation

Abstract: Large-scale online systems are fundamental to how people consume information. The emergence of large language models and industrial applications like ChatGPT has further deepened people’s dependence on online systems as their primary information source. Yet these systems pose significant epistemic risks, characterized by the prevalence of harmful misinformation and biased content. The risks are further amplified by the algorithms powering these platforms, be it YouTube’s video recommendations or generative AI-powered LLM interfaces. How do we systematically investigate algorithmic bias and misinformation? How do we govern algorithmic systems to safeguard against problematic content? In this talk, I will present a series of algorithmic audit studies. The first audits the search and recommendation algorithms of online platforms such as YouTube and Amazon for misinformation, while the second audits LLMs for cultural bias, particularly in the context of the Global South. I will end with ideas for how we can develop effective, long-term algorithmic governance, the challenges in doing so, and the new governance challenges and opportunities emerging with recent advances in large language models.