Fernando Diaz is a research scientist at Google Research Montréal. His research focuses on the design of information access systems, including search engines and recommender systems. He is particularly interested in understanding and addressing the broader societal implications of these technologies and of artificial intelligence more generally. Previously, Fernando was the assistant managing director of Microsoft Research Montréal and the manager of MSR Montréal's group focused on Fairness, Accountability, Transparency, and Ethics (FATE). Before that, he was a director of research at Spotify, where he established its research organization on recommendation, search, and personalization. Fernando and his collaborators have received awards for their contributions at SIGIR, WSDM, ISCRAM, ECIR, CSCW, and CIKM. He holds a CIFAR AI Chair and is the recipient of the 2017 British Computer Society Karen Spärck Jones Award for young information retrieval and natural language processing researchers. Fernando has co-organized several NIST TREC initiatives, as well as WSDM (2013), FAccT (2019), SIGIR (2021), and the CIFAR Workshop on Artificial Intelligence and the Curation of Culture (2019). He received his PhD from the University of Massachusetts Amherst in 2008.

Talk: Responsible Design of Information Access Systems

Watch this talk via Zoom (passcode: 357582)

Abstract: Information access systems such as search engines and recommender systems mediate the interaction between people and overwhelming repositories of consumable data, including web content, music catalogs, and social media. The prevalence of information access problems has led to the adoption of ranking-based search and recommendation algorithms across a variety of online services, either as a core feature or as a supporting technology. While effective in research settings, these algorithms, when deployed in production environments, can surface a variety of unanticipated social harms, including the unfair allocation of exposure, misinformation, and stereotype reinforcement. This talk will introduce a research program on the responsible design of information access systems focused on understanding the relationship between algorithms, individuals, and society. To ground this approach, I will present recent work on the measurement and mitigation of unfairness in ranking systems. I will begin by discussing how inherent properties of ranking tasks and their solutions result in unequal effectiveness for both end users and content creators. Motivated by these issues, I will then define the expected exposure metric, a new evaluation measure based on user behavior models that generalizes classic utility metrics to incorporate unfairness. To mitigate unfairness in existing ranking algorithms, I will describe and evaluate a stochastic algorithm that directly optimizes expected exposure. Although grounded in information access, these results have implications for more general ranking settings found in natural language processing and machine learning. I will close by proposing future work focused on deepening and broadening the field of responsible information access.
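The expected exposure idea can be made concrete with a small sketch. The code below is an illustrative approximation, not the talk's actual formulation: it assumes a simple patience-based (RBP-style) browsing model in which the probability a user examines rank i decays geometrically, estimates each document's expected exposure by averaging over rankings sampled from a stochastic policy, and reports the squared disparity from an equal-exposure target. All function names, the `gamma` patience parameter, and the toy target values are hypothetical.

```python
import numpy as np

def rbp_exposure(rank, gamma=0.5):
    # Probability that a user examines the item at this 0-indexed rank
    # under a geometric patience (RBP-style) browsing model.
    return gamma ** rank

def expected_exposure(sampled_rankings, n_docs, gamma=0.5):
    """Average per-document exposure over rankings drawn from a
    stochastic ranking policy (each ranking is a list of doc ids)."""
    exposure = np.zeros(n_docs)
    for ranking in sampled_rankings:
        for rank, doc in enumerate(ranking):
            exposure[doc] += rbp_exposure(rank, gamma)
    return exposure / len(sampled_rankings)

# Toy example: two equally relevant documents.
deterministic = [[0, 1], [0, 1]]  # policy that always ranks doc 0 first
randomized    = [[0, 1], [1, 0]]  # policy that alternates the top slot

target = np.array([0.75, 0.75])   # equal target exposure for equal relevance
for name, policy in [("deterministic", deterministic), ("randomized", randomized)]:
    eps = expected_exposure(policy, n_docs=2)
    print(name, eps, "disparity:", np.sum((eps - target) ** 2))
```

The deterministic policy concentrates exposure on the document it happens to rank first (exposures 1.0 and 0.5, disparity 0.125), while the randomized policy spreads exposure equally (0.75 each, disparity 0), illustrating why stochastic rankings can equalize exposure among equally relevant items.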