Hoda Heidari is currently a Postdoctoral Associate in the Department of Computer Science at Cornell University, where she collaborates with Professors Jon Kleinberg, Karen Levy, and Solon Barocas through the Artificial Intelligence, Policy, and Practice (AIPP) initiative. Hoda’s research is broadly concerned with the societal and economic aspects of Artificial Intelligence, and in particular with issues of fairness and explainability in Machine Learning. She utilizes tools and methods from Computer Science (Algorithms, AI, and ML) and the Social Sciences (Economics and Political Philosophy) to quantify and mitigate the inequalities that arise when socially consequential decisions are automated. Her work has appeared in top-tier Computer Science venues, such as ICML, NeurIPS, KDD, AAAI, IJCAI, and EC.
Before coming to Cornell, Hoda was a Postdoctoral Fellow at the Institute for Machine Learning at ETH Zürich, working under the supervision of Professor Andreas Krause. She completed her doctoral studies in Computer and Information Science at the University of Pennsylvania, where she was advised by Professors Michael Kearns and Ali Jadbabaie. Hoda has organized multiple academic events on the topic of her research, including a tutorial at the Web Conference (WWW) and a workshop at the Neural Information Processing Systems (NeurIPS) conference. Beyond computer science venues, she has been an invited participant in numerous interdisciplinary panels and discussions on the implications of AI for society.
Talk: "Distributive Justice for Machine Learning: An Interdisciplinary Perspective on Defining, Measuring, and Mitigating Algorithmic Unfairness"
Abstract: Automated decision-making tools are increasingly in charge of making high-stakes decisions for people in areas such as education, credit lending, criminal justice, and beyond. These tools can exhibit and exacerbate certain undesirable biases and disparately harm already disadvantaged and marginalized groups and individuals. In this talk, I will illustrate how we can bring together tools and methods from computer science, economics, and political philosophy to define, measure, and mitigate algorithmic unfairness in a principled manner. In particular, I will address two key questions:
- Given the appropriate notion of harm/benefit, how should we measure and bound unfairness? Existing fairness notions specify conditions a model must satisfy, but they do not offer a proper measure of unfairness. In practice, however, designers often need to select the least unfair model among a feasible set of unfair alternatives. I present (income) inequality indices from economics as a unifying framework for measuring unfairness at both the individual and group level, and I propose cardinal social welfare functions as an alternative measure of fairness behind a veil of ignorance, along with a computationally tractable method for bounding inequality (see the sketch after this list).
- Given a specific decision-making context, how should we define fairness as the equality of some notion of harm/benefit across socially salient groups? First, I will offer a framework for thinking about this question normatively: I map recently proposed notions of group fairness to models of equality of opportunity. This mapping provides a unifying framework for understanding these notions and, importantly, allows us to spell out the moral assumptions underlying each of them. Second, I give a descriptive answer to the question of “fairness as equality of what?” by presenting a series of adaptive human-subject experiments we recently conducted to identify which existing notion best captures laypeople’s perception of fairness.
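The abstract references inequality indices and cardinal social welfare functions without fixing specific forms. As a minimal, hypothetical sketch, assuming the generalized entropy index as the inequality index and a constant-relative-risk-aversion (CRRA) welfare function as the cardinal welfare measure (the talk itself may use different formulations), selecting the least unfair model from a feasible set could look like this:

```python
import numpy as np

def generalized_entropy_index(benefits, alpha=2.0):
    """Generalized entropy index GE(alpha) of a per-individual benefit vector.

    GE(alpha) = 1 / (n * alpha * (alpha - 1)) * sum_i ((b_i / mu)^alpha - 1),
    where mu is the mean benefit. It equals 0 when everyone receives the same
    benefit and grows as the distribution becomes more unequal.
    """
    b = np.asarray(benefits, dtype=float)
    mu, n = b.mean(), b.size
    return float(((b / mu) ** alpha - 1.0).sum() / (n * alpha * (alpha - 1.0)))

def crra_welfare(benefits, gamma=0.5):
    """Cardinal social welfare with constant relative risk aversion gamma.

    Larger gamma makes the evaluation more inequality-averse, weighting the
    worst-off individuals more heavily (gamma = 1 reduces to log utilities).
    """
    b = np.asarray(benefits, dtype=float)
    if np.isclose(gamma, 1.0):
        return float(np.log(b).sum())
    return float((b ** (1.0 - gamma)).sum() / (1.0 - gamma))

# Hypothetical per-individual benefits produced by two feasible models.
model_benefits = {
    "model_A": [1.0, 1.0, 0.2, 0.2],  # higher total benefit, very unequal
    "model_B": [0.7, 0.7, 0.6, 0.6],  # slightly lower total, far more equal
}

for name, b in model_benefits.items():
    print(name,
          "GE(2) =", round(generalized_entropy_index(b), 4),
          "welfare =", round(crra_welfare(b), 4))

# "Least unfair among feasible alternatives" as a concrete argmin.
least_unfair = min(model_benefits,
                   key=lambda m: generalized_entropy_index(model_benefits[m]))
print("least unfair by GE(2):", least_unfair)
```

The benefit vectors, model names, and the choices of alpha = 2 and gamma = 0.5 above are all illustrative. The point is only that an index of this kind turns "select the least unfair model among a feasible set" into a concrete argmin, and that the welfare function offers a complementary ranking that trades off total benefit against its dispersion.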