Maria De-Arteaga is a joint PhD candidate in Machine Learning and Public Policy at Carnegie Mellon University’s Machine Learning Department and Heinz College. She holds an M.Sc. in Machine Learning from Carnegie Mellon University (2017) and a B.Sc. in Mathematics from Universidad Nacional de Colombia (2013). She was an intern at Microsoft Research, Redmond, in 2017 and at Microsoft Research, New England, in 2018. Prior to graduate school, she worked as a data science researcher and as an investigative journalist. Her work received the Best Thematic Paper Award at NAACL’19 and the Innovation Award on Data Science at Data for Policy’16, and has been featured by UN Women and Global Pulse in their report Gender Equality and Big Data: Making Gender Data Visible. She is a co-founder of the NeurIPS Machine Learning for the Developing World (ML4D) Workshop and a recipient of a 2018 Microsoft Research Dissertation Grant.
Talk: Machine Learning in High-Stakes Settings: Risks and Opportunities
Abstract: Machine learning (ML) is increasingly being used to support decision-making in critical settings, where predictions have potentially grave implications for human lives. Examples include healthcare, hiring, child welfare, and the criminal justice system. In this talk, I will characterize how societal biases encoded in data may be compounded by ML models, and I will present an approach to mitigate biases without assuming access to protected attributes. Moreover, even when data does not encode discriminatory biases, limitations of the observed outcomes still hinder the effective application of standard ML methods to improve decision-making. I will discuss some of these challenges, such as the selective labels problem and omitted payoff bias, and I will propose methodology to estimate and leverage human consistency in order to reduce the gap between what experts care about and what machines optimize.
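For readers unfamiliar with the selective labels problem mentioned in the abstract, the following is a minimal, purely illustrative sketch on synthetic data (not the speaker's methodology, and not drawn from any real dataset). It assumes a screening setting in which outcomes are observed only for cases a human decision-maker accepted, and shows how statistics computed from those labeled cases alone can differ from the population a deployed model would actually face.

```python
# Illustrative sketch of the "selective labels" problem on synthetic data.
# Assumption: outcomes are observed only for "released" cases, i.e. those the
# human screener judged low-risk based on the feature they could see.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True risk depends on two features; the human screener acts only on x1.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
p_bad = 1.0 / (1.0 + np.exp(-(x1 + x2)))   # true probability of a bad outcome
bad_outcome = rng.random(n) < p_bad

# Labels are observed only for released cases (the screener's low-risk picks).
released = x1 < 0.0

rate_labeled = bad_outcome[released].mean()   # what can be measured from the data
rate_population = bad_outcome.mean()          # what matters at deployment time

print(f"bad-outcome rate among labeled (released) cases: {rate_labeled:.3f}")
print(f"bad-outcome rate in the full population:         {rate_population:.3f}")
```

Because the screener releases only the cases that look safe on x1, the labeled subset is not representative of the full population, so models trained or evaluated on it alone inherit that selection bias.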