William Isaac is a Staff Research Scientist at DeepMind, an Advisory Board Member of the Human Rights Data Analysis Group, and a Research Affiliate at the Oxford University Centre for the Governance of AI. He received his PhD in Political Science from Michigan State University and a Master's in Public Policy from George Mason University. His research focuses on the societal impact and governance of emerging technologies. Prior to DeepMind, William served as an Open Society Foundations Fellow. His research has been featured in publications such as Science, The New York Times, and MIT Technology Review.
Talk: Algorithmic Fairness from a Sociotechnical Lens
Abstract: Emerging technologies, such as artificial intelligence (AI), are increasingly portrayed as poised to reshape modern society and achieve far-reaching societal transformation. Yet many of the initial deployments of real-world AI systems have also manifested salient societal risks, with the potential for acute harms to historically marginalized peoples. In response, various stakeholders have proposed ethical guidelines and technical mitigations to serve as guardrails for future technology development and deployment.
In this talk, I propose that for ethical frameworks and technical mitigations to be effective in re-aligning technology toward socially beneficial outcomes, social and historical insights must be meaningfully taken into account. Using examples from multiple domains in the context of algorithmic fairness, I will discuss how critical evaluations that blend insights from data science and the social sciences can form a dynamic sociotechnical "lens" for highlighting societal harms or identifying socially beneficial applications. The talk will conclude with a discussion of potential challenges to adopting a sociotechnical lens in practice and open questions for future research.