Bhargavi Ganesh is a PhD student in Informatics within the University of Edinburgh’s Centre for Technomoral Futures, where she uses mixed-methods approaches to evaluate the design of organizational and regulatory governance measures for AI. She previously worked as a policy researcher and data scientist in government and nonprofit institutions, focusing on the impact of consumer finance policies on marginalized groups. She holds a Bachelor’s in Economics from New York University and a Master’s in Computational Analysis and Public Policy from the University of Chicago.
Abstract: In the past few years, the ubiquity of AI systems, coupled with greater awareness of the harms generated by their use, has resulted in calls for more robust governance of these systems. Despite the emergence of promising policy proposals worldwide, however, AI governance continues to be treated by many scholars and practitioners as a proverbial Gordian knot: intractable because of the technical and organisational complexity of sociotechnical AI systems, and because of a fear that imperfect regulation will irrevocably suppress technological innovation. In this presentation, I will draw on the historical example of a previously “ungovernable” technology, the steamboat of the 1800s, to challenge this latent scepticism and argue that the governance of AI should itself be seen as an incremental exercise in innovation. Although the comparison may not be immediately intuitive, both steamboats and AI have generated challenges related to causal opacity (the difficulty of understanding how systems fail and why) and to the distribution of accountability across many hands. In the steamboat era, the US government responded to these challenges by developing governance methods that were innovative at the time, including information gathering, independent testing, targeted funding, new forms of legal liability, and the creation of the first agency tasked with safety regulation. Steamboat governance was necessarily iterative, requiring many instances of trial and error before achieving its aims. While building on previously developed policy instruments, such as licensing and auditing, is an important first step for AI governance, applying lessons from the steamboat era can also open us to testing more innovative approaches. Viewing global AI governance as a testbed for policy innovation can thus enable us to celebrate the progress that has already been made, remain optimistic about the emergence of new regulatory interventions, and push back against implicit fatalism regarding the ability of policymakers to govern AI.