Bhargavi Ganesh is a PhD Student in Informatics within the University of Edinburgh’s Centre for Technomoral Futures, where she uses mixed methods approaches to evaluate the design of organizational and regulatory governance measures for AI. Bhargavi previously worked as a policy researcher and data scientist within government and nonprofit institutions, focused on the impact of consumer finance policies on marginalized groups. She holds a Bachelor’s in Economics from New York University and a Master’s in Computational Analysis and Public Policy from the University of Chicago.

Attend this talk via Zoom

Abstract: In the past few years, the ubiquity of AI systems, coupled with greater awareness of the harms generated by their use, has resulted in calls for more robust governance of these systems. Despite the emergence of promising policy proposals worldwide, however, AI governance continues to be treated by many scholars and practitioners as a proverbial Gordian knot, intractable due to the technical and organisational complexity of sociotechnical AI systems, and a fear that imperfect regulation will result in the irrevocable suppression of technological innovation. In this presentation, I will draw on the historical example of a previously "ungovernable" technology, the steamboat in the 1800s, to challenge latent scepticism and argue that the governance of AI should itself be seen as an incremental exercise in innovation. Although the comparison may not be immediately intuitive, both steamboats and AI have generated challenges related to causal opacity (a difficulty understanding how and why systems fail) and the distribution of accountability across many hands. In the steamboat era, the US government responded to these challenges by developing governance methods that were innovative at the time, including information gathering, independent testing and targeted funding, devising new forms of legal liability, and creating the first agency tasked with safety regulation. Steamboat governance was necessarily iterative, requiring many instances of trial and error before achieving its aims. While building on previously developed policy instruments, such as licensing and auditing, represents an important first step for AI governance, applying lessons from the steamboat era can also open us to testing more innovative approaches.
Viewing global AI governance as a testbed for policy innovation can thus enable us to celebrate the progress that has already been made, remain optimistic about the emergence of new regulatory interventions, and push back against implicit fatalism regarding the ability of policymakers to govern AI.