Meredith Ringel Morris is Director for Human-AI Interaction Research at Google DeepMind. Prior to joining DeepMind, she was Director of the People + AI Research team in Google Research’s Responsible AI division. She also previously served as Research Area Manager for Interaction, Accessibility, and Mixed Reality at Microsoft Research. In addition to her industry role, Dr. Morris has a faculty appointment at the University of Washington, where she is an Affiliate Professor in The Paul G. Allen School of Computer Science & Engineering and also in The Information School. Dr. Morris has been recognized as a Fellow of the ACM and as a member of the ACM SIGCHI Academy for her contributions to Human-Computer Interaction research. She earned her Sc.B. in computer science from Brown University and her M.S. and Ph.D. in computer science from Stanford University. More details on her research and publications are available at http://merrie.info.
Talk: AGI is Coming… Is HCI Ready?
Abstract: We are at a transformational junction in computing, in the midst of an explosion in capabilities of foundational AI models that may soon match or exceed typical human abilities for a wide variety of cognitive tasks, a milestone often termed Artificial General Intelligence (AGI). Achieving AGI (or even closely approaching it) will transform computing, with ramifications permeating through all aspects of society. This is a critical moment not only for Machine Learning research, but also for the field of Human-Computer Interaction (HCI).
In this talk, I will define what I mean (and what I do NOT mean) by “AGI.” I will then discuss how this new era of computing necessitates a new sociotechnical research agenda on methods and interfaces for studying and interacting with AGI. For instance, how can we extend status quo design and prototyping methods to envision novel experiences at the limits of our current imaginations? What novel interaction modalities might AGI (or superintelligence) enable? How do we create interfaces for computing systems that may intentionally or unintentionally deceive an end user? How do we bridge the “gulf of evaluation” when a system may arrive at an answer through methods that fundamentally differ from human mental models, or that may be too complex for an individual user to grasp? How do we evaluate technologies that may have unanticipated systemic side effects on society when released into the wild?
I will close by reflecting on the relationship between HCI and AI research. Typically, HCI and other sociotechnical domains are not considered as core to the ML research community as areas like model building. However, I argue that research on Human-AI Interaction and the societal impacts of AI is vital and central to this moment in computing history. HCI must not become a “second-class citizen” to AI, but rather be recognized as fundamental to ensuring that the path to AGI and beyond is a beneficial one.