Join us at 2:30 p.m. (EST) Friday, March 12, 2021, for a virtual Info Sci Colloquium with Amy Pavel, who presents "Human-AI Systems for Making Videos Useful".
Amy Pavel is a Postdoctoral Fellow at Carnegie Mellon University and a Research Scientist in AI/ML at Apple. Her research explores AI-driven interactive techniques for making digital communication effective and accessible for all. Her work creating Human-AI systems to improve communication has appeared at ACM/IEEE conferences including UIST, CHI, ASSETS, and VR. She recently served as an associate chair for the UIST and CHI program committees and was selected as a Rising Star in EECS. She previously received her Ph.D. in Computer Science at UC Berkeley, where her work developing interactive video abstractions was supported by an NDSEG fellowship and an EECS Excellence Award. Read more about her research at: https://amypavel.com/
Talk: Human-AI Systems for Making Videos Useful
Abstract: Video is becoming a core medium for communicating a wide range of content, including educational lectures, vlogs, and how-to tutorials. While videos are engaging and informative, they lack the familiar and useful affordances of text for browsing, skimming, and flexibly transforming information. This severely limits who can interact with video content and how they can interact with it, makes editing a laborious process, and means that much of the information in videos is not accessible to everyone.
What future systems will make videos useful for all users?
In this talk, I'll share my work creating interactive Human-AI systems that combine the benefits of multiple communication media (e.g., text, video, and audio) in two key areas: 1) helping domain experts find content of interest in videos, and 2) making videos accessible to people who are blind or have visual impairments. First, I'll discuss core challenges of finding information in videos, drawn from interviews with domain experts and people with disabilities. Then, I'll present new systems that leverage AI, along with the results of technical and user evaluations that demonstrate their efficacy. I'll conclude with how hybrid HCI-AI breakthroughs will make digital communication more effective and accessible in the future, and how new interactions can help us realize the full potential of recent AI/ML advances.