By Patricia Waldron
Artificial intelligence and other digital technologies have transformed the way that people live and work, bringing unprecedented opportunities and risks.
To help people navigate our increasingly tech-driven world, Natalie Bazarova, professor of communication in the Cornell University College of Agriculture and Life Sciences, and Qian Yang, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science, have founded the Digital and AI Literacy Initiative. They aim to develop digital literacy resources for underserved communities, to equip all people with the ability to safely and responsibly use these technologies. They also hope these efforts will help prevent AI from deepening existing inequalities.
CALS and Cornell Bowers CIS have jointly awarded Bazarova and Yang $300,000 in seed funding for the initiative, bringing together the two colleges’ extensive expertise in communication and AI.
In founding the initiative, Bazarova was inspired by the CALS Roadmap to 2050, which outlines the college’s goal to develop and share transdisciplinary solutions to major 21st century problems. “There is a strong emphasis on solutions,” Bazarova said. “And those solutions have to be done through interdisciplinary collaborations and community partnerships.”
The cross-college effort is also in concert with the Cornell AI Initiative, a Radical Collaboration put forth by scholars from across the university to elevate Cornell as a leader in AI development, education, and ethics.
“AI has powerful applications and it is imperative that this technology be applied to minimize inequity, not exacerbate it,” said Kavita Bala, dean of Cornell Bowers CIS and lead dean of the Cornell AI Initiative.
Bazarova and Yang intend to leverage Cornell’s strengths in the social sciences, communication, AI and human-computer interaction to understand and address the complex, interconnected factors that put people at risk of cyberbullying, phishing, or other cybercrimes. “Often, people blame bad actors, or blame bad tech, but more often, I believe it's an interplay between the two,” Yang said. “So, how can we address these, and at the same time protect vulnerable people?”
One of the initial projects will build upon Social Media TestDrive, a tool developed by Bazarova’s group in collaboration with Common Sense Education that has helped more than 660,000 students become better digital citizens who stand up to cyberbullying instead of remaining bystanders. Currently, the program allows kids to practice using social media through a simulation, but Yang and Bazarova are developing an AI-driven social media learning co-pilot that will encourage positive behavior as youth use social media in the real world. It will use algorithms to detect cyberbullying language and a chatbot to coach kids as they use the platforms.
“We are exploring this idea of how we can leverage AI, not just to correct and remove the bad content online, but actually cultivate a better social media culture, where people encourage each other to be good citizens,” Yang said.
In the future, Bazarova and Yang plan to expand to other vulnerable populations, such as the elderly, asylum seekers, and patients seeking healthcare. They will explore a range of learning technologies, such as learning simulations, gamification, and multimedia experiences, each tailored to the specific community they aim to reach.
Through the initiative, Bazarova and Yang seek to forge further cross-campus collaborations, such as with researchers at the Cornell Brooks School of Public Policy and Cornell Tech, and with other Cornell initiatives with complementary goals, such as the Center for Health Equity, the Design & Technology Initiative, and the Digital Life Initiative.
They are planning a public event to showcase technology work at Cornell with positive public impact, and a brown bag series to build a community of scholars pursuing similar goals across disciplines.
“The problem we’re trying to address is so broad. It’s important to bring different stakeholders into the process,” Bazarova said. “We wanted to create a platform that can help everyone do this kind of work.”
Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.