By Jackie Swift for the Cornell Chronicle

Every Sunday, Beth Lyon meets with a team of students to read xenophobic hate speech on Twitter. Their exploration of this dark corner of the internet is part of a project that aims to track anti-immigrant hate speech in real time.

The work is urgent because anti-foreigner speech amplified on social media is helping to fuel xenophobia and jingoistic politics in the United States and throughout the world, said Lyon, clinical professor of law, associate dean for experiential education and clinical program director in Cornell Law School.

Nationalist sentiments also promote immigrants’ dispossession and exclusion, she said. Yet despite the profound impact of online hate speech aimed specifically at foreign nationals, she said, there is no accessible means for monitoring it.

To address this need, Lyon joined with Gilly Leshed, senior lecturer in the Cornell Ann S. Bowers College of Computing and Information Science, Marten van Schijndel, assistant professor in the Department of Linguistics in the College of Arts and Sciences, and external collaborators to develop the xenophobia meter project.

The project aims to create a public-facing web platform to track anti-foreigner speech on Twitter. It recently received grant support from Global Cornell’s Migrations initiative and the Mario Einaudi Center for International Studies.

As a first step in creating the xenophobia meter, the team has turned to machine learning, teaching a computer algorithm the difference between xenophobic and nonxenophobic speech so that it can eventually recognize when a tweet is likely to be xenophobic.
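The article does not describe the model itself. Purely as an illustration, a supervised text classifier of this general kind might be sketched as below; the toy example tweets and the TF-IDF plus logistic regression pipeline are assumptions for the sketch, not the team’s actual approach. Whatever the model, it learns from tweets that humans have already labeled.

```python
# Illustrative sketch only: a supervised classifier that learns to separate
# xenophobic from nonxenophobic text. The toy data and the TF-IDF + logistic
# regression pipeline are assumptions, not the project's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy training set: text paired with a human-assigned label
# (1 = xenophobic, 0 = not xenophobic).
texts = [
    "they should all go back where they came from",
    "welcoming new neighbors makes our town stronger",
    "immigrants are ruining this country",
    "proud to volunteer at the refugee resettlement center",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Estimate the probability that an unseen tweet is xenophobic.
print(model.predict_proba(["immigrants should go back"])[0][1])
```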

That’s where the Sunday gathering comes in. The students comb through Twitter and assess tweets based on levels of pro- or anti-foreigner sentiment, using a numerical scale developed by the researchers. Three people must label each tweet so that an average score can be assigned.

“It’s a big job,” Lyon said. “Our human labelers will have to assess and label at least 10,000 tweets.”
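Once a tweet has its three labels, combining them is a simple averaging step. The snippet below is a minimal sketch of that step; the column names, the -2 to +2 scale and the flagging threshold are hypothetical illustrations, not the project’s actual scheme.

```python
# Minimal sketch of combining three annotators' scores per tweet.
# The scale (-2 = strongly anti-foreigner ... +2 = strongly pro-foreigner),
# column names and threshold are hypothetical, not the project's schema.
import pandas as pd

labels = pd.DataFrame({
    "tweet_id": [101, 101, 101, 102, 102, 102],
    "annotator": ["A", "B", "C", "A", "B", "C"],
    "score": [-2, -1, -2, 1, 0, 1],
})

# Average the three annotators' scores for each tweet.
averaged = labels.groupby("tweet_id")["score"].mean().rename("avg_score")

# Flag tweets whose average falls below a hypothetical xenophobia threshold.
flagged = averaged[averaged <= -1.0]

print(averaged)
print(flagged)
```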

The ultimate goal is a social media-monitoring platform with information on countries around the globe, available 24/7. The site will live-monitor social media, visualizing in real time the levels of hate associated with posts coming from each country. The website will also present data on the situation for foreign nationals in every country, including the percentage of immigrants and top countries of origin, Lyon said.

The xenophobia meter’s Migrations support is part of the initiative’s Mellon Foundation funding for research highlighting connections between racism, dispossession, and migration.

Lily Pagan ’20, a software engineer at Google, was part of the founding student team and continues to consult on the project. “It’s a hard technical problem to solve,” she said, “but as the AI matures, we would like to build an early warning system – a way to take a pulse and say, ‘Here’s what we see coming out of society right now.’”

The researchers hope policymakers will use the website to gauge the degree of xenophobia in their own countries to help plan strategies to address the problem. They also envision researchers and journalists using the content to monitor and report on anti-foreigner sentiment.

“Everything we do will be open source,” Lyon said, “so people can look at the data and decide if they agree with the algorithm: Is this xenophobic? Is it problematic? It’s really a way to curate annotated data and make the information widely available.”

Lyon and her collaborators started conceptualizing the project shortly before the COVID-19 pandemic began, when anti-Asian speech and hate actions escalated in the United States.

“Politicians at the highest levels of government continue to foment hate against people of Asian descent with pandemic scapegoating,” Lyon said. “Meanwhile, community members my clinic students and I work with frequently report being the target of xenophobic speech, including from supervisors who use anti-immigrant slurs in the workplace. Over the years some of the students on the project have talked about their families’ own experiences.”

“Creating this internet platform is a way to say, ‘This is wrong – and someone is paying attention to it,’” Lyon said.

Jackie Swift is a freelance writer for Global Cornell.