Robyn Caplan is a media and information policy scholar with over five years of experience conducting policy research in the non-profit sector. She is a PhD candidate (ABD) at Rutgers University in the School of Communication and Information Studies, advised by Philip M. Napoli, and an Affiliate at the Data & Society Research Institute, where she worked as a full-time researcher from 2015 to 2018 before leaving to complete her dissertation. Her research focuses on platform governance and content standards. Her most recent work investigates the extent to which organizational dynamics at major platform companies impact the development and enforcement of policies aimed at limiting disinformation and hate speech, and the role that regulation, industry coordination, and advocacy can play in changing platform policies.
Talk: Tiers and Networks: The Shifting Strategies of Private Platform Governance
Abstract: Concerns about the spread of false information and hate speech online have spurred new debates about who should set standards for content in the information era. Platforms and technology companies, in particular, have come under public scrutiny for their role in setting content standards – determining what content is prioritized and what is removed – for individuals and organizations all over the world. At the same time, platforms face their own constraints (often self-imposed) in their role as global content moderators, lacking the resources, expertise, and context to address content concerns across cultures and politics. This talk examines hidden dimensions of platform decision-making in content governance: the ways in which platforms strategically use relationships with organizations and users to mitigate issues of scale and context in content policy. In particular, it examines these issues in the context of YouTube's Partner Program and the controversies around the adpocalypse (2016–2018) to explore how users make sense of YouTube's shifting tiered governance strategies and its stated values as an open platform for expression.