Angelina Wang is a Computer Science PhD student at Princeton University advised by Olga Russakovsky. Her research focuses on machine learning fairness and algorithmic bias. Her work has been recognized with the NSF GRFP, EECS Rising Stars, a Siebel Scholarship, and a Microsoft AI & Society Fellowship. She has published in top machine learning (ICML, AAAI), computer vision (ICCV, IJCV), responsible computing (FAccT, JRC), and interdisciplinary (Big Data & Society) venues, including spotlight and oral presentations. She has previously interned with Microsoft Research and Arthur AI, and received a B.S. in Electrical Engineering and Computer Science from UC Berkeley.
Abstract: With the widespread proliferation of machine learning comes both the opportunity for societal benefit and the risk of harm. Approaching responsible machine learning is challenging because technical approaches may prioritize a mathematical definition of fairness that correlates poorly with real-world constructs of fairness, owing to too many layers of abstraction. Conversely, social approaches that engage with prescriptive theories may produce findings too abstract to translate effectively into practice. In my research, I bridge these approaches and use social implications to guide technical work. I will discuss three research directions that show how, despite the technical convenience of considering equality acontextually, a stronger engagement with societal context allows us to operationalize a more equitable formulation. First, I will introduce a dataset tool that we developed to analyze complex, socially grounded forms of visual bias. Then, I will provide empirical evidence for how we should incorporate societal context when bringing intersectionality into machine learning. Finally, I will discuss how, in the excitement over using LLMs for tasks like human replacement, we have neglected the importance of human positionality. Overall, I will explore how we can expand a narrow focus on equality in responsible machine learning into a broader understanding of equity that substantively engages with societal context.
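To make the abstract's contrast concrete, here is a minimal illustrative sketch, not the speaker's tooling, of the kind of acontextual "equality" metric the talk critiques: a demographic-parity gap computed on a single attribute, shown alongside intersectional subgroup rates that a single-axis audit can mask. The column names (`sex`, `race`, `pred`) and the toy data are hypothetical, chosen only for illustration.

```python
# Sketch of an acontextual group-fairness audit vs. an intersectional one.
# Assumes a DataFrame of binary model predictions with demographic columns.
import pandas as pd

def positive_rate_gap(df: pd.DataFrame, group_col: str, pred_col: str = "pred") -> float:
    """Demographic-parity gap: largest difference in positive-prediction
    rate between any two groups along a single attribute."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

def intersectional_rates(df: pd.DataFrame, group_cols: list[str], pred_col: str = "pred") -> pd.Series:
    """Positive-prediction rate per intersectional subgroup (e.g., sex x race),
    disparities here can be invisible to a single-attribute audit."""
    return df.groupby(group_cols)[pred_col].mean()

# Hypothetical toy predictions for demonstration only.
preds = pd.DataFrame({
    "sex":  ["f", "f", "m", "m", "f", "m"],
    "race": ["a", "b", "a", "b", "b", "a"],
    "pred": [1, 0, 1, 1, 1, 0],
})
print(positive_rate_gap(preds, "sex"))              # single-axis gap
print(intersectional_rates(preds, ["sex", "race"])) # subgroup rates
```

Note that the single-axis gap is easy to compute precisely because it ignores context and intersecting identities, which is the abstraction the abstract argues correlates poorly with real-world constructs of fairness.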