Hanrui Zhang is a Ph.D. student at Carnegie Mellon University, advised by Vincent Conitzer. He was named a finalist for the 2021 Facebook Fellowship. His work won the Best Student Paper Award at the European Symposium on Algorithms (ESA) and an Honorable Mention for Best Paper Award at the AAAI Conference on Human Computation and Crowdsourcing (HCOMP). He received his bachelor's degree from Yao's Class at Tsinghua University, where he won the Outstanding Undergraduate Thesis Award.
Talk: Designing and Analyzing Machine Learning Algorithms in the Presence of Strategic Behavior
Abstract: Machine learning algorithms now play a major part in all kinds of decision-making scenarios. When the stakes are high, self-interested agents (those about whom decisions are being made) are increasingly tempted to manipulate the machine learning algorithm in order to better fulfill their own goals, which generally differ from the decision maker's. This highlights the importance of making machine learning algorithms robust against manipulation. In this talk, I will focus on generalization (i.e., the bridge between training and testing) in strategic classification: traditional wisdom suggests that a classifier trained on historical observations (i.e., the training set) usually also works well on future data points to be classified (i.e., the test set). I will show how this very general principle fails when the agents being classified respond strategically to the classifier, and present an intuitive fix that leads to provable (and in fact optimal) generalization guarantees under strategic manipulation. I will then discuss the role of incentive compatibility in strategic classification, and present experimental results that illustrate how the theoretical results can guide practice. If time permits, I will also discuss distinguishing strategic agents with samples, and/or dynamic decision making with strategic agents.
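To make the generalization failure concrete, here is a minimal, hypothetical NumPy sketch (not taken from the talk): a threshold classifier looks accurate on non-strategic data, loses accuracy once agents within a manipulation budget game their scores past the cutoff, and recovers accuracy when the threshold is shifted to anticipate the budget. The toy model, the budget parameter, and the threshold-shifting fix are all illustrative assumptions; the shift is one standard idea from the strategic-classification literature, not necessarily the fix presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting (illustration only, not the talk's model):
# qualified agents (label 1) tend to have higher scores than
# unqualified agents (label 0).
n = 1000
labels = rng.integers(0, 2, size=n)
scores = rng.normal(loc=labels.astype(float), scale=0.5)

# "Train" a simple threshold classifier on non-strategic data.
threshold = 0.5
clean_acc = ((scores >= threshold).astype(int) == labels).mean()

# At test time, every agent wants a positive label: anyone within a
# manipulation budget below the threshold games their score past it.
budget = 0.4
gamed = scores.copy()
gamed[(scores < threshold) & (scores >= threshold - budget)] = threshold
strategic_acc = ((gamed >= threshold).astype(int) == labels).mean()

print(f"accuracy, non-strategic agents:   {clean_acc:.3f}")
print(f"accuracy, strategic agents:       {strategic_acc:.3f}")  # drops

# One standard idea from the strategic-classification literature
# (not necessarily the fix presented in the talk): shift the threshold
# by the manipulation budget, so that exactly the agents whose true
# score clears the original threshold can afford to reach the new one.
robust = threshold + budget
gamed2 = scores.copy()
gamed2[(scores < robust) & (scores >= robust - budget)] = robust
robust_acc = ((gamed2 >= robust).astype(int) == labels).mean()

print(f"accuracy, budget-aware threshold: {robust_acc:.3f}")  # recovers
```

In this sketch the budget-aware threshold exactly reproduces the non-strategic decisions, since an agent can reach the shifted cutoff precisely when their true score clears the original one; the talk's results concern richer settings where such guarantees must be proved rather than read off by construction.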