Alexander is a doctoral candidate at the Centre for Technomoral Futures and the philosophy department at the University of Edinburgh. His research focuses on the intersection of philosophy of science and AI ethics, in particular on the use of machine learning in development economics.
Talk: Stop Predicting? Machine Learning and Measurement
Abstract: Motivated by machine learning's rather narrow focus on predictive accuracy and its problematic framing of “ground truth” data, recent scholarship has connected concepts and practices from measurement to applied machine learning and AI ethics (Jacobs & Wallach, 2021; Mussgnug, 2022; Tal, 2023). This talk will give an overview of this emerging domain of research while underscoring its two different orientations.
On one side of the debate, authors have emphasized the benefits of bringing approaches from metrology (the science of measurement) to bear on applied machine learning research. For instance, Eran Tal (2023) has advocated for the adoption of a metrological notion of accuracy to address a particular source of bias in machine learning applications to health care. Relatedly, Jacobs and Wallach (2021) have presented measurement validation as a framework for disentangling debates surrounding algorithmic fairness.
On the other side of the debate, I propose a much more radical response to some of the epistemic and ethical issues surrounding the reliability and fairness of machine learning applications. Rather than bringing metrology to bear on applied machine learning, I argue that certain machine learning applications should be understood, and thus developed, as forms of measurement themselves. In this talk, I will justify this position and develop the argument further. Ultimately, I argue that applied machine learning should stop “predicting” (almost) entirely.