Elissa Redmiles is a Ph.D. Candidate in Computer Science at the University of Maryland and has been a visiting researcher with the Max Planck Institute for Software Systems and the University of Zurich. Elissa's research interests are broadly in the areas of security and privacy. She uses computational, economic, and social science methods to conduct research on behavioral security. Elissa seeks to understand users' security and privacy decision-making processes, specifically to investigate inequalities that arise in these processes and to mitigate those inequalities through the design of systems that facilitate safety equitably across users. Elissa is the recipient of an NSF Graduate Research Fellowship, a National Defense Science and Engineering Graduate Fellowship, and a Facebook Fellowship. Her work has appeared in popular press publications such as Scientific American, Business Insider, Newsweek, and CNET and has been recognized with the John Karat Usable Privacy and Security Student Research Award, a Distinguished Paper Award at USENIX Security 2018, and a University of Maryland Outstanding Graduate Student Award.
Talk: Security for All: Modeling Structural Inequities to Design More Secure Systems
Abstract: Users often fall for phishing emails, reuse simple passwords, and fail to effectively utilize "provably" secure systems. These behaviors expose users to significant harm and frustrate industry practitioners and security researchers alike. As the consequences of security breaches grow ever more grave, it is important to study why humans behave seemingly irrationally. In this talk, I will illustrate how modeling the effects of structural inequities -- variance in skill, socioeconomic status, culture, and gender identity -- can both explain apparent irrationality in users' security behavior and offer tangible improvements in industry systems. Modeling and mitigating security inequities requires combining economic, data-scientific, and social science methods to develop new tools for systematically understanding and mitigating insecure behavior.
Through novel experimental methodology, I empirically show strong evidence of bounded rationality in security behavior: users make mathematically modelable tradeoffs between the protection offered by security behaviors and the costs of practicing those behaviors, which even in a highly usable system may outweigh the benefits, especially for less-resourced users. These findings emphasize the need for industry systems that account for structural inequities and accommodate behavioral variance between users, rather than one-size-fits-all security solutions. More broadly, my techniques for modeling and accounting for inequities have offered key insights in growing technical areas beyond security, including algorithmic fairness.