Please join us for colloquium guest speaker Dean Foster. Dean Foster is the Marie and Joseph Melone Professor of Statistics at the Wharton School of the University of Pennsylvania. Dean has pioneered two areas in game theory: stochastic evolutionary game dynamics and calibrated learning; in both cases he developed the theory needed to show convergence to equilibrium. The calibrated learning strategies grew out of his work on individual sequences. In his work with Rakesh Vohra he introduced the notions of no-internal-regret and calibration; it is these learning rules that can be shown to converge to correlated equilibrium. Much of his current work is on statistical approaches to NLP problems and other issues in big data. He has developed several algorithms for fast variable selection in regression and has proven that they have nice theoretical properties. He has used vector models for words to make them easier to manipulate with statistical machinery. These often rely on spectral techniques; for example, he has used them to fit HMMs and probabilistic CFGs.
Title: Calibration: Games, Humans and Big Data
Abstract: Calibration means that it should rain half the time on those days when a weather forecaster claims the chance of rain is 50%. This simple form of unbiasedness has connections to computer science, behavioral decision making, and statistics. I'll describe some simple methods with strong guarantees of calibration: they work even when nature is trying to fool them. I'll mention how these methods can be used to improve linear regressions run on large data sets. I'll touch on some of the implications for what we can expect of animal and human behavior. Lastly, I will show that this idea has implications for large network games.
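As a minimal illustration of the calibration notion in the abstract (not material from the talk itself), the sketch below checks forecasts empirically: for each distinct forecast probability, it compares that probability to the observed frequency of rain on the days it was issued. The data and function name are hypothetical.

```python
# Hypothetical sketch: empirical calibration check for a weather forecaster.
# A forecaster is calibrated if, among days assigned probability p, it in
# fact rained a fraction p of the time.
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Map each distinct forecast probability to (empirical frequency, count)."""
    totals = defaultdict(int)   # days on which each probability was forecast
    hits = defaultdict(int)     # rainy days among those
    for p, rained in zip(forecasts, outcomes):
        totals[p] += 1
        hits[p] += int(rained)
    return {p: (hits[p] / totals[p], totals[p]) for p in totals}

# Hypothetical data: the 0.5 forecasts are perfectly calibrated (it rained
# on exactly half of those days), while the 0.9 forecasts are not.
forecasts = [0.5, 0.5, 0.5, 0.5, 0.9, 0.9]
outcomes  = [1,   0,   1,   0,   0,   0]
print(calibration_table(forecasts, outcomes))
```

The guarantees mentioned in the abstract are stronger than this after-the-fact check: the forecasting methods achieve calibration in the limit even against an adversarially chosen outcome sequence.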