TRIPODS (Transdisciplinary Research in Principles of Data Science) Seminar Series

Instance Dependent Sample Complexity Bounds for Interactive Learning

April 22, 2022, 4:00 PM - 5:00 PM

Location:

Online Event

Kevin Jamieson, University of Washington

The sample complexity of an interactive learning problem, such as multi-armed bandits or reinforcement learning, is the number of interactions with nature required to output an answer (e.g., a recommended arm or policy) that is close to optimal with high probability. While minimax guarantees can be useful rules of thumb to gauge the difficulty of a problem class, algorithms optimized for this worst-case metric often fail to adapt to “easy” instances where fewer samples suffice. In this talk, I will highlight some of my group’s work on algorithms that obtain optimal, finite-time, instance-dependent sample complexities that scale with the true difficulty of the particular instance, rather than just the worst case. In particular, I will describe a unifying experimental-design-based approach used to obtain such algorithms for best-arm identification for linear bandits, contextual bandits with arbitrary policy classes, and smooth losses for linear dynamical systems.
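
To make "instance-dependent sample complexity" concrete, here is a minimal sketch of successive elimination for best-arm identification, a textbook baseline (not the experimental-design approach from the talk). Its total number of pulls scales with the per-arm gaps Δ_i = μ* − μ_i of the particular instance, so easy instances with large gaps finish quickly. The function name, parameters, and the Bernoulli-reward model are illustrative assumptions.

```python
import numpy as np

def successive_elimination(means, delta=0.05, rng=None):
    """Identify the best arm of a Bernoulli bandit via successive elimination.

    Pulls every active arm once per round and eliminates arms whose
    upper confidence bound falls below the leader's lower bound, so the
    number of pulls of arm i scales roughly as 1 / gap_i^2 for this
    instance, not with a worst-case bound.
    """
    rng = np.random.default_rng(rng)
    k = len(means)
    active = list(range(k))
    counts = np.zeros(k)
    sums = np.zeros(k)
    t = 0
    while len(active) > 1:
        t += 1
        for i in active:
            sums[i] += float(rng.random() < means[i])  # Bernoulli reward
            counts[i] += 1
        # Anytime confidence radius (union bound over arms and rounds).
        radius = np.sqrt(np.log(4 * k * t * t / delta) / (2 * t))
        emp = sums / np.maximum(counts, 1)
        leader = max(emp[i] for i in active)
        active = [i for i in active if emp[i] + radius >= leader - radius]
    return active[0], int(counts.sum())
```

On an easy instance such as `[0.9, 0.1, 0.1]` the suboptimal arms are eliminated after a handful of rounds, while a hard instance with gaps near zero forces many more pulls, which is exactly the instance-dependent behavior the abstract contrasts with worst-case guarantees.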

Bio:
Kevin Jamieson is an Assistant Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and is the Guestrin Endowed Professor in Artificial Intelligence and Machine Learning. He received his B.S. in 2009 from the University of Washington, his M.S. in 2010 from Columbia University, and his Ph.D. in 2015 from the University of Wisconsin–Madison under the advisement of Robert Nowak, all in electrical engineering. He returned to the University of Washington as faculty in 2017 after a postdoc with Benjamin Recht at the University of California, Berkeley. Jamieson's work has been recognized by an NSF CAREER award and an Amazon Faculty Research award. His research explores how to leverage already-collected data to inform what future measurements to make next, in a closed loop. The work ranges from theory to practical algorithms with guarantees to open-source machine learning systems, and has been adopted in a range of applications, including measuring human perception in psychology studies, adaptive A/B/n testing in dynamic web environments, numerical optimization, and efficient tuning of hyperparameters for deep neural networks.

Link to video: https://youtu.be/BLpT6QSfb9Y

**Please note the special time: 4:00 PM

Presented Via Zoom: https://rutgers.zoom.us/j/94622460207

Meeting ID: 946 2246 0207

Password: 911959