DIMACS/TRIPODS Workshop on Optimization and Machine Learning

Plenary talk: Representation, Optimization and Generalization Properties of Deep Neural Networks

August 13, 2018, 1:30 PM - 2:30 PM

Location:

Iacocca Hall

Lehigh University

Bethlehem PA

Peter Bartlett, University of California, Berkeley

Deep neural networks have improved state-of-the-art performance on prediction problems across an impressive range of application areas. This talk describes some recent results in three directions. First, we investigate the impact of depth on the representational properties of deep residual networks, which compute near-identity maps at each layer, showing how their representational power improves with depth and that the functional optimization landscape has the desirable property that stationary points are optimal. Second, we study the implications for optimization in deep linear networks, showing how the success of a family of gradient descent algorithms that regularize towards the identity function depends on a positivity condition on the regression function. Third, we consider how the performance of deep networks on training data compares to their predictive accuracy, demonstrating deviation bounds that scale with a certain "spectral complexity" and comparing the behavior of these bounds with the observed performance of these networks on practical problems.
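Two of the ideas in the abstract can be made concrete with a small numerical sketch (not from the talk itself; the architecture, the step size `eps`, and the per-layer nonlinearity are illustrative assumptions): a residual network whose layers each compute a near-identity map, and a product-of-spectral-norms quantity of the kind that appears in "spectral complexity" bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_layer(x, W, eps=0.1):
    # A near-identity map: the layer returns its input plus a small
    # perturbation, so the Jacobian is close to the identity matrix.
    return x + eps * np.tanh(W @ x)

d, depth = 16, 10
# Illustrative random weights, scaled so ||W x|| stays moderate.
weights = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(depth)]

x = rng.standard_normal(d)
h = x
for W in weights:
    h = residual_layer(h, W)

# The composed map stays close to the identity: the relative deviation
# of the output from the input is small even after many layers.
deviation = np.linalg.norm(h - x) / np.linalg.norm(x)
print(deviation)

# A quantity in the spirit of "spectral complexity": bounds of this kind
# scale with the product of per-layer spectral norms. For a residual
# layer, the Jacobian's spectral norm is at most 1 + eps * ||W||_2,
# so the product stays controlled when eps is small.
spectral_product = np.prod([1 + 0.1 * np.linalg.norm(W, 2) for W in weights])
print(spectral_product)
```

The point of the sketch is that near-identity layers keep both the forward map and the layer-wise spectral norms close to 1, which is the mechanism behind both the representational and the generalization results summarized above.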

Joint work with Steve Evans, Dylan Foster, Dave Helmbold, Phil Long, and Matus Telgarsky.