DIMACS/TRIPODS Workshop on Optimization and Machine Learning

Stochastic Methods for Non-smooth Non-convex Optimization

August 15, 2018, 3:00 PM - 3:30 PM

Location:

Iacocca Hall

Lehigh University

Bethlehem PA

Damek Davis, Cornell University

We prove that the proximal stochastic subgradient method, applied to a weakly convex problem (i.e., a difference of a convex function and a quadratic), drives the gradient of the Moreau envelope to zero at the rate $O(k^{-1/4})$. This class of problems captures a variety of nonsmooth, nonconvex formulations now widespread in data science. As a consequence, we obtain the long-sought convergence rate of the standard projected stochastic gradient method for minimizing a smooth nonconvex function on a closed convex set. In the talk, I will also highlight other stochastic methods for which we can establish similar guarantees.
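For context on the stationarity measure: the Moreau envelope of $\varphi$ is $\varphi_\lambda(x) = \min_y \{ \varphi(y) + \tfrac{1}{2\lambda}\|y - x\|^2 \}$, and the norm of its gradient quantifies near-stationarity for weakly convex problems. As a rough illustration of the projected stochastic gradient method named in the abstract, below is a minimal Python sketch on a toy smooth nonconvex objective over a Euclidean ball. The objective, noise model, and constant step size of order $1/\sqrt{K}$ are illustrative assumptions, not the paper's specific setting.

```python
import numpy as np

def proj_ball(x, radius):
    """Euclidean projection onto the closed ball {y : ||y|| <= radius}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def projected_stochastic_gradient(grad_sample, x0, n_steps, proj, rng):
    """Projected stochastic gradient method:
        x_{k+1} = proj(x_k - alpha * g_k),  g_k an unbiased gradient sample.
    A constant step size of order 1/sqrt(n_steps) is one choice consistent
    with the O(k^{-1/4}) regime on the Moreau envelope's gradient."""
    x = np.asarray(x0, dtype=float).copy()
    alpha = 1.0 / np.sqrt(n_steps)
    for _ in range(n_steps):
        g = grad_sample(x, rng)
        x = proj(x - alpha * g)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 10
    # Toy smooth nonconvex objective (hypothetical): f(x) = -sum(cos(x_i)),
    # minimized over the ball of radius 3, with additive gradient noise.
    def grad_sample(x, rng):
        return np.sin(x) + 0.1 * rng.standard_normal(x.shape)

    x_final = projected_stochastic_gradient(
        grad_sample,
        x0=rng.uniform(-2.0, 2.0, size=d),
        n_steps=5000,
        proj=lambda z: proj_ball(z, radius=3.0),
        rng=rng,
    )
    print("||grad f(x_final)|| =", np.linalg.norm(np.sin(x_final)))
```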