DIMACS Workshop on ADMM and Proximal Splitting Methods in Optimization

Selective Linearization for Multi-block Statistical Learning Problems

June 13, 2018, 9:30 AM - 10:00 AM



Rutgers University

CoRE Building

96 Frelinghuysen Road

Piscataway, NJ 08854

Yu Du, University of Colorado, Denver

We consider the problem of minimizing a sum of several convex non-smooth functions. In this talk, we introduce the selective linearization method, which iteratively linearizes all but one of the functions and employs simple proximal steps. The algorithm is a form of multiple operator splitting in which the order of processing partial functions is not fixed, but rather determined in the course of calculations. It is one of the first operator-splitting methods that is globally convergent for an arbitrary number of operators without artificial duplication of variables. Global convergence is proved and estimates of the convergence rate are derived. Specifically, under a strong convexity condition, the number of iterations needed to achieve solution accuracy ε is of order O(ln(1/ε)/ε). We also study the optimal tuning parameters in the improvement test of the algorithm. We report results of extensive comparison experiments on statistical learning problems such as large-scale fused lasso regularization, overlapping group lasso, and regularized support vector machines. The numerical results demonstrate the efficacy and accuracy of the method.
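The iteration described above can be sketched for the simplest two-block case. The following Python sketch is an illustration only, not the authors' implementation: it minimizes F(x) = |x| + |x - 1|, at each step linearizing one block at the current point via a subgradient and taking a proximal step on the other, with an improvement test (parameter beta, assumed here) deciding between a descent step and switching which block is linearized. The real method handles many blocks and maintains accumulated linearizations; here a single fresh subgradient stands in for that model.

```python
def soft_threshold(v, t, center=0.0):
    """Proximal operator of y -> t*|y - center| evaluated at v."""
    d = v - center
    s = (d > 0) - (d < 0)  # sign of d
    return center + s * max(abs(d) - t, 0.0)

def slin_two_blocks(x0, rho=1.0, beta=0.5, iters=100, tol=1e-10):
    """Hedged sketch of selective linearization for F(x) = |x| + |x - 1|.

    k indexes the block treated exactly by a proximal step; the other
    block is replaced by its linearization at the current point x.
    """
    fs = [lambda z: abs(z), lambda z: abs(z - 1.0)]
    centers = [0.0, 1.0]              # kink locations of the two blocks
    F = lambda z: fs[0](z) + fs[1](z)
    x, k = x0, 0
    for _ in range(iters):
        j = 1 - k                     # block to linearize at x
        # a subgradient g in the subdifferential of fs[j] at x
        g = 0.0 if x == centers[j] else (1.0 if x > centers[j] else -1.0)
        # proximal step: argmin_y fs[k](y) + g*y + (1/(2*rho))*(y - x)^2
        y = soft_threshold(x - rho * g, rho, center=centers[k])
        # value of the partially linearized model at y
        model_y = fs[k](y) + fs[j](x) + g * (y - x)
        predicted = F(x) - model_y    # decrease predicted by the model
        if predicted <= tol:
            break                     # model is (nearly) exact: stop
        if F(x) - F(y) >= beta * predicted:
            x = y                     # descent step: accept trial point
        else:
            k = j                     # null step: switch the linearized block
    return x
```

Any x in [0, 1] is optimal with F(x) = 1; starting from x0 = 3.0 the sketch reaches x = 1.0 in a few iterations, with the improvement test triggering one switch of the linearized block along the way.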
