DIMACS Workshop on ADMM and Proximal Splitting Methods in Optimization

Relaxed Inertial Proximal Algorithms for Monotone Inclusions

June 12, 2018, 9:40 AM - 10:10 AM

Location:

DIMACS Center

Rutgers University

CoRE Building

96 Frelinghuysen Road

Piscataway, NJ 08854

Hedy Attouch, University of Montpellier

To meet the challenges of large-scale optimization, considerable effort has been devoted in recent years to the study of first-order splitting algorithms and their acceleration. In this lecture, based on recent works with A. Cabot \cite{AC1, AC2} and J. Peypouquet \cite{AP1}, we address these issues for general (structured) monotone inclusions. In doing so, we aim to provide a unified approach to solving, by fast methods, convex optimization problems, convex-concave saddle value problems, and fixed-point problems for nonexpansive mappings. Our approach is based on the link between dissipative dynamical systems and optimization algorithms.

Given a real Hilbert space $\mathcal H$ and a maximally monotone operator $A: \mathcal H \rightarrow 2^{\mathcal H}$, we first study the asymptotic behavior, as time goes to $+\infty$, of the trajectories of the second-order differential equation
$$
\ddot{x}(t) + \frac{\alpha}{t}\, \dot{x}(t) + A_{\lambda(t)}(x(t)) = 0, \qquad t > t_0 > 0,
$$
where $\alpha$ is a positive damping parameter and $A_{\lambda}$ is the Yosida regularization of $A$ with index $\lambda > 0$. The Lipschitz continuity of the Yosida approximation makes the associated Cauchy problem well posed. A proper tuning of the Yosida parameter $\lambda(t)$ and of the damping parameter $\alpha$ allows us to prove the weak convergence of the trajectories to zeroes of the operator $A$. By means of a convenient finite-difference time discretization of this differential equation, we introduce a new {\it Relaxed Inertial Proximal Algorithm} for which similar convergence properties hold.
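To illustrate the link between the dynamics and the algorithm (a sketch only; the precise scheme and conditions are those of the works cited above), one natural discretization evaluates the Yosida term at the extrapolated point $y_k = x_k + \left(1 - \frac{\alpha}{k}\right)(x_k - x_{k-1})$:
$$
x_{k+1} - 2x_k + x_{k-1} + \frac{\alpha}{k}\,(x_k - x_{k-1}) + A_{\lambda_k}(y_k) = 0.
$$
Since $x_{k+1} - 2x_k + x_{k-1} + \frac{\alpha}{k}(x_k - x_{k-1}) = x_{k+1} - y_k$ and $A_{\lambda_k}(y_k) = \frac{1}{\lambda_k}\big(y_k - J_{\lambda_k A}(y_k)\big)$, this rearranges to
$$
x_{k+1} = \Big(1 - \tfrac{1}{\lambda_k}\Big)\, y_k + \tfrac{1}{\lambda_k}\, J_{\lambda_k A}(y_k),
$$
that is, a proximal step at the extrapolated point, relaxed with parameter $1/\lambda_k$.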

When $A$ is the subdifferential of a closed convex function, we obtain fast convergence of the values, in line with the accelerated method of Nesterov.

Next, we extend this study to the Relaxed Inertial Proximal Algorithm with general inertial coefficients $\alpha_k$ and relaxation coefficients $\rho_k$:

$$
\left\{
\begin{array}{l}
y_k = x_k + \alpha_k\,(x_k - x_{k-1}) \\[1mm]
x_{k+1} = (1 - \rho_k)\, y_k + \rho_k\, J_{\mu_k A}(y_k),
\end{array}
\right.
$$

where $J_{\mu A} = \left( I + \mu A \right)^{-1}$ is the {\it resolvent} of $A$ with index $\mu > 0$.

We obtain convergence results under conditions that involve the coefficients $\alpha_k$, $\rho_k$, and $\mu_k$ jointly.
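
For readers who prefer code, here is a minimal sketch of this iteration (not taken from the talk), applied to $A = \partial \|\cdot\|_1$, whose resolvent is componentwise soft-thresholding; the constant coefficient values are illustrative placeholders, not the conditions of the convergence theorem.

import numpy as np

def soft_threshold(y, mu):
    # Resolvent J_{mu A} for A = subdifferential of the l1 norm (soft-thresholding).
    return np.sign(y) * np.maximum(np.abs(y) - mu, 0.0)

def ripa(x0, resolvent, alpha=0.3, rho=0.9, mu=1.0, iters=200):
    # Relaxed Inertial Proximal Algorithm with constant (illustrative) coefficients.
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + alpha * (x - x_prev)                            # inertial extrapolation
        x_prev, x = x, (1 - rho) * y + rho * resolvent(y, mu)   # relaxed proximal step
    return x

print(ripa(np.array([5.0, -3.0, 2.0]), soft_threshold))  # approaches 0, the unique zero of A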

Then, we consider structured monotone problems governed by the sum of a prox-friendly maximally monotone operator and a cocoercive operator. We study a Relaxed Inertial Forward-Backward algorithm (RIFB) for which we obtain similar convergence results. Finally, we introduce an inertial proximal ADMM algorithm for solving convex structured minimization problems with linear constraints. Numerical experiments confirm the effectiveness of the algorithm.
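
As with the proximal scheme above, a minimal forward-backward sketch (again illustrative, with assumed random data and untuned coefficients) takes a forward gradient step on the smooth, cocoercive part and a backward resolvent step on the prox-friendly part, here on a toy LASSO problem.

import numpy as np

def rifb(x0, grad_B, prox_A, alpha=0.3, rho=1.0, mu=0.01, iters=500):
    # Relaxed Inertial Forward-Backward: forward (gradient) step on the cocoercive part B,
    # backward (resolvent/prox) step on the prox-friendly part A, then relaxation.
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + alpha * (x - x_prev)              # inertial extrapolation
        z = prox_A(y - mu * grad_B(y), mu)        # forward-backward step
        x_prev, x = x, (1 - rho) * y + rho * z    # relaxation
    return x

# Toy LASSO instance: minimize 0.5*||M x - b||^2 + tau*||x||_1 (random, illustrative data).
rng = np.random.default_rng(0)
M, b, tau = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.5
grad_B = lambda x: M.T @ (M @ x - b)                                        # gradient of the smooth part
prox_A = lambda v, mu: np.sign(v) * np.maximum(np.abs(v) - tau * mu, 0.0)   # resolvent of tau*||.||_1
mu = 1.0 / np.linalg.norm(M, 2) ** 2            # step size chosen within the cocoercivity range
print(rifb(np.zeros(10), grad_B, prox_A, mu=mu))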

 

Slides     Video