A Proximal-Gradient Method for Constrained Optimization

We present a new algorithm for solving optimization problems whose objective function is the sum of a smooth function and a (potentially) nonsmooth regularization function, subject to nonlinear equality constraints. The algorithm may be viewed as an extension of the well-known proximal-gradient method, which is applicable when no constraints are present. To account for the nonlinear equality constraints, we combine a decomposition procedure for computing trial steps with an exact merit function for determining trial-step acceptance. Under common assumptions, we show that both the proximal parameter and the merit-function parameter eventually remain fixed, and we then prove a worst-case complexity bound on the number of iterations required to compute an iterate satisfying approximate first-order optimality conditions for a given tolerance. Our preliminary numerical results indicate that the approach is promising, especially in terms of returning structured approximate solutions (e.g., sparse solutions when a one-norm regularizer is used).
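
In this setting the problem takes the form: minimize f(x) + r(x) subject to c(x) = 0, where f is smooth, r is a (potentially) nonsmooth regularizer, and c encodes the nonlinear equality constraints. As a rough illustration of the two ingredients named in the abstract, the Python sketch below performs one iteration: a Gauss-Newton least-squares step toward the linearized constraints (one possible decomposition; the paper's specific procedure is not assumed here), a proximal-gradient step on the composite objective, and an acceptance test against an ell-1-penalty merit function. This is a minimal sketch, not the paper's algorithm: the function names (prox_l1, merit, step), the choice r(x) = ||x||_1, and the simple decrease test are all illustrative assumptions.

    import numpy as np

    def prox_l1(v, weight):
        # Soft-thresholding: proximal operator of weight * ||.||_1
        # (assumes the regularizer r is a one-norm; an assumption here).
        return np.sign(v) * np.maximum(np.abs(v) - weight, 0.0)

    def merit(x, f, r, c, tau):
        # One common exact merit function: f(x) + r(x) + tau * ||c(x)||_1.
        return f(x) + r(x) + tau * np.linalg.norm(c(x), 1)

    def step(x, f, grad_f, r, c, jac_c, alpha, tau,
             shrink=0.5, max_backtracks=30):
        # Normal component: least-squares (Gauss-Newton) step v with
        # jac_c(x) @ v ~= -c(x), moving toward constraint feasibility.
        J, cx = jac_c(x), c(x)
        v = -np.linalg.lstsq(J, cx, rcond=None)[0]
        y = x + v
        for _ in range(max_backtracks):
            # Proximal-gradient step on the composite objective from y.
            trial = prox_l1(y - alpha * grad_f(y), alpha)
            # Accept if the merit function decreases (a practical method
            # would require sufficient decrease, not just any decrease).
            if merit(trial, f, r, c, tau) < merit(x, f, r, c, tau):
                return trial, alpha
            alpha *= shrink  # reject trial step; shrink proximal parameter
        return x, alpha

The backtracking loop that shrinks alpha after a rejected trial step loosely echoes the abstract's result that the proximal parameter eventually remains fixed: once trial steps at a given alpha are consistently accepted, no further shrinking occurs.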
