We develop two new proximal alternating penalty algorithms to solve a wide class of constrained convex optimization problems. Our approach relies on a novel combination of the classical quadratic penalty, alternating minimization, Nesterov’s acceleration, and homotopy techniques. The first algorithm is designed to solve generic and possibly nonsmooth constrained convex problems without requiring any Lipschitz gradient continuity or strong convexity, while achieving the best-known $\BigO{1/k}$-convergence rate in the non-ergodic sense, where $k$ is the iteration counter. The second algorithm is also designed to solve non-strongly convex problems, but with one strongly convex objective term. This algorithm achieves the $\BigO{1/k^2}$-convergence rate on the primal constrained problem. Such a rate is obtained in two cases: (i)~averaging only over the iterate sequence of the strongly convex term, or (ii)~using two proximal operators of this term without averaging. In both algorithms, we allow the second subproblem to be linearized so that the proximal operator of the corresponding objective term can be used. We then customize our methods to solve different convex problems, leading to new variants. As a byproduct, these variants preserve the same convergence guarantees as our main algorithms. Finally, we verify our theoretical development via different numerical examples and compare our methods with some existing state-of-the-art algorithms.
Citation: manuscript, Nov. 2017 (STOR-UNC).