Efficiency of coordinate descent methods on huge-scale optimization problems

In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial updates of the decision variables. For these methods, we prove global estimates for the rate of convergence. …
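To illustrate the partial-update idea, here is a generic sketch of randomized coordinate descent on an assumed quadratic objective; the objective, step sizes, and names are illustrative, not the paper's exact method.

    import numpy as np

    # Generic randomized coordinate descent on f(x) = 0.5*x'Ax - b'x
    # (illustrative objective; the paper's setting and step rules may differ).
    rng = np.random.default_rng(0)
    n = 1000
    M = rng.standard_normal((n, n))
    A = M.T @ M / n + np.eye(n)   # symmetric positive definite
    b = rng.standard_normal(n)
    L = np.diag(A)                # coordinate-wise curvature (Lipschitz) constants

    x = np.zeros(n)
    for _ in range(20 * n):
        i = rng.integers(n)       # pick one coordinate uniformly at random
        g_i = A[i] @ x - b[i]     # i-th partial derivative
        x[i] -= g_i / L[i]        # exact minimization along coordinate i

Each iteration updates a single coordinate; with sparse or structured data the partial derivative can be maintained incrementally, keeping the per-iteration cost far below that of a full gradient step.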

$L^p$-norms, log-barriers and Cramer transform in optimization

We show that the Laplace approximation of a supremum by $L^p$-norms has interesting consequences in optimization. For instance, the logarithmic barrier functions (LBF) of a primal convex problem $P$ and its dual $P^*$ appear naturally when using this simple approximation technique for the value function $g$ of $P$ or its Legendre-Fenchel conjugate $g^*$. In addition, …
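A standard instance of this approximation technique (not the paper's specific construction): for $x\in\R^n$ with nonnegative entries,

$$ \max_i x_i = \|x\|_\infty \;\le\; \|x\|_p = \Big(\sum_{i=1}^n x_i^p\Big)^{1/p} \;\le\; n^{1/p}\,\|x\|_\infty, $$

so the smooth $L^p$-norm converges to the supremum as $p\to\infty$, with the gap controlled by the factor $n^{1/p}\to 1$.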

Stability of error bounds for convex constraint systems in Banach spaces

This paper studies the stability of error bounds for convex constraint systems in Banach spaces. We show that certain known sufficient conditions for local and global error bounds in fact guarantee error bounds for whole families of functions that are, in an appropriate sense, small perturbations of the given one. A single inequality as well as semi-infinite constraint systems …
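For reference, the notion at stake: given a convex function $f$ and the set $C=\{x\mid f(x)\le 0\}$, an error bound with constant $\tau>0$ asserts

$$ d(x,C)\;\le\;\tau\,[f(x)]_+ $$

for all $x$ (globally) or for all $x$ near a reference point (locally), where $d(x,C)$ is the distance from $x$ to $C$ and $[f(x)]_+=\max\{f(x),0\}$; stability asks whether such an inequality survives small perturbations of $f$.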

A Fast Algorithm for Total Variation Image Reconstruction from Random Projections

Total variation (TV) regularization is popular in image restoration and reconstruction because of its ability to preserve image edges. To date, most research on TV models has concentrated on image restoration from blurry and noisy observations, while work on image reconstruction from random projections remains comparatively scarce. In this paper, we propose, analyze, and test …
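A typical model of this kind (an illustrative formulation; the paper's precise model may differ):

$$ \min_u\; \mathrm{TV}(u)+\frac{\mu}{2}\,\|Au-b\|_2^2, \qquad \mathrm{TV}(u)=\sum_i\|D_i u\|_2, $$

where $A$ encodes the random projections, $b$ is the observed data, $D_i u$ is the discrete gradient of the image $u$ at pixel $i$, and $\mu>0$ balances data fidelity against edge-preserving regularization.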

A Feasible Directions Method for Nonsmooth Convex Optimization

We propose a new technique for minimizing convex functions that are not necessarily smooth. Our approach employs an equivalent constrained optimization problem and approximate linear programs obtained from cutting planes. At each iteration a search direction and a step length are computed. If the step length is deemed "non-serious", a cutting plane is added and …
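The "non-serious" test and the feasible-directions construction are specific to the proposed method; the following is only a minimal sketch of the underlying cutting-plane model, in which each subgradient contributes one linear inequality to an LP over $(x,v)$ (the test function, box bounds, and iteration count are illustrative assumptions):

    import numpy as np
    from scipy.optimize import linprog

    # Kelley-style cutting-plane sketch for min f(x) over a box, with
    # f(x) = max(|x1|, |x2| + 0.5) as an illustrative nonsmooth convex function.
    def f_and_subgrad(x):
        if abs(x[0]) >= abs(x[1]) + 0.5:
            return abs(x[0]), np.array([np.sign(x[0]), 0.0])
        return abs(x[1]) + 0.5, np.array([0.0, np.sign(x[1])])

    cuts_A, cuts_b = [], []
    x = np.array([2.0, -1.5])
    for _ in range(30):
        fx, g = f_and_subgrad(x)
        # new cut: v >= fx + g.(x - xk), rewritten as g.x - v <= g.xk - fx
        cuts_A.append(np.append(g, -1.0))
        cuts_b.append(g @ x - fx)
        # minimize v subject to all cuts; v is bounded below to keep the LP solvable
        res = linprog(c=[0.0, 0.0, 1.0],
                      A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                      bounds=[(-3, 3), (-3, 3), (-10, None)])
        x = res.x[:2]

Minimizing $v$ over the accumulated cuts yields the next trial point; a bundle or feasible-directions method additionally stabilizes this model instead of trusting it globally.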

Convergence to the optimal value for barrier methods combined with Hessian Riemannian gradient flows and generalized proximal algorithms

We consider the problem $\min_{x\in\R^n}\{f(x)\mid Ax=b, \ x\in\overline{C},\ g_j(x)\le0,\ j=1,\ldots,s\}$, where $b\in\R^m$, $A\in\R^{m\times n}$ is a full-rank matrix, $\overline{C}$ is the closure of a nonempty, open and convex subset $C$ of $\R^n$, and $g_j(\cdot)$, $j=1,\ldots,s$, are nonlinear convex functions. Our strategy is first to introduce a barrier-type penalty for the constraints $g_j(x)\le0$, …
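For instance, with the classical logarithmic barrier (one standard choice of barrier-type penalty; the paper's admissible class may be broader), the penalized subproblems read

$$ \min_x \Big\{\, f(x)-\mu\sum_{j=1}^{s}\log\big(-g_j(x)\big) \;\Big|\; Ax=b,\ x\in\overline{C} \,\Big\}, \qquad \mu\downarrow 0, $$

whose minimizers approach the optimal value of the original problem as the penalty parameter $\mu$ is driven to zero.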

Recovering low-rank and sparse components of matrices from incomplete and noisy observations

Many applications arising in a variety of fields can be cast as the task of recovering the low-rank and sparse components of a given matrix. Recently, it has been discovered that this NP-hard task can be accomplished, both theoretically and numerically, by heuristically solving a convex relaxation problem in which the widely acknowledged nuclear norm and …
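In its now-standard form (notation assumed here), the convex relaxation reads

$$ \min_{L,S}\; \|L\|_* + \lambda\,\|S\|_1 \quad \text{s.t.}\quad \big\|\mathcal{P}_\Omega(L+S-M)\big\|_F \le \delta, $$

where the nuclear norm $\|L\|_*$ sums the singular values of $L$, $\|S\|_1$ is the entrywise $\ell_1$-norm, $\mathcal{P}_\Omega$ restricts to the set $\Omega$ of observed entries, and $\delta\ge0$ accounts for observation noise.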

Fast Multiple Splitting Algorithms for Convex Optimization

We present in this paper two different classes of general $K$-splitting algorithms for solving finite-dimensional convex optimization problems. Under the assumption that the function being minimized has a Lipschitz continuous gradient, we prove that the number of iterations needed by the first class of algorithms to obtain an $\epsilon$-optimal solution is $O(1/\epsilon)$. The algorithms in …
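Reading "$K$-splitting" in the standard way, the objective decomposes as a sum of $K$ convex functions, and an $\epsilon$-optimal solution $x_k$ is one with objective value within $\epsilon$ of the optimum:

$$ f=\sum_{i=1}^{K} f_i, \qquad f(x_k)-\min_x f(x) \le \epsilon, $$

so an $O(1/\epsilon)$ bound means the number of iterations grows at most linearly in $1/\epsilon$.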

Fast Alternating Linearization Methods for Minimizing the Sum of Two Convex Functions

We present in this paper first-order alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, while our accelerated (i.e., fast) versions of them require at most $O(1/\sqrt{\epsilon})$ iterations, with little change in …
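The alternating linearization scheme is the paper's contribution; as a reference point for the same problem class $\min_x f(x)+g(x)$, with $f$ smooth and $g$ having a cheap proximal map, here is a minimal proximal-gradient sketch (the least-squares $f$ and $\ell_1$ choice of $g$ are illustrative assumptions):

    import numpy as np

    # Proximal-gradient sketch for min f(x) + g(x), with f smooth, g "simple":
    # here f(x) = 0.5*||Ax - b||^2 and g(x) = lam*||x||_1 (illustrative choices).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    b = rng.standard_normal(40)
    lam = 0.1
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for L = Lipschitz const. of grad f

    x = np.zeros(100)
    for _ in range(500):
        grad = A.T @ (A @ x - b)             # gradient of the smooth part
        z = x - step * grad                  # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1

An accelerated variant in the FISTA mold adds a momentum step and improves the iteration bound from $O(1/\epsilon)$ to $O(1/\sqrt{\epsilon})$, mirroring the basic/fast dichotomy described above.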

A Sequential Quadratic Programming Algorithm for Nonconvex, Nonsmooth Constrained Optimization

We consider optimization problems whose objective and constraint functions may be nonconvex and nonsmooth. Problems of this type arise in important applications, many having solutions at points of nondifferentiability of the problem functions. We present a line search algorithm for the case in which the objective and constraint functions are locally Lipschitz and continuously differentiable on …
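Written out, the problem class is (notation assumed)

$$ \min_x\; f(x) \quad \text{s.t.}\quad c_i(x)\le 0,\quad i=1,\dots,m, $$

where $f$ and the $c_i$ are locally Lipschitz and may be both nonconvex and nonsmooth.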