Second-order variational analysis and characterizations of tilt-stable optimal solutions in finite and infinite dimensions

The paper is devoted to developing second-order tools of variational analysis and their applications to characterizing tilt-stable local minimizers of constrained optimization problems in finite-dimensional and infinite-dimensional spaces. The importance of tilt stability has been well recognized from both theoretical and numerical aspects of optimization. Based on second-order generalized differentiation, we obtain qualitative and quantitative … Read more

Epi-convergent Smoothing with Applications to Convex Composite Functions

Smoothing methods have become part of the standard tool set for the study and solution of nondifferentiable and constrained optimization problems as well as a range of other variational and equilibrium problems. In this note we synthesize and extend recent results due to Beck and Teboulle on infimal convolution smoothing for convex functions with those … Read more
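A classical instance of the infimal convolution smoothing discussed here is the Moreau envelope, which for the absolute value yields the Huber function in closed form. The sketch below is illustrative only and is not taken from the paper; the function names are my own.

```python
import numpy as np

def moreau_env_abs(x, mu):
    """Moreau envelope of |.| with parameter mu:
    e_mu(x) = min_y |y| + (x - y)^2 / (2*mu),
    which has the closed-form Huber expression below."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= mu,
                    x**2 / (2 * mu),        # quadratic near the kink
                    np.abs(x) - mu / 2)     # shifted |x| away from it

def moreau_env_numeric(x, mu, grid):
    """Brute-force the infimal convolution on a grid, for comparison."""
    return min(abs(y) + (x - y)**2 / (2 * mu) for y in grid)
```

As mu decreases, the smooth envelope epi-converges to |x| itself, which is the mechanism the smoothing framework exploits.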

Primal-dual subgradient method for Huge-Scale Linear Conic Problems

In this paper we develop a {\em primal-dual} subgradient method for solving huge-scale Linear Conic Optimization Problems. Our main assumption is that the primal cone is formed as a direct product of many small-dimensional convex cones, and that the matrix $A$ of the corresponding linear operator is {\em uniformly sparse}. In this case, our method can … Read more


Hankel Matrix Rank Minimization with Applications to System Identification and Realization

We introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz and moment structures, and catalog applications from diverse fields under this framework. We discuss various first-order methods for solving the resulting optimization problem, including alternating direction methods of multipliers, proximal point algorithms and gradient projection methods. We … Read more
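To make the linear structure concrete, the following sketch (my own illustration, not code from the paper) builds a Hankel matrix from a sequence and evaluates its nuclear norm; a sequence generated by a first-order linear recursion produces a rank-one Hankel matrix, which is what rank/nuclear-norm minimization exploits in system identification.

```python
import numpy as np

def hankel(seq, rows):
    """Hankel matrix H with constant anti-diagonals: H[i, j] = seq[i + j]."""
    seq = np.asarray(seq, dtype=float)
    cols = len(seq) - rows + 1
    return np.array([[seq[i + j] for j in range(cols)] for i in range(rows)])

def nuclear_norm(M):
    """Sum of singular values, the convex surrogate for rank."""
    return np.linalg.svd(M, compute_uv=False).sum()
```

For example, the sequence 1, 2, 4, ..., 2^k satisfies a one-term recursion, so any Hankel matrix built from it has rank one.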

Mean squared error minimization for inverse moment problems

We consider the problem of approximating the unknown density $u\in L^2(\Omega,\lambda)$ of a measure $\mu$ on $\Omega\subset\mathbb{R}^n$, absolutely continuous with respect to some given reference measure $\lambda$, knowing only finitely many moments of $\mu$. Given $d\in\mathbb{N}$ and moments of order $d$, we provide a polynomial $p_d$ which minimizes the mean square error … Read more
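In the simplest one-dimensional case, with $\Omega=[0,1]$ and $\lambda$ the Lebesgue measure, the mean-square-error minimizer over polynomials of degree $d$ reduces to a linear system: expanding $\int(p-u)^2\,d\lambda = c^\top H c - 2c^\top m + \text{const}$, where $H$ is the Hilbert matrix of monomial inner products and $m$ the given moment vector, gives $c = H^{-1}m$. The sketch below illustrates this reduction under those assumptions; it is not the paper's implementation.

```python
import numpy as np

def density_from_moments(m):
    """Given moments m[k] = ∫ x^k dμ on [0, 1] (Lebesgue reference measure),
    return coefficients c of p(x) = Σ c[j] x^j minimizing ∫ (p - u)^2 dx.
    Normal equations: H c = m, with Hilbert matrix H[j, k] = 1/(j + k + 1)."""
    d = len(m)
    H = np.array([[1.0 / (j + k + 1) for k in range(d)] for j in range(d)])
    return np.linalg.solve(H, np.asarray(m, dtype=float))
```

If the true density is itself a polynomial of degree at most $d$, e.g. $u(x)=2x$ with moments $m_k = 2/(k+2)$, the coefficients are recovered exactly (up to conditioning of $H$, which degrades rapidly with $d$).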

Fast global convergence of gradient methods for high-dimensional statistical recovery

Many statistical $M$-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the data dimension $p$ to grow with (and possibly exceed) … Read more
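The canonical instance of such a composite gradient method is ISTA for the lasso: a gradient step on the smooth quadratic loss followed by the proximal (soft-thresholding) step for the $\ell_1$ regularizer. A minimal sketch, as my own illustration rather than the paper's algorithm:

```python
import numpy as np

def ista(X, y, lam, step, iters=500):
    """Composite gradient (proximal gradient) method for the lasso:
        min_w 0.5 * ||X w - y||^2 + lam * ||w||_1
    with step <= 1 / ||X^T X||."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        g = X.T @ (X @ w - y)       # gradient of the smooth loss
        z = w - step * g            # forward (gradient) step
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return w
```

For orthonormal designs the iteration reaches the closed-form soft-thresholded solution immediately, which makes a convenient sanity check.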

Erratum to: “On the set-semidefinite representation of nonconvex quadratic programs over arbitrary feasible sets” [Optim. Letters, 2012]

In this paper, an erratum is provided to the article “\emph{On the set-semidefinite representation of nonconvex quadratic programs over arbitrary feasible sets}”, published in Optim.\ Letters, 2012. Thanks to a careful observation by the first author, it has been found that the proof of Lemma 9 has a nontrivial gap, and consequently the main result (Theorem … Read more

On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers

The formulation min f(x)+g(y) subject to Ax+By=b, where f and g are extended-value convex functions, arises in many application areas such as signal processing, imaging and image processing, statistics, and machine learning either naturally or after variable splitting. In many common problems, one of the two objective functions is strictly convex and has Lipschitz continuous … Read more

A Fair, Sequential Multiple Objective Optimization Algorithm

In multi-objective optimization the objective is to reach a point which is Pareto efficient. However, we usually encounter many such points, and choosing a point among them poses another problem. In many applications we are required to choose a point having a good spread over all objective functions, which is a direct consequence of the … Read more

A Newton’s method for the continuous quadratic knapsack problem

We introduce a new efficient method to solve the continuous quadratic knapsack problem. This is a highly structured quadratic program that appears in different contexts. The method converges after $O(n)$ iterations with overall arithmetic complexity $O(n^2)$. Numerical experiments show that in practice the method converges in a small number of iterations with overall linear complexity, … Read more
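The continuous quadratic knapsack problem reduces, via the KKT conditions, to a one-dimensional monotone piecewise-linear equation in the Lagrange multiplier of the equality constraint. The sketch below solves that equation by plain bisection for simplicity; the paper's contribution is a faster Newton-type method on the same equation, which this illustration does not reproduce. The bracketing interval is an assumption of the toy code.

```python
import numpy as np

def quad_knapsack(a, b, r, lo, hi, tol=1e-10):
    """Continuous quadratic knapsack:
        min 0.5*||x - a||^2  s.t.  b @ x = r,  lo <= x <= hi,  b > 0.
    KKT gives x_i(lmb) = clip(a_i + lmb*b_i, lo_i, hi_i); the multiplier
    solves the monotone equation phi(lmb) = b @ x(lmb) - r = 0."""
    x = lambda lmb: np.clip(a + lmb * b, lo, hi)
    phi = lambda lmb: b @ x(lmb) - r
    left, right = -1e6, 1e6   # assumed to bracket the root
    while right - left > tol:
        mid = 0.5 * (left + right)
        if phi(mid) < 0:
            left = mid
        else:
            right = mid
    return x(0.5 * (left + right))
```

Because phi is piecewise linear with at most 2n breakpoints, a Newton or breakpoint-search scheme on this equation achieves the O(n) iteration bound cited in the abstract, whereas bisection trades that speed for a two-line implementation.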