A Block Coordinate Descent Method for Regularized Multi-Convex Optimization with Applications to Nonnegative Tensor Factorization and Completion

This paper considers regularized block multi-convex optimization, where the feasible set and objective function are generally non-convex but convex in each block of variables. We review several interesting examples and propose a generalized block coordinate descent method. (Using proximal updates, we further allow non-convexity over some blocks.) Under certain conditions, we show that …
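
To make the block structure concrete, here is a minimal sketch of the two-block case on one motivating example, nonnegative matrix factorization; the step rule and parameter choices are illustrative assumptions, not the paper's method.

```python
import numpy as np

def bcd_nmf(M, rank, iters=200, seed=0):
    """Two-block coordinate descent for min (1/2)||M - X Y||_F^2 with X, Y >= 0.

    The problem is non-convex jointly in (X, Y) but convex in each block, so
    each block is updated by a projected-gradient (prox-type) step while the
    other block is held fixed.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.random((m, rank))
    Y = rng.random((rank, n))
    for _ in range(iters):
        # X-block: gradient step, then project onto the nonnegative orthant.
        G = (X @ Y - M) @ Y.T
        L = np.linalg.norm(Y @ Y.T, 2) + 1e-12   # Lipschitz constant of the X-gradient
        X = np.maximum(X - G / L, 0.0)
        # Y-block: the same update with the roles of the blocks swapped.
        G = X.T @ (X @ Y - M)
        L = np.linalg.norm(X.T @ X, 2) + 1e-12
        Y = np.maximum(Y - G / L, 0.0)
    return X, Y
```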

Characterizations of Full Stability in Constrained Optimization

This paper is mainly devoted to the study of the so-called full Lipschitzian stability of local solutions to finite-dimensional parameterized problems of constrained optimization, which is well recognized as an important property from the viewpoints of both optimization theory and its applications. Based on second-order generalized differential tools of variational analysis, we obtain …
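
For orientation, full Lipschitzian stability (in the sense introduced by Levy, Poliquin, and Rockafellar) can be stated roughly as follows, with notation that is ours rather than the paper's: a local minimizer $\bar{x}$ for the parameter pair $(\bar{w},\bar{v})$ is fully stable when, for some $\gamma > 0$, the localized tilted solution map

```latex
\[
  M_\gamma(w,v) \;=\; \operatorname*{arg\,min}_{\|x-\bar{x}\|\le\gamma}
  \bigl\{\, f(x,w) - \langle v,\,x \rangle \,\bigr\}
\]
```

is single-valued and Lipschitz continuous in $(w,v)$ near $(\bar{w},\bar{v})$, with $M_\gamma(\bar{w},\bar{v}) = \bar{x}$.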

Some criteria for error bounds in set optimization

We obtain sufficient and/or necessary conditions for global/local error bounds for the distances to certain sets appearing in set optimization, studied with both the set approach and the vector approach (sublevel sets, constraint sets, sets of all Pareto efficient/Henig proper efficient/super efficient solutions, sets of solutions corresponding to one Pareto efficient/Henig proper …
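
As a point of reference, the scalar notion that such set-valued results generalize is the classical error bound for an inequality system; the formulation below is illustrative.

```latex
\[
  S = \{\, x \in X : f(x) \le 0 \,\}, \qquad
  \exists\, c > 0 :\quad d(x, S) \;\le\; c\,[f(x)]_+ \quad \text{for all } x \in X,
\]
```

where $[t]_+ = \max\{t, 0\}$; a local error bound requires the same inequality only for $x$ in a neighborhood of a reference point.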

An adaptive accelerated first-order method for convex optimization

This paper presents a new accelerated variant of Nesterov’s method for solving composite convex optimization problems in which certain acceleration parameters are adaptively (and aggressively) chosen so as to substantially improve its practical performance compared to existing accelerated variants while at the same time preserving the optimal iteration-complexity shared by these methods. Computational results are …
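
Since the abstract is truncated, the sketch below shows only the generic template such variants build on: a FISTA-style accelerated proximal-gradient method with a backtracking estimate of the Lipschitz constant. The paper's adaptive parameter choices are more aggressive than this textbook rule.

```python
import numpy as np

def accel_prox_grad(f, grad_f, prox_g, x0, L0=1.0, iters=200, eta=2.0):
    """Accelerated proximal gradient for min f(x) + g(x), f smooth convex.

    prox_g(z, s) must return argmin_u  s*g(u) + (1/2)||u - z||^2.
    """
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(iters):
        g = grad_f(y)
        while True:                                    # backtracking on 1/L
            x_new = prox_g(y - g / L, 1.0 / L)
            d = x_new - y
            # Accept once the quadratic model upper-bounds f at the trial point.
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d) + 1e-12:
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# Example: the LASSO, with f(x) = 0.5*||Ax - b||^2 and g(x) = lam*||x||_1,
# where prox_g(z, s) = np.sign(z) * np.maximum(np.abs(z) - lam * s, 0.0).
```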

Complexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization

We propose a first-order interior point algorithm for a class of non-Lipschitz and nonconvex minimization problems with box constraints, which arise from applications in variable selection and regularized optimization. The objective functions of these problems are typically continuously differentiable only at interior points of the feasible set. Our algorithm is easy to implement, and the …
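
A schematic of the interior-point idea on a box-constrained problem is sketched below; the damping fraction and fixed step size are placeholder assumptions, whereas the paper's step-size rules are what yield its worst-case complexity bounds.

```python
import numpy as np

def interior_first_order(grad_q, x0, lo, hi, iters=500, t=1e-2, frac=0.99):
    """First-order interior-point sketch for min q(x) subject to lo <= x <= hi.

    q is assumed continuously differentiable on the open box, e.g. a smooth
    loss plus the non-Lipschitz penalty sum(x_i**p) with 0 < p < 1, which is
    differentiable only where x_i > 0.  Each step is damped so the iterate
    stays strictly interior, where the gradient exists.
    """
    x = x0.copy()                                  # x0 must be strictly interior
    for _ in range(iters):
        d = -grad_q(x)
        # Largest step along d before hitting a face of the box.
        with np.errstate(divide="ignore"):
            caps = np.where(d > 0, (hi - x) / d,
                            np.where(d < 0, (lo - x) / d, np.inf))
        x = x + min(t, frac * caps.min()) * d      # never reach the boundary
    return x
```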

Optimality, identifiability, and sensitivity

Around a solution of an optimization problem, an “identifiable” subset of the feasible region is one containing all nearby solutions after small perturbations to the problem. A quest for only the most essential ingredients of sensitivity analysis leads us to consider identifiable sets that are “minimal”. This new notion lays a broad and intuitive variational-analytic …
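
A standard one-dimensional illustration of the notion (ours, not drawn from the truncated abstract): for the absolute value, the singleton $\{0\}$ is identifiable at the minimizer $0$, and it is clearly minimal, since

```latex
\[
  \operatorname*{arg\,min}_{x \in \mathbb{R}} \;\bigl\{\, |x| - v\,x \,\bigr\} \;=\; \{0\}
  \qquad \text{for every tilt perturbation } |v| < 1,
\]
```

so every solution of a slightly perturbed problem already lies in $\{0\}$.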

Optimality conditions for nonlinear programming problems on Riemannian manifolds

In recent years, many traditional optimization methods have been successfully generalized to minimize objective functions on manifolds. In this paper, we first extend the traditional constrained optimization problem to a nonlinear programming problem posed on a general Riemannian manifold $\mathcal{M}$, and discuss the first-order and second-order optimality conditions. By exploiting the differential geometry …
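
As a minimal concrete instance of minimizing over a manifold, the sketch below runs Riemannian gradient descent on the unit sphere; the test problem, retraction, and step size are illustrative assumptions rather than the paper's general setting.

```python
import numpy as np

def riemannian_gd_sphere(A, x0, iters=500, step=0.1):
    """Riemannian gradient descent for min x^T A x over the unit sphere.

    Shows the two manifold ingredients: the Riemannian gradient (Euclidean
    gradient projected onto the tangent space at x) and a retraction (here,
    renormalization) pulling each step back to the manifold.  The minimizer
    is an eigenvector for the smallest eigenvalue of A; step should be small
    relative to 1/||A||.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        egrad = 2.0 * A @ x                   # Euclidean gradient of x^T A x
        rgrad = egrad - (x @ egrad) * x       # project onto the tangent space
        x = x - step * rgrad
        x = x / np.linalg.norm(x)             # retract back onto the sphere
    return x
```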

A quasi-Newton proximal splitting method

A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise-linear nature of the dual problem. The second part of the paper applies the previous result to acceleration …
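
The simplest setting in which a scaled-norm prox stays closed-form is a diagonal metric; the sketch below assumes an $l_1$ regularizer and a diagonal quasi-Newton model, a much narrower case than the class of functions treated in the paper.

```python
import numpy as np

def prox_l1_scaled(y, lam, d):
    """Prox of lam*||.||_1 in the scaled norm ||z||_D^2 = z^T D z, D = diag(d) > 0.

    argmin_z lam*||z||_1 + 0.5*(z - y)^T D (z - y) separates coordinate-wise,
    giving a soft-threshold with per-coordinate thresholds lam / d_i.
    """
    return np.sign(y) * np.maximum(np.abs(y) - lam / d, 0.0)

def prox_quasi_newton_step(x, grad_f, lam, d):
    """One proximal quasi-Newton step for min f(x) + lam*||x||_1:
    x+ = prox^D(x - D^{-1} grad f(x)), with D = diag(d) as the Hessian model."""
    return prox_l1_scaled(x - grad_f(x) / d, lam, d)
```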

Sparse Approximation via Penalty Decomposition Methods

In this paper we consider sparse approximation problems, that is, general $l_0$ minimization problems in which the $l_0$-“norm” of a vector appears in the constraints or in the objective function. In particular, we first study the first-order optimality conditions for these problems. We then propose penalty decomposition (PD) methods for solving them, in which a sequence of …
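
A bare-bones sketch of the penalty decomposition idea on the constrained form $\min f(x)$ subject to $\|x\|_0 \le k$; the update rules and parameters are illustrative, and the paper's PD methods add safeguards and a convergence analysis.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    z[idx] = x[idx]
    return z

def penalty_decomposition(grad_f, x0, k, rho=1.0, sigma=2.0,
                          outer=30, inner=50, step=1e-2):
    """Split x = y, penalize the coupling, and alternate block minimization:

        L_rho(x, y) = f(x) + (rho/2)*||x - y||^2,  subject to ||y||_0 <= k.

    The x-block is smooth and takes gradient steps; the y-block is solved
    exactly by hard thresholding.  rho grows between outer iterations so the
    two copies are driven together.
    """
    x = x0.copy()
    y = hard_threshold(x, k)
    for _ in range(outer):
        for _ in range(inner):
            x = x - step * (grad_f(x) + rho * (x - y))   # x-block
            y = hard_threshold(x, k)                     # y-block (exact)
        rho *= sigma
    return y
```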

Tilt stability, uniform quadratic growth, and strong metric regularity of the subdifferential

We prove that uniform second order growth, tilt stability, and strong metric regularity of the subdifferential — three notions that have appeared in entirely different settings — are all essentially equivalent for any lower-semicontinuous, extended-real-valued function.
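
For reference, tilt stability (in the sense of Poliquin and Rockafellar) asks that, for some $\gamma > 0$, the tilted local solution map

```latex
\[
  M_\gamma(v) \;=\; \operatorname*{arg\,min}_{\|x-\bar{x}\|\le\gamma}
  \bigl\{\, f(x) - \langle v,\,x \rangle \,\bigr\}
\]
```

be single-valued and Lipschitz continuous on a neighborhood of $v = 0$ with $M_\gamma(0) = \bar{x}$; the notation here is ours, not the paper's.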