A Filter Active-Set Algorithm for Ball/Sphere Constrained Optimization Problem

In this paper, we propose a filter active-set algorithm for the minimization problem over a product of multiple ball/sphere constraints. Making effective use of the special structure of the ball/sphere constraints, we present a new limited memory BFGS (L-BFGS) scheme. The new L-BFGS implementation takes advantage of the sparse structure of the Jacobian of … Read more
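As a hedged sketch of the problem class (our notation; the paper's exact formulation may differ), the feasible set is a product of balls and spheres over blocks $x_1,\dots,x_p$ of the variable:

$$
\min_{x=(x_1,\dots,x_p)} \; f(x) \qquad \text{s.t.} \quad \|x_i\|_2 \le r_i \ (i\in\mathcal{B}), \qquad \|x_j\|_2 = r_j \ (j\in\mathcal{S}).
$$

Because each constraint involves only its own block, the constraint Jacobian is block-diagonal, which is the kind of sparsity referred to above.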

On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions

In this paper we present a variant of the proximal forward-backward splitting method for solving nonsmooth optimization problems in Hilbert spaces, when the objective function is the sum of two nondifferentiable convex functions. The proposed iteration, which will be called the Proximal Subgradient Splitting Method, extends the classical projected subgradient iteration for important classes of … Read more
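A plausible reading of the iteration being named (the paper's step-size rules and convergence analysis are not reproduced here): writing the objective as $f+g$, where $g$ has an easily computable proximal operator, and taking a step size $\alpha_k>0$ and a subgradient $u_k \in \partial f(x_k)$,

$$
x_{k+1} = \operatorname{prox}_{\alpha_k g}\bigl(x_k - \alpha_k u_k\bigr),
$$

which collapses to the classical projected subgradient iteration when $g$ is the indicator function of a closed convex set, consistent with the extension claimed above.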

A trust-region method for box-constrained nonlinear semidefinite programs

We propose a trust-region method for nonlinear semidefinite programs with box constraints. The penalty barrier method can handle this problem, but the size of the variable matrices it can treat within practical time is restricted to less than 500. We develop a trust-region method based on the approach of Coleman and Li (1996) that utilizes the distance to … Read more
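For orientation (hedged; the paper adapts the idea to matrix variables), the Coleman-Li approach in the vector box-constrained case scales the trust region by the distance of the current iterate to its bounds, e.g.

$$
\min_{d} \ \nabla f(x_k)^\top d + \tfrac12\, d^\top B_k d \qquad \text{s.t.} \quad \|D_k^{-1} d\| \le \Delta_k,
$$

where $D_k$ is a diagonal scaling matrix whose entries are built from the distances of the components of $x_k$ to the nearest bound, so that steps are shortened in directions that approach the boundary.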

Simple examples for the failure of Newton’s method with line search for strictly convex minimization

In this paper two simple examples of a twice continuously differentiable strictly convex function $f$ are presented for which Newton’s method with line search converges to a point where the gradient of $f$ is not zero. The first example uses a line search based on the Wolfe conditions. For the second example, some strictly convex … Read more
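The specific counterexample functions are not reproduced in this excerpt; the scheme whose failure is analyzed is ordinary Newton's method with a line search, sketched below for a generic smooth strictly convex $f$. The objective, starting point, and Armijo backtracking used here are illustrative only (the paper's first example uses a Wolfe line search).

```python
# Minimal sketch of Newton's method with a backtracking (Armijo) line search.
# The objective below is illustrative; the paper's counterexamples and its
# Wolfe-condition line search are not reproduced here.
import numpy as np

def newton_line_search(f, grad, hess, x0, tol=1e-8, max_iter=100,
                       c1=1e-4, shrink=0.5):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)   # Newton direction
        t = 1.0
        # Backtrack until the Armijo sufficient-decrease condition holds.
        while f(x + t * d) > f(x) + c1 * t * (g @ d):
            t *= shrink
        x = x + t * d
    return x

# Illustrative strictly convex example (not one of the paper's examples).
f = lambda x: np.sum(np.cosh(x))
grad = lambda x: np.sinh(x)
hess = lambda x: np.diag(np.cosh(x))
print(newton_line_search(f, grad, hess, np.array([2.0, -1.0])))
```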

On the Global Optimality for Linear Constrained Rank Minimization Problem

Rank minimization with linear equality constraints has two closely related models: the low-rank approximation model, which seeks the best rank-$k$ approximation of a matrix satisfying the linear constraints, and its corresponding factorization model. The latter is an unconstrained nonlinear least-squares problem and hence enjoys a few fast first-order methods … Read more
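Under one common reading (our notation, hedged), the two models are the constrained low-rank approximation model

$$
\min_{X}\ \tfrac12\,\|\mathcal{A}(X)-b\|^2 \qquad \text{s.t.} \quad \operatorname{rank}(X)\le k,
$$

and the factorization model obtained by writing $X = UV^\top$ with $U\in\mathbb{R}^{m\times k}$ and $V\in\mathbb{R}^{n\times k}$,

$$
\min_{U,V}\ \tfrac12\,\|\mathcal{A}(UV^\top)-b\|^2,
$$

which is the unconstrained nonlinear least-squares problem mentioned above.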

Handling Nonpositive Curvature in a Limited Memory Steepest Descent Method

We propose a limited memory steepest descent (LMSD) method for solving unconstrained optimization problems. As a steepest descent method, it requires in each iteration only the evaluation of the gradient of the objective function and the calculation of a scalar step size. When employed to solve certain convex problems, our method reduces to … Read more
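For context (hedged; the paper's handling of nonpositive curvature is precisely what is not reproduced here), each step of such a method has the form

$$
x_{k+1} = x_k - \alpha_k\, \nabla f(x_k),
$$

where the scalar step sizes $\alpha_k$ are derived from curvature information estimated from a short history of recent gradients, e.g. Barzilai-Borwein-type steps or, in Fletcher's LMSD framework, reciprocals of Ritz-value estimates of Hessian eigenvalues.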

Second order analysis of state-constrained control-affine problems

In this article we establish new second-order necessary and sufficient optimality conditions for a class of control-affine problems with a scalar control and a scalar state constraint. These optimality conditions extend the Goh transform, the classical tool for obtaining an extension of the Legendre condition, to the state-constrained framework. We propose … Read more
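A hedged sketch of the problem class (the precise cost, horizon, and endpoint conditions in the paper may differ): the dynamics are affine in the scalar control $u$ and the scalar state constraint must hold along the trajectory,

$$
\dot{x}(t) = f_0(x(t)) + u(t)\, f_1(x(t)), \qquad c(x(t)) \le 0 \ \ \text{for } t\in[0,T],
$$

and this affine dependence on $u$ is what makes the Goh transform the natural tool.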

HIPAD – A Hybrid Interior-Point Alternating Direction algorithm for knowledge-based SVM and feature selection

We consider classification tasks in the regime of scarce labeled training data in high dimensional feature space, where specific expert knowledge is also available. We propose a new hybrid optimization algorithm that solves the elastic-net support vector machine (SVM) through an alternating direction method of multipliers in the first phase, followed by an interior-point method … Read more
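For reference, the elastic-net SVM referred to here is commonly written as a hinge-loss problem with a combined $\ell_1$/$\ell_2$ penalty (the exact weighting and the incorporation of expert knowledge vary by formulation):

$$
\min_{w,\,b}\ \sum_{i=1}^{m} \max\bigl(0,\ 1 - y_i(w^\top x_i + b)\bigr) + \lambda_1 \|w\|_1 + \tfrac{\lambda_2}{2}\,\|w\|_2^2,
$$

whose $\ell_1$ term is what drives the feature selection mentioned in the title.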

Globally Convergent Primal-Dual Active-Set Methods with Inexact Subproblem Solves

We propose primal-dual active-set (PDAS) methods for solving large-scale instances of an important class of convex quadratic optimization problems (QPs). The iterates of the algorithms are partitions of the index set of variables, where corresponding to each partition there exist unique primal-dual variables that can be obtained by solving a (reduced) linear system. Algorithms of … Read more
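As a concrete but simplified illustration of the partition-and-solve structure described above, below is a minimal primal-dual active-set sketch for a nonnegativity-constrained strictly convex QP with exact subproblem solves; the problem data and the parameter `sigma` are illustrative, and the inexact solves that are the focus of the paper are not modeled.

```python
# Minimal primal-dual active-set (PDAS) sketch for
#     min 1/2 x'Hx + c'x   s.t.  x >= 0,
# with exact reduced-system solves. H, c, and sigma are illustrative.
import numpy as np

def pdas_nonneg_qp(H, c, sigma=1.0, max_iter=50, tol=1e-10):
    n = len(c)
    x = np.zeros(n)          # primal iterate
    z = np.zeros(n)          # dual iterate (multiplier for x >= 0)
    for _ in range(max_iter):
        # Predict the active set from the complementarity residual.
        active = (z - sigma * x) > 0
        inactive = ~active
        x_new = np.zeros(n)
        z_new = np.zeros(n)
        # Inactive block: stationarity H_II x_I = -c_I (since x_A = 0).
        if inactive.any():
            HII = H[np.ix_(inactive, inactive)]
            x_new[inactive] = np.linalg.solve(HII, -c[inactive])
        # Active block: recover multipliers z_A = (Hx + c)_A.
        z_new[active] = (H @ x_new + c)[active]
        if np.allclose(x_new, x, atol=tol) and np.allclose(z_new, z, atol=tol):
            return x_new, z_new
        x, z = x_new, z_new
    return x, z

# Small usage example on a random strictly convex QP.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
H = M @ M.T + np.eye(5)
c = rng.standard_normal(5)
x, z = pdas_nonneg_qp(H, c)
print("x =", np.round(x, 4))
```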

Nonlinear local error bounds via a change of metric

In this work, we improve the approach of Corvellec-Motreanu to nonlinear error bounds for lower semicontinuous functions on complete metric spaces, an approach that consists in reducing the nonlinear case to the linear one through a change of metric. This improvement is essentially technical, and allows local error bounds to be handled in an appropriate way. … Read more
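To fix ideas (our notation, hedged): a linear local error bound for a lower semicontinuous function $f$ around a point $\bar{x}$ of the set $S=[f\le 0]$ asserts that

$$
d(x, S) \le \tau\, [f(x)]_+ \qquad \text{for all } x \text{ near } \bar{x},
$$

while in one common nonlinear formulation $[f(x)]_+$ is replaced by $\varphi([f(x)]_+)$ for an increasing function $\varphi$ vanishing at $0$; the change-of-metric device mentioned above is what transfers results from the linear to the nonlinear setting.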