Sequential Threshold Control in Descent Splitting Methods for Decomposable Optimization Problems

We suggest a modification of descent splitting methods for decomposable composite optimization problems which maintains the basic convergence properties but enables one to reduce the computational expenses per iteration and to carry out the computations in a distributed manner. It consists in taking coordinate-wise steps together with a special threshold control. … Read more
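
As a rough sketch of the idea (ours, not the authors' exact scheme), the block below updates only those coordinate blocks whose partial gradients exceed a threshold, and tightens the threshold once no block qualifies; the function names and the separable quadratic test problem are assumptions for illustration.

```python
import numpy as np

def coordinate_descent_threshold(grad_blocks, x, step=0.1, tau=1.0,
                                 tau_factor=0.5, tol=1e-6, max_iter=1000):
    """Illustrative block-coordinate descent with sequential threshold control.

    grad_blocks(x) returns one partial gradient per block; a block is updated
    only while its partial gradient exceeds the current threshold tau, and tau
    is halved once no block qualifies. Per-iteration work stays cheap, and the
    block updates could run on separate workers (distributed computation).
    """
    for _ in range(max_iter):
        g = grad_blocks(x)
        active = [i for i, gi in enumerate(g) if np.linalg.norm(gi) > tau]
        if not active:
            if tau < tol:          # every partial gradient is small: stop
                break
            tau *= tau_factor      # threshold control: tighten and continue
            continue
        for i in active:           # coordinate-wise descent steps
            x[i] = x[i] - step * g[i]
    return x

# Separable quadratic f(x) = 0.5 * sum_i ||x_i||^2, split into three blocks
x0 = [np.ones(2), 2 * np.ones(2), 3 * np.ones(2)]
sol = coordinate_descent_threshold(lambda x: [xi.copy() for xi in x], x0)
```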

Semivectorial Bilevel Optimization on Riemannian Manifolds

In this paper we deal with the semivectorial bilevel problem in the Riemannian setting. The upper level is a scalar optimization problem to be solved by the leader, and the lower level is a multiobjective optimization problem to be solved by several followers acting in a cooperative way inside the greatest coalition and choosing among … Read more
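
For orientation, a generic (here optimistic) semivectorial bilevel prototype can be written as
$$\min_{x \in M_1} F(x, y) \quad \text{s.t.} \quad y \in \operatorname{WEff}\bigl(f_1(x,\cdot), \dots, f_m(x,\cdot);\, M_2\bigr),$$
where $M_1, M_2$ are Riemannian manifolds and $\operatorname{WEff}$ denotes the set of weakly efficient (Pareto) solutions of the followers' multiobjective problem; this notation is ours, not necessarily the paper's.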

On the optimal order of worst case complexity of direct search

The worst case complexity of direct-search methods has recently been analyzed when they use positive spanning sets and impose a sufficient decrease condition to accept new iterates. Assuming that the objective function is smooth, it is now known that such methods require at most $O(n^2\,\epsilon^{-2})$ function evaluations to compute a gradient of norm below … Read more
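
A minimal sketch of the class of methods analyzed, assuming the coordinate poll set $D_\oplus = \{\pm e_1, \dots, \pm e_n\}$ and the sufficient decrease $\rho(\alpha) = c\,\alpha^2$ (all parameter names here are ours):

```python
import numpy as np

def direct_search(f, x, alpha=1.0, c=1e-4, theta=0.5, gamma=2.0,
                  alpha_tol=1e-8, max_evals=10000):
    """Directional direct search with sufficient decrease (illustrative).

    Polls the positive spanning set D = {+-e_i}. A trial point is accepted
    only if it decreases f by at least c * alpha**2, the condition under
    which worst case evaluation bounds of this kind are derived.
    """
    n = len(x)
    D = np.vstack([np.eye(n), -np.eye(n)])
    evals = 0
    while alpha > alpha_tol and evals < max_evals:
        fx = f(x)
        evals += 1
        improved = False
        for d in D:
            trial = x + alpha * d
            evals += 1
            if f(trial) < fx - c * alpha ** 2:
                x, improved = trial, True
                alpha *= gamma     # successful poll: expand the step size
                break
        if not improved:
            alpha *= theta         # unsuccessful poll: contract the step size
    return x

x_opt = direct_search(lambda z: np.sum(z ** 2), np.array([1.0, -2.0]))
```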

A Filter Active-Set Algorithm for Ball/Sphere Constrained Optimization Problem

In this paper, we propose a filter active-set algorithm for the minimization problem over a product of multiple ball/sphere constraints. We present a new limited-memory BFGS (L-BFGS) scheme that makes effective use of the special structure of the ball/sphere constraints. The new L-BFGS implementation takes advantage of the sparse structure of the Jacobian of … Read more
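
The key structural fact such methods exploit is that computations decouple across the ball/sphere blocks. As a small illustration (a helper of our own, not the paper's algorithm), the Euclidean projection onto a Cartesian product of balls is computed block by block:

```python
import numpy as np

def project_product_of_balls(x, blocks, radii):
    """Project x onto a Cartesian product of Euclidean balls, block by block.

    blocks: list of index arrays partitioning x; radii: radius per block.
    Each block is rescaled independently, so the projection decouples.
    """
    p = x.copy()
    for idx, r in zip(blocks, radii):
        nrm = np.linalg.norm(x[idx])
        if nrm > r:
            p[idx] = (r / nrm) * x[idx]
    return p

x = np.array([3.0, 4.0, 0.3, 0.4])
blocks = [np.array([0, 1]), np.array([2, 3])]
print(project_product_of_balls(x, blocks, [1.0, 1.0]))  # [0.6 0.8 0.3 0.4]
```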

On proximal subgradient splitting method for minimizing the sum of two nonsmooth convex functions

In this paper we present a variant of the proximal forward-backward splitting method for solving nonsmooth optimization problems in Hilbert spaces, when the objective function is the sum of two nondifferentiable convex functions. The proposed iteration, which we call the Proximal Subgradient Splitting Method, extends the classical projected subgradient iteration for important classes of … Read more
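
In forward-backward notation the iteration takes the form $x^{k+1} = \operatorname{prox}_{\alpha_k g}(x^k - \alpha_k v^k)$ with $v^k \in \partial f(x^k)$. A minimal sketch for the illustrative choice $f(x) = \|Ax - b\|_1$ and $g(x) = \lambda \|x\|_1$ (our example, not the paper's):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_subgradient_splitting(A, b, lam, x0, n_iter=500):
    """Sketch of x+ = prox_{a*g}(x - a*v), v a subgradient of f at x,
    for f(x) = ||Ax - b||_1 and g(x) = lam * ||x||_1, with a diminishing
    step size, as subgradient-type steps require."""
    x = x0.copy()
    for k in range(1, n_iter + 1):
        v = A.T @ np.sign(A @ x - b)   # a subgradient of ||Ax - b||_1 at x
        a = 1.0 / np.sqrt(k)           # diminishing step size
        x = soft_threshold(x - a * v, a * lam)
    return x
```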

A trust-region method for box-constrained nonlinear semidefinite programs

We propose a trust-region method for nonlinear semidefinite programs with box constraints. The penalty barrier method can handle this problem, but the size of the variable matrices it can process in practical time is restricted to less than 500. We develop a trust-region method based on the approach of Coleman and Li (1996) that utilizes the distance to … Read more
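
For reference, the Coleman-Li approach builds a diagonal scaling from the distance to the bound that the negative gradient direction points toward; a sketch for a box constraint on a flattened variable (the vector layout is our simplification of the matrix case):

```python
import numpy as np

def coleman_li_scaling(x, g, lower, upper):
    """Coleman-Li (1996) style scaling vector for l <= x <= u (a sketch).

    v[i] is the signed distance to the bound the steepest descent direction
    points toward (a constant if that bound is infinite); diag(sqrt(|v|))
    rescales the trust-region model so steps flatten near active bounds.
    """
    v = np.empty_like(x)
    for i in range(len(x)):
        if g[i] < 0:
            v[i] = x[i] - upper[i] if np.isfinite(upper[i]) else -1.0
        else:
            v[i] = x[i] - lower[i] if np.isfinite(lower[i]) else 1.0
    return np.sqrt(np.abs(v))          # diagonal of the scaling matrix
```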

Simple examples for the failure of Newton’s method with line search for strictly convex minimization

In this paper we present two simple examples of a twice continuously differentiable strictly convex function $f$ for which Newton’s method with line search converges to a point where the gradient of $f$ is nonzero. The first example uses a line search based on the Wolfe conditions. For the second example, some strictly convex … Read more
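
For concreteness, the scheme in question is the textbook iteration below (a generic sketch of ours; the paper's contribution is exhibiting strictly convex functions $f$ on which exactly this scheme converges to a non-stationary point):

```python
import numpy as np
from scipy.optimize import line_search

def newton_wolfe(f, grad, hess, x, tol=1e-8, max_iter=100):
    """Newton's method with a Wolfe-condition line search."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)   # Newton direction
        t = line_search(f, grad, x, d)[0]  # step length satisfying Wolfe
        if t is None:                      # line search failed: unit step
            t = 1.0
        x = x + t * d
    return x

x_star = newton_wolfe(lambda z: z @ z, lambda z: 2 * z,
                      lambda z: 2 * np.eye(len(z)), np.array([3.0, -1.0]))
```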

On the Global Optimality for Linear Constrained Rank Minimization Problem

The rank minimization problem with linear equality constraints has two closely related models: the low-rank approximation model, which seeks the best rank-$k$ approximation of a matrix satisfying the linear constraints, and its corresponding factorization model. The latter is an unconstrained nonlinear least-squares problem and hence admits several fast first-order methods … Read more
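
In our notation, with a linear map $\mathcal{A}$ and data $b$, the two models read
$$\min_{\operatorname{rank}(X) \le k} \tfrac12 \|\mathcal{A}(X) - b\|_2^2 \qquad \text{and} \qquad \min_{U \in \mathbb{R}^{m \times k},\, V \in \mathbb{R}^{n \times k}} \tfrac12 \|\mathcal{A}(UV^{\top}) - b\|_2^2,$$
where the substitution $X = UV^{\top}$ removes the rank constraint at the price of nonconvexity.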

Second order analysis of state-constrained control-affine problems

In this article we establish new second order necessary and sufficient optimality conditions for a class of control-affine problems with a scalar control and a scalar state constraint. These optimality conditions extend the Goh transform, the classical tool for obtaining an extension of the Legendre condition, to the state-constrained framework. We propose … Read more
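
In its classical unconstrained form (sketched here in our notation), the Goh transform acts on the linearized control-affine dynamics $\dot z = A(t) z + B(t) v$ by introducing the primitive $y(t) = \int_0^t v(s)\, ds$ and the new state $\xi = z - yB$, which gives
$$\dot \xi = A \xi + y\,(AB - \dot B),$$
so that second-order conditions can be rewritten in the variables $(y, \xi)$ and Legendre-type conditions extracted; the article carries this device over to the state-constrained setting.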

Handling Nonpositive Curvature in a Limited Memory Steepest Descent Method

We propose a limited memory steepest descent (LMSD) method for solving unconstrained optimization problems. As a steepest descent method, the step computation in each iteration requires only the evaluation of a gradient of the objective function and the calculation of a scalar step size. When employed to solve certain convex problems, our method reduces to … Read more
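
When the memory consists of a single gradient pair, methods of this type reduce to the Barzilai-Borwein step size. The sketch below shows that special case (our illustration; the curvature guard on $s^{\top} y$ hints at the nonpositive-curvature issue the paper treats in the limited memory setting):

```python
import numpy as np

def bb_gradient_method(grad, x, alpha0=1e-3, tol=1e-8, max_iter=1000):
    """Gradient descent with the Barzilai-Borwein step (memory m = 1)."""
    g_old, x_old = grad(x), x.copy()
    x = x - alpha0 * g_old                 # initial short gradient step
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s, y = x - x_old, g - g_old
        sy = s @ y
        # Fall back to alpha0 when curvature along s is nonpositive
        alpha = (s @ s) / sy if sy > 0 else alpha0
        x_old, g_old = x.copy(), g
        x = x - alpha * g
    return x

x_min = bb_gradient_method(lambda z: 2 * z, np.array([5.0, -3.0]))
```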