A Log-Barrier Newton-CG Method for Bound Constrained Optimization with Complexity Guarantees

We describe an algorithm, based on a logarithmic barrier function, Newton’s method, and linear conjugate gradients, that obtains an approximate minimizer of a smooth function over the nonnegative orthant. We develop a bound on the complexity of the approach, stated in terms of the required accuracy and the cost of a single gradient evaluation of … Read more
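The basic interplay of the three ingredients named in the abstract (log-barrier, Newton steps, linear CG) can be sketched as below. This is a minimal illustrative sketch under simplifying assumptions, not the paper's algorithm: the function names, the fixed barrier shrink factor, and the naive feasibility backtracking are all assumptions, and none of the paper's complexity-certified safeguards are reproduced.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=200):
    """Plain linear conjugate gradients for A x = b with A symmetric
    positive definite (the inner solver of a Newton-CG method)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def barrier_newton_cg(f_grad, f_hess, x0, mu0=1.0, shrink=0.5,
                      outer=20, tol=1e-8):
    """Approximately minimize f over the nonnegative orthant by
    minimizing the barrier function
        phi_mu(x) = f(x) - mu * sum(log(x_i))
    with damped Newton steps (direction from CG) while driving mu -> 0.
    Hypothetical sketch only."""
    x, mu = x0.astype(float).copy(), mu0
    for _ in range(outer):
        for _ in range(50):                       # inner Newton iterations
            g = f_grad(x) - mu / x                # gradient of phi_mu
            if np.linalg.norm(g) < tol:
                break
            H = f_hess(x) + np.diag(mu / x**2)    # Hessian of phi_mu
            p = cg(H, -g)                         # Newton direction via CG
            t = 1.0
            while np.any(x + t * p <= 0):         # stay strictly feasible
                t *= 0.5
            x = x + t * p
        mu *= shrink                              # tighten the barrier
    return x
```

For instance, with f(x) = 0.5‖x − c‖² and c = (1, −1), the iterates approach the constrained minimizer (1, 0), with the active component of order mu.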

Line search and convergence in bound-constrained optimization

The first part of this paper discusses convergence properties of a new line search method for the optimization of continuously differentiable functions with Lipschitz continuous gradient. The line search uses (apart from the gradient at the current best point) function values only. After deriving properties of the new, in general curved, line search, global convergence … Read more

Convexification of polynomial optimization problems by means of monomial patterns

Convexification is a core technique in global polynomial optimization. Currently, two different approaches compete in practice and in the literature. First, there are general approaches rooted in nonlinear programming; they are comparatively cheap from a computational point of view, but typically do not provide good (tight) relaxations with respect to bounds for the original problem. Second, approaches … Read more

A globally and linearly convergent PGM for zero-norm regularized quadratic optimization with sphere constraint

This paper is concerned with zero-norm regularized quadratic optimization with a sphere constraint, which has an important application in sparse eigenvalue problems. For this class of nonconvex and nonsmooth optimization problems, we establish the Kurdyka–Łojasiewicz (KL) property of exponent 1/2 for its extended-valued objective function and develop a globally and linearly convergent proximal gradient method … Read more

Steplength selection in gradient projection methods for box-constrained quadratic programs

The role of steplength selection strategies in gradient methods has been widely investigated in the last decades. Starting from the work of Barzilai and Borwein (1988), many efficient steplength rules have been designed, which have contributed to making gradient approaches an effective tool for the large-scale optimization problems arising in important real-world applications. Most … Read more
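To illustrate the kind of steplength rule the abstract refers to, here is a minimal projected-gradient sketch for a box-constrained convex quadratic using the classical Barzilai–Borwein "BB1" rule. The function name is hypothetical and the rule is applied without the safeguards or line searches that practical variants (and the paper's strategies) employ.

```python
import numpy as np

def bb_gradient_projection(A, b, lo, hi, x0, iters=500):
    """Projected gradient for  min 0.5 x'Ax - b'x  s.t.  lo <= x <= hi,
    with the Barzilai-Borwein (1988) BB1 steplength
        alpha_k = (s's) / (s'y),   s = x_k - x_{k-1},  y = g_k - g_{k-1}.
    Unsafeguarded illustrative sketch."""
    x = np.clip(np.asarray(x0, float), lo, hi)
    g = A @ x - b
    alpha = 1.0 / np.linalg.norm(A, 2)           # conservative first step
    for _ in range(iters):
        x_new = np.clip(x - alpha * g, lo, hi)   # projection onto the box
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        if s @ y > 0:                            # BB1 steplength update
            alpha = (s @ s) / (s @ y)
        x, g = x_new, g_new
    return x
```

On a small strictly convex quadratic whose unconstrained minimizer lies outside the box, the iterates settle on the boundary point satisfying the KKT conditions.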

Accelerating block coordinate descent methods with identification strategies

This work is about active set identification strategies aimed at accelerating block-coordinate descent methods (BCDM) applied to large-scale problems. We start by devising an identification function tailored for bound-constrained composite minimization together with an associated version of the BCDM, called Active BCDM, that is also globally convergent. The identification function gives rise to an efficient … Read more

Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization

Necessary conditions for high-order optimality in smooth nonlinear constrained optimization are explored and their inherent intricacy discussed. A two-phase minimization algorithm is proposed which can achieve approximate first-, second- and third-order criticality and its evaluation complexity is analyzed as a function of the choice (among existing methods) of an inner algorithm for solving subproblems in … Read more

Direct search based on probabilistic feasible descent for bound and linearly constrained problems

Direct search is a methodology for derivative-free optimization whose iterations are characterized by evaluating the objective function using a set of polling directions. In deterministic direct search applied to smooth objectives, these directions must somehow conform to the geometry of the feasible region and typically consist of positive generators of approximate tangent cones (which then … Read more
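The polling mechanics described above can be conveyed with a deterministic coordinate-direction sketch on a box, where the directions ±e_i are positive generators of the tangent cone at interior points and infeasible trial points are simply discarded. This is a simplified illustration under assumed names and parameters, not the paper's probabilistic feasible-descent method.

```python
import numpy as np

def coordinate_direct_search(f, x0, lo, hi, step=1.0, tol=1e-6,
                             max_iter=1000):
    """Derivative-free direct search on a box: poll the 2n coordinate
    directions, keep only feasible trial points, move on the first
    improving point, and halve the step after an unsuccessful poll.
    Deterministic sketch only."""
    x = np.clip(np.asarray(x0, float), lo, hi)
    fx = f(x)
    n = len(x)
    while step > tol and max_iter > 0:
        max_iter -= 1
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # polling set {±e_i}
            trial = x + step * d
            if np.all(trial >= lo) and np.all(trial <= hi):
                ft = f(trial)
                if ft < fx:                            # successful poll
                    x, fx, improved = trial, ft, True
                    break
        if not improved:                               # unsuccessful: shrink
            step *= 0.5
    return x
```

For a smooth objective whose unconstrained minimizer violates the bounds, the method stalls only when the step size has shrunk below the tolerance, leaving the iterate near the constrained minimizer.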

A Limited-Memory Quasi-Newton Algorithm for Bound-Constrained Nonsmooth Optimization

We consider the problem of minimizing a continuous function that may be nonsmooth and nonconvex, subject to bound constraints. We propose an algorithm that uses the L-BFGS quasi-Newton approximation of the problem’s curvature together with a variant of the weak Wolfe line search. The key ingredient of the method is an active-set selection strategy that … Read more

Unified Approach for Solving Box-Constrained Models with Continuous or Discrete Variables by Nonmonotone Derivative-Free Optimization Techniques

This paper describes a unified approach for solving Box-Constrained Optimization Problems (BCOP) in Euclidean spaces. The variables may be either continuous or discrete, in which case they range over a grid of isolated, regularly spaced points. For the continuous case, convergence is shown under standard assumptions; for the discrete case, slight modifications ensure that the … Read more