Improved Front Steepest Descent for Multi-objective Optimization

In this paper, we deal with the Front Steepest Descent algorithm for multi-objective optimization. We point out that the algorithm from the literature is often incapable, by design, of spanning large portions of the Pareto front. We thus introduce modifications to the algorithm aimed at overcoming this significant limitation. We prove that the asymptotic … Read more
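For context, a common building block behind front steepest descent schemes is the classical multi-objective steepest descent direction (in the sense of Fliege and Svaiter); the display below shows this generic direction subproblem for objectives \(f_1,\dots,f_m\), not the specific modified algorithm of the paper:

\[
d(x) \in \operatorname*{arg\,min}_{d \in \mathbb{R}^n} \; \max_{i=1,\dots,m} \nabla f_i(x)^\top d + \tfrac{1}{2}\|d\|^2,
\]

with \(x\) Pareto-critical exactly when \(d(x) = 0\); a front method maintains a whole set of such points and applies descent steps to each of them.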

A Newton-CG based augmented Lagrangian method for finding a second-order stationary point of nonconvex equality constrained optimization with complexity guarantees

In this paper we consider finding a second-order stationary point (SOSP) of nonconvex equality constrained optimization when a nearly feasible point is known. In particular, we first propose a new Newton-CG method for finding an approximate SOSP of unconstrained optimization and show that it enjoys a substantially better complexity than the Newton-CG method [56]. We … Read more
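As a point of reference, such methods build on the generic augmented Lagrangian framework for \(\min_x f(x)\) subject to \(c(x) = 0\); what distinguishes the paper's method from the textbook loop sketched below is the complexity-guaranteed Newton-CG inner solver:

\[
\mathcal{L}_\rho(x,\lambda) = f(x) + \lambda^\top c(x) + \tfrac{\rho}{2}\|c(x)\|^2, \qquad
x_{k+1} \approx \operatorname*{arg\,min}_x \mathcal{L}_{\rho_k}(x,\lambda_k), \qquad
\lambda_{k+1} = \lambda_k + \rho_k\, c(x_{k+1}),
\]

where each inner minimization is carried out by a Newton-CG method that returns an approximate second-order stationary point of the augmented Lagrangian.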

Strengthening SONC Relaxations with Constraints Derived from Variable Bounds

Nonnegativity certificates can be used to obtain tight dual bounds for polynomial optimization problems. Hierarchies of certificate-based relaxations ensure convergence to the global optimum, but higher levels of such hierarchies can become very computationally expensive, and the well-known sums of squares hierarchies scale poorly with the degree of the polynomials. This has motivated research into … Read more
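To fix ideas, the unconstrained version of such a certificate-based bound reads

\[
\inf_{x \in \mathbb{R}^n} p(x) \;\ge\; \sup\{\, \gamma \in \mathbb{R} \;:\; p - \gamma \in \mathrm{SONC} \,\},
\]

since membership of \(p - \gamma\) in the cone of sums of nonnegative circuit (SONC) polynomials certifies \(p(x) \ge \gamma\) for all \(x\). For a bounded feasible set \(l \le x \le u\), valid inequalities such as \((x_i - l_i)(u_i - x_i) \ge 0\) can be incorporated into the certificate to tighten the relaxation; this is only a generic illustration of the idea, and the paper's precise strengthened construction may differ.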

A Levenberg-Marquardt Method for Nonsmooth Regularized Least Squares

We develop a Levenberg-Marquardt method for minimizing the sum of a smooth nonlinear least-squares term \(f(x) = \frac{1}{2} \|F(x)\|_2^2\) and a nonsmooth term \(h\). Both \(f\) and \(h\) may be nonconvex. Steps are computed by minimizing the sum of a regularized linear least-squares model and a model of \(h\) using a first-order method such as … Read more
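Below is a minimal sketch of one such step under the illustrative assumption \(h(x) = \lambda\|x\|_1\) (so that its proximal operator is soft-thresholding), with a plain proximal-gradient inner solver; the names, parameters, and stopping rules are placeholders rather than the paper's actual algorithm:

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t*||.||_1 (illustrative choice of the nonsmooth term h)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lm_prox_step(F, J, x, lam=1e-2, sigma=1.0, inner_iters=50):
    """One illustrative Levenberg-Marquardt step for
       0.5*||F(x)||^2 + lam*||x||_1.
    The step d approximately minimizes the regularized model
       0.5*||F(x) + J(x) d||^2 + 0.5*sigma*||d||^2 + lam*||x + d||_1
    by proximal gradient (a first-order method); the details here are
    assumptions, not the paper's method."""
    Fx, Jx = F(x), J(x)
    L = np.linalg.norm(Jx, 2) ** 2 + sigma      # Lipschitz constant of the smooth part
    d = np.zeros_like(x)
    for _ in range(inner_iters):
        grad = Jx.T @ (Fx + Jx @ d) + sigma * d
        d = soft_threshold(x + d - grad / L, lam / L) - x   # prox-gradient update on x + d
    return x + d

# tiny usage example on a nonlinear least-squares residual
F = lambda x: np.array([x[0] ** 2 + x[1] - 1.0, x[0] - x[1] ** 2])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -2 * x[1]]])
x = np.array([0.5, 0.5])
for _ in range(20):
    x = lm_prox_step(F, J, x)
print(x)
```

The model treats the Jacobian \(J(x)\) as fixed within the step, so each inner iteration only needs matrix-vector products and one prox evaluation.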

A first-order augmented Lagrangian method for constrained minimax optimization

In this paper we study a class of constrained minimax problems. In particular, we propose a first-order augmented Lagrangian method for solving them, whose subproblems turn out to be much simpler structured minimax problems that are suitably solved by a first-order method recently developed by the authors in [26]. Under some suitable assumptions, an … Read more
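Schematically, for a constrained minimax problem

\[
\min_{x \in X} \max_{y \in Y} \; f(x,y) \quad \text{s.t.} \quad c(x) = 0,
\]

an augmented Lagrangian scheme produces subproblems of the form

\[
\min_{x \in X} \max_{y \in Y} \; f(x,y) + \lambda_k^\top c(x) + \tfrac{\rho_k}{2}\|c(x)\|^2,
\qquad \lambda_{k+1} = \lambda_k + \rho_k\, c(x_{k+1}),
\]

which are again minimax problems but without the difficult constraint. This display is only a generic illustration of the structure; the precise problem class and the subproblem solver of [26] are as described in the paper.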

First-order penalty methods for bilevel optimization

In this paper we study a class of unconstrained and constrained bilevel optimization problems in which the lower-level part is a convex optimization problem, while the upper-level part is possibly a nonconvex optimization problem. In particular, we propose penalty methods for solving them, whose subproblems turn out to be structured minimax problems and are … Read more
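One standard way to see how the subproblems acquire a minimax structure is the value-function penalty, shown here only as an illustration (the paper's penalty functions and constraint handling may differ). For a bilevel problem

\[
\min_{x,\, y} \; F(x,y) \quad \text{s.t.} \quad y \in \operatorname*{arg\,min}_{z} \; g(x,z),
\]

penalizing lower-level suboptimality gives

\[
\min_{x,\, y} \; F(x,y) + \rho\bigl(g(x,y) - \min_{z} g(x,z)\bigr)
\;=\; \min_{x,\, y} \, \max_{z} \; F(x,y) + \rho\bigl(g(x,y) - g(x,z)\bigr),
\]

where convexity of \(g(x,\cdot)\) makes the inner problem over \(z\) a tractable concave maximization.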

Inexact reduced gradient methods in nonconvex optimization

This paper proposes and develops new linesearch methods with inexact gradient information for finding stationary points of nonconvex continuously differentiable functions on finite-dimensional spaces. Some abstract convergence results for a broad class of linesearch methods are established. A general scheme for inexact reduced gradient (IRG) methods is proposed, where the errors in the gradient approximation … Read more
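A minimal sketch of a single inexact-gradient line-search step is given below, assuming only that the oracle returns \(\tilde g\) with \(\|\tilde g - \nabla f(x)\| \le \varepsilon\); the Armijo-type acceptance rule and the constants are generic choices, not the specific rules analyzed in the paper:

```python
import numpy as np

def irg_step(f, grad_inexact, x, eps, beta=0.5, c1=1e-4, t0=1.0, max_back=50):
    """One inexact-gradient line-search step (illustrative).
    grad_inexact(x, eps) is assumed to return g with ||g - grad f(x)|| <= eps."""
    g = grad_inexact(x, eps)
    d = -g                                    # direction built from the inexact gradient
    t, fx = t0, f(x)
    for _ in range(max_back):
        if f(x + t * d) <= fx - c1 * t * np.dot(g, g):
            return x + t * d                  # sufficient decrease achieved
        t *= beta                             # backtrack
    return x                                  # no acceptable step at this accuracy

# toy usage: a gradient oracle whose error is bounded by eps
f = lambda x: np.sum((x - 1.0) ** 2)
grad_inexact = lambda x, eps: 2.0 * (x - 1.0) + eps * np.random.uniform(-1, 1, x.shape) / np.sqrt(x.size)
x = np.array([3.0, -2.0])
for _ in range(100):
    x = irg_step(f, grad_inexact, x, eps=1e-3)
print(x)   # approaches the minimizer (1, 1)
```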

Superiorization: The asymmetric roles of feasibility-seeking and objective function reduction

The superiorization methodology can be thought of as lying conceptually between feasibility-seeking and constrained minimization. It does not attempt to solve the full-fledged constrained minimization problem composed of the modeling constraints and the chosen objective function. Rather, the task is to find a feasible point which is “superior” (in a well-defined manner) with respect to … Read more
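The asymmetry can be made concrete with a minimal generic sketch: feasibility-seeking is carried by the projection sweep, while the objective \(\varphi\) only steers the iterates through small, summable perturbations. The operators, the step rule \(\beta_k = a\,\gamma^k\), and the toy constraints below are illustrative assumptions, not a specific published superiorized algorithm:

```python
import numpy as np

def superiorize(project_ops, phi, grad_phi, x0, n_iters=200, a=1.0, gamma=0.99):
    """Minimal superiorization sketch: interleave objective-reducing perturbations
    (summable step sizes beta_k = a * gamma**k along a nonascent direction of phi)
    with a feasibility-seeking sweep of cyclic projections onto the constraint sets."""
    x = np.array(x0, dtype=float)
    for k in range(n_iters):
        # perturbation phase: small step that does not increase phi
        g = grad_phi(x)
        if np.linalg.norm(g) > 0:
            d = -g / np.linalg.norm(g)          # nonascent direction for phi
            x_try = x + a * gamma**k * d
            if phi(x_try) <= phi(x):
                x = x_try
        # feasibility-seeking phase: one sweep of projections
        for P in project_ops:
            x = P(x)
    return x

# usage: feasibility set = box [0,1]^2 intersected with the half-space x1 + x2 >= 1,
# seeking a feasible point that is "superior" with respect to phi(x) = ||x||^2
P_box = lambda x: np.clip(x, 0.0, 1.0)
P_half = lambda x: x if x.sum() >= 1.0 else x + (1.0 - x.sum()) / 2.0 * np.ones(2)
phi = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
print(superiorize([P_box, P_half], phi, grad, [2.0, 2.0]))   # close to (0.5, 0.5)
```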

New subspace method for unconstrained derivative-free optimization

This paper defines an efficient subspace method, called SSDFO, for unconstrained derivative-free optimization problems in which the gradient of the objective function is Lipschitz continuous but only exact function values are available. SSDFO employs line searches along directions constructed on the basis of quadratic models. These approximate the objective function in a subspace spanned by some … Read more
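Generically, a subspace model-based scheme of this kind combines a quadratic model with a low-dimensional search space; the template below is an assumption about the overall structure, since the abstract's description of how the subspace is spanned is truncated:

\[
q_k(x_k + s) = f(x_k) + g_k^\top s + \tfrac{1}{2}\, s^\top B_k s, \qquad
d_k \in \operatorname*{arg\,min}_{s \in \mathcal{S}_k} q_k(x_k + s), \qquad
\mathcal{S}_k = \operatorname{span}\{s_{k-1}, s_{k-2}, \dots\},
\]

with \(g_k\) and \(B_k\) estimated from function values only and a derivative-free line search performed along \(d_k\).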

Globally linearly convergent nonlinear conjugate gradients without Wolfe line search

This paper introduces a new nonlinear conjugate gradient (CG) method using an efficient gradient-free line search. Unless function values diverge to $-\infty$, global convergence to a stationary point is proved for continuously differentiable objective functions with Lipschitz continuous gradient, and global linear convergence is established if this stationary point is a strong local minimizer. The $n$-iterations … Read more
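For contrast with the Wolfe-based line searches that classical CG convergence theory requires, the sketch below pairs a standard Polak-Ribière(+) direction update with a backtracking search that evaluates only function values along the line; this is an illustrative baseline, not the paper's line search or its convergence argument:

```python
import numpy as np

def cg_armijo(f, grad, x0, iters=100, c1=1e-4, beta=0.5, t0=1.0):
    """Illustrative nonlinear CG (Polak-Ribiere+) with a backtracking line search
    that uses only f-values at trial points; the gradient is evaluated once per
    iteration at the accepted point."""
    x = np.array(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        t, fx = t0, f(x)
        while f(x + t * d) > fx + c1 * t * float(g @ d) and t > 1e-12:
            t *= beta                                   # backtrack on f-values only
        x_new = x + t * d
        g_new = grad(x_new)
        beta_pr = max(0.0, float(g_new @ (g_new - g)) / max(float(g @ g), 1e-16))
        d = -g_new + beta_pr * d
        if float(g_new @ d) >= 0:                       # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# usage on the Rosenbrock function
f = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2
grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0] ** 2) - 2 * (1 - x[0]),
                           200 * (x[1] - x[0] ** 2)])
print(cg_armijo(f, grad, [-1.2, 1.0], iters=500))
```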