A Levenberg-Marquardt Method for Nonsmooth Regularized Least Squares

We develop a Levenberg-Marquardt method for minimizing the sum of a smooth nonlinear least-squares term \(f(x) = \frac{1}{2} \|F(x)\|_2^2\) and a nonsmooth term \(h\). Both \(f\) and \(h\) may be nonconvex. Steps are computed by minimizing the sum of a regularized linear least-squares model and a model of \(h\) using a first-order method such …
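As a minimal sketch of one such step, assuming \(h(x) = \lambda \|x\|_1\) (whose prox is soft-thresholding) and callables `F` and `J` for the residual and its Jacobian: the regularized linear least-squares model plus a model of \(h\) is minimized over the step \(s\) by a proximal-gradient inner loop. All names and constants here are illustrative, not the authors' implementation.

```python
import numpy as np

def soft_threshold(z, t):
    # prox of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_lm_step(F, J, x, lam=0.1, sigma=1.0, inner_iters=50):
    """One Levenberg-Marquardt-style step for 0.5*||F(x)||^2 + lam*||x||_1:
    minimize 0.5*||F(x) + J(x) s||^2 + 0.5*sigma*||s||^2 + lam*||x + s||_1
    over the step s by proximal gradient."""
    Fx, Jx = F(x), J(x)
    L = np.linalg.norm(Jx, 2) ** 2 + sigma   # Lipschitz constant of the smooth part
    s = np.zeros_like(x)
    for _ in range(inner_iters):
        grad = Jx.T @ (Fx + Jx @ s) + sigma * s
        s = soft_threshold(x + s - grad / L, lam / L) - x
    return x + s
```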

A first-order augmented Lagrangian method for constrained minimax optimization

In this paper we study a class of constrained minimax problems. In particular, we propose a first-order augmented Lagrangian method for solving them, whose subproblems turn out to be much simpler structured minimax problems and are suitably solved by a first-order method recently developed in [26] by the authors. Under some suitable assumptions, …
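A schematic sketch of the pattern described, for \(\min_x \max_y f(x,y)\) subject to \(g(x) \le 0\): an augmented Lagrangian outer loop whose minimax subproblem is handed to a first-order routine. Plain gradient descent-ascent stands in here for the method of [26]; every name below is an assumption.

```python
import numpy as np

def gda(grad_x, grad_y, x, y, steps=200, eta=1e-2):
    # stand-in first-order minimax solver: descent in x, ascent in y
    for _ in range(steps):
        x = x - eta * grad_x(x, y)
        y = y + eta * grad_y(x, y)
    return x, y

def aug_lagrangian_minimax(fx_grad, fy_grad, g, g_jac, x, y, rho=1.0, outer=20):
    lam = np.zeros_like(g(x))
    for _ in range(outer):
        # gradients of the augmented Lagrangian L_rho(x, y, lam)
        def Lx(x, y):
            mult = np.maximum(0.0, lam + rho * g(x))
            return fx_grad(x, y) + g_jac(x).T @ mult
        def Ly(x, y):
            return fy_grad(x, y)
        x, y = gda(Lx, Ly, x, y)
        lam = np.maximum(0.0, lam + rho * g(x))  # multiplier update
        rho *= 2.0                               # increase the penalty
    return x, y, lam
```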

First-order penalty methods for bilevel optimization

In this paper we study a class of unconstrained and constrained bilevel optimization problems in which the lower-level part is a convex optimization problem, while the upper-level part is possibly a nonconvex optimization problem. In particular, we propose penalty methods for solving them, whose subproblems turn out to be structured minimax problems and …
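A hedged sketch of one value-function penalty reformulation of this type: for \(\min_x f(x, y)\) subject to \(y \in \arg\min_y g(x, y)\) with \(g(x, \cdot)\) convex, penalizing the gap \(g(x, y) - \min_z g(x, z)\) gives the minimax problem \(\min_{x,y} \max_z f(x,y) + \rho\,(g(x,y) - g(x,z))\), which a first-order routine can tackle. The code below is illustrative, not the paper's algorithm.

```python
def penalty_bilevel_step(fx, fy, gx, gy, x, y, z, rho=10.0, eta=1e-2, iters=500):
    """Gradient descent-ascent on Psi(x,y,z) = f(x,y) + rho*(g(x,y) - g(x,z)):
    descent in (x, y), ascent in z (so z tracks argmin_z g(x, z)).
    fx, fy, gx, gy are the partial gradients of f and g."""
    for _ in range(iters):
        x = x - eta * (fx(x, y) + rho * (gx(x, y) - gx(x, z)))
        y = y - eta * (fy(x, y) + rho * gy(x, y))
        z = z - eta * rho * gy(x, z)  # ascent on Psi = descent on g(x, .)
    return x, y, z
```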

Inexact reduced gradient methods in nonconvex optimization

This paper proposes and develops new linesearch methods with inexact gradient information for finding stationary points of nonconvex continuously differentiable functions on finite-dimensional spaces. Some abstract convergence results for a broad class of linesearch methods are established. A general scheme for inexact reduced gradient (IRG) methods is proposed, where the errors in the gradient approximation …
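A minimal sketch of the linesearch-with-inexact-gradients idea: the direction comes from an approximation `approx_grad(x, eps)` with error at most `eps` (a hypothetical oracle), while a standard backtracking test uses exact function values. Constants and the error model are assumptions, not the paper's scheme.

```python
def inexact_gradient_step(f, approx_grad, x, eps, alpha0=1.0, beta=0.5, c=1e-4):
    g = approx_grad(x, eps)            # gradient estimate, error <= eps
    alpha, fx = alpha0, f(x)
    while f(x - alpha * g) > fx - c * alpha * (g @ g):
        alpha *= beta                  # backtrack until Armijo-type decrease
        if alpha < 1e-12:
            break                      # stepsize safeguard
    return x - alpha * g
```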

Superiorization: The asymmetric roles of feasibility-seeking and objective function reduction

The superiorization methodology can be thought of as lying conceptually between feasibility-seeking and constrained minimization. It does not try to solve the full-fledged constrained minimization problem composed of the modeling constraints and the chosen objective function. Rather, the task is to find a feasible point which is “superior” (in a well-defined manner) with respect to …
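A small sketch of the superiorization pattern this describes: interleave objective-reducing perturbations (with summable steps \(\beta_k\)) and a feasibility-seeking operator, here cyclic projections onto halfspaces \(a_i^T x \le b_i\). The concrete choices below are illustrative assumptions.

```python
import numpy as np

def project_halfspace(x, a, b):
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def superiorize(A, b, obj_grad, x, iters=100, beta=1.0, decay=0.9):
    for _ in range(iters):
        g = obj_grad(x)
        if np.linalg.norm(g) > 0:
            x = x - beta * g / np.linalg.norm(g)  # objective-reducing perturbation
        beta *= decay                             # summable step sizes
        for a_i, b_i in zip(A, b):                # feasibility-seeking sweep
            x = project_halfspace(x, a_i, b_i)
    return x
```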

New subspace method for unconstrained derivative-free optimization

This paper defines an efficient subspace method, called SSDFO, for unconstrained derivative-free optimization problems where the gradients of the objective function are Lipschitz continuous but only exact function values are available. SSDFO employs line searches along directions constructed on the basis of quadratic models. These approximate the objective function in a subspace spanned by some …
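A toy sketch of the two ingredients the abstract mentions, a quadratic model built from function values only and a step along its minimizer; SSDFO itself works with such models in evolving subspaces, so this scalar version is purely illustrative.

```python
def quad_min_along(f, x, d, t=1e-2):
    """Fit f(x + s*d) at s in {0, t, -t} with a quadratic and return its
    minimizer s* (falling back to a small step if curvature is nonpositive)."""
    f0, fp, fm = f(x), f(x + t * d), f(x - t * d)
    slope = (fp - fm) / (2 * t)           # central difference
    curv = (fp - 2 * f0 + fm) / (t * t)   # second difference
    if curv <= 0:
        return -t if slope > 0 else t
    return -slope / curv
```

A caller would then take `x + quad_min_along(f, x, d) * d` as the trial point along direction `d`.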

Globally linearly convergent nonlinear conjugate gradients without Wolfe line search

This paper introduces a new nonlinear conjugate gradient (CG) method using an efficient gradient-free line search method. Unless function values diverge to $-\infty$, global convergence to a stationary point is proved for continuously differentiable objective functions with Lipschitz continuous gradient, and global linear convergence if this stationary point is a strong local minimizer. The $n$-iterations …
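A compact sketch of the general shape of such a method: a nonlinear CG iteration whose line search consults function values only, with no Wolfe conditions. Plain backtracking and the Polak-Ribiere+ update stand in here for the paper's line search and CG variant.

```python
import numpy as np

def cg_value_linesearch(f, grad, x, iters=100, alpha0=1.0, beta_ls=0.5):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:
        return x
    d = -g
    for _ in range(iters):
        alpha, fx = alpha0, f(x)
        while f(x + alpha * d) > fx and alpha > 1e-12:
            alpha *= beta_ls                            # value-based backtracking
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-8:
            break
    return x
```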

Second-order Partial Outer Convexification for Switched Dynamical Systems

Mixed-integer optimal control problems arise in many practical applications combining nonlinear, dynamic, and combinatorial features. To cope with the resulting complexity, several approaches have been suggested in the past. Some of them rely on solving a reformulated and relaxed control problem, referred to as partial outer convexification. Inspired by an efficient algorithm for switching time …
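After solving the relaxed (partially outer-convexified) problem, binary controls are commonly recovered by sum-up rounding. A sketch on a uniform time grid, with `alpha[k, i]` the relaxed value of control \(i\) on interval \(k\) (values in \([0,1]\) summing to 1 over \(i\)):

```python
import numpy as np

def sum_up_rounding(alpha, dt):
    K, n = alpha.shape
    omega = np.zeros_like(alpha)
    acc = np.zeros(n)              # accumulated alpha - omega mass per control
    for k in range(K):
        acc += alpha[k] * dt
        i = np.argmax(acc)         # the most "owed" control is switched on
        omega[k, i] = 1.0
        acc[i] -= dt
    return omega
```

Rounding this way keeps the accumulated deviation between relaxed and binary controls bounded by a multiple of the grid width.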

On enhanced KKT optimality conditions for smooth nonlinear optimization

The Fritz John (FJ) and KKT conditions are fundamental tools for characterizing minimizers and form the basis of almost all methods for constrained optimization. Since the seminal works of Fritz John, Karush, Kuhn and Tucker, FJ/KKT conditions have been enhanced by adding extra necessary conditions. Such an extension was initially proposed by Hestenes in the 1970s …
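For orientation, the classical FJ conditions for \(\min f(x)\) subject to \(g_i(x) \le 0\), \(i = 1, \dots, m\), read as follows; the enhancements the abstract refers to impose additional necessary properties on the multipliers beyond this statement.

```latex
% Classical Fritz John conditions at a local minimizer x*:
% there exist multipliers, not all zero, such that
\[
\mu_0 \nabla f(x^\ast) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^\ast) = 0,
\qquad \mu_i \ge 0, \quad \mu_i\, g_i(x^\ast) = 0, \quad i = 1, \dots, m,
\qquad (\mu_0, \mu_1, \dots, \mu_m) \neq 0.
\]
% KKT is the case \mu_0 = 1, valid under a constraint qualification.
```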

CDOpt: A Python Package for a Class of Riemannian Optimization

Optimization over the embedded submanifold defined by constraints $c(x) = 0$ has attracted much interest over the past few decades due to its wide applications in various areas, including computer vision, signal processing, numerical linear algebra, and deep learning. Plenty of related optimization packages have been developed based on Riemannian optimization approaches, which rely on …
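Not CDOpt's API, just a tiny illustration of what "optimization over the submanifold $c(x) = 0$" means, using the unit sphere $c(x) = x^T x - 1$ and projected (Riemannian) gradient descent with a normalization retraction.

```python
import numpy as np

def sphere_riemannian_gd(grad, x, eta=1e-1, iters=500):
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        g = grad(x)
        rg = g - (x @ g) * x            # project onto the tangent space at x
        x = x - eta * rg
        x = x / np.linalg.norm(x)       # retract back onto the sphere
    return x
```

With `grad = lambda x: A @ x` for a symmetric matrix `A` (i.e., minimizing $\frac{1}{2} x^T A x$ on the sphere), this drives $x$ toward an eigenvector of the smallest eigenvalue, stepsize permitting.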