A worst-case complexity analysis for Riemannian non-monotone line-search methods

In this paper we deal with non-monotone line-search methods to minimize a smooth cost function on a Riemannian manifold. In particular, we study the number of iterations necessary for this class of algorithms to obtain ε-approximate stationary points. Specifically, we prove that under a Lipschitz-type regularity condition on the pullbacks of the cost function to … Read more
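
Since the teaser names only the algorithm class, the following is a minimal sketch, assuming a standard Grippo–Lampariello–Lucidi non-monotone Armijo rule combined with the projection retraction on the unit sphere; it illustrates the kind of method being analyzed, not the paper's specific scheme or its complexity bounds.

```python
import numpy as np

def retract(x, v):
    """Project x + v back onto the unit sphere (a common retraction)."""
    y = x + v
    return y / np.linalg.norm(y)

def riemannian_grad(x, egrad):
    """Project the Euclidean gradient onto the tangent space at x."""
    return egrad - (egrad @ x) * x

def nonmonotone_step(f, grad_f, x, memory, c1=1e-4, beta=0.5, t0=1.0, M=5):
    g = riemannian_grad(x, grad_f(x))
    d = -g
    ref = max(memory[-M:])            # non-monotone reference value
    t = t0
    while f(retract(x, t * d)) > ref + c1 * t * (g @ d):
        t *= beta                     # backtrack until the relaxed Armijo test holds
    x_new = retract(x, t * d)
    memory.append(f(x_new))
    return x_new

# Example: minimize the Rayleigh quotient x^T A x on the unit sphere.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2
f = lambda x: x @ A @ x
grad_f = lambda x: 2 * A @ x
x = rng.standard_normal(5); x /= np.linalg.norm(x)
memory = [f(x)]
for _ in range(50):
    x = nonmonotone_step(f, grad_f, x, memory)
print("final value vs. smallest eigenvalue:", f(x), np.linalg.eigvalsh(A)[0])
```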

Sequential Quadratic Optimization for Stochastic Optimization with Deterministic Nonlinear Inequality and Equality Constraints

A sequential quadratic optimization algorithm for minimizing an objective function defined by an expectation subject to nonlinear inequality and equality constraints is proposed, analyzed, and tested. The context of interest is when it is tractable to evaluate constraint function and derivative values in each iteration, but it is intractable to evaluate the objective function or … Read more
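
As a rough illustration of that regime, here is a hypothetical single iteration of a generic stochastic SQP method, restricted to equality constraints so the QP subproblem reduces to one KKT solve: the objective gradient is a sampled estimate while constraint values and Jacobians are exact. The names and the toy problem are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def sqp_step(g_est, H, c_val, J):
    """Solve the QP subproblem  min 0.5 d'Hd + g'd  s.t.  Jd + c = 0
    via its KKT system; returns the step d and multiplier estimates y."""
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = np.concatenate([-g_est, -c_val])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# Toy problem: min E[0.5||x - xi||^2], xi ~ N(x_star, I),  s.t.  sum(x) = 1.
rng = np.random.default_rng(1)
x_star = np.array([0.2, 0.5, 0.9])
x = np.zeros(3)
for k in range(200):
    xi = x_star + rng.standard_normal(3)      # one stochastic sample
    g_est = x - xi                            # stochastic gradient estimate
    c_val = np.array([x.sum() - 1.0])         # exact constraint value
    J = np.ones((1, 3))                       # exact constraint Jacobian
    d, y = sqp_step(g_est, np.eye(3), c_val, J)
    x = x + 0.5 / (1 + 0.01 * k) * d          # diminishing step size
print(x)  # approaches the projection of x_star onto {sum(x) = 1}
```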

Model-Based Derivative-Free Optimization Methods and Software

This thesis studies derivative-free optimization (DFO), particularly model-based methods and software. These methods are motivated by optimization problems for which it is impossible or prohibitively expensive to access the first-order information of the objective function and possibly the constraint functions. In particular, this thesis presents PDFO, a package we develop to provide both MATLAB and Python … Read more
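
A minimal usage sketch of the PDFO package named above, based on its documented Python interface (see https://www.pdfo.net for the authoritative signature); PDFO wraps Powell's derivative-free solvers and selects one automatically when no method is specified.

```python
# Interface as documented by the PDFO project; consult https://www.pdfo.net.
from pdfo import pdfo
import numpy as np

def chrosen(x):
    """Chained Rosenbrock, a standard derivative-free test function."""
    return sum((1 - x[:-1])**2 + 100 * (x[1:] - x[:-1]**2)**2)

res = pdfo(chrosen, np.zeros(4))   # solver (e.g. NEWUOA) chosen automatically
print(res.x, res.fun)
```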

A Sequential Quadratic Programming Method for Optimization with Stochastic Objective Functions, Deterministic Inequality Constraints and Robust Subproblems

In this paper, the robust sequential quadratic programming method of Burke and Han (Math Programming, 1989) for constrained optimization is generalized to problems with a stochastic objective function and deterministic equality and inequality constraints. A stochastic line search scheme from Paquette and Scheinberg (SIOPT, 2020) is employed to globalize the steps. We show that in the case … Read more
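
The stochastic line search referenced above can be caricatured as follows: function values are only estimated (here by naive sample averaging), and the Armijo test is applied to those estimates. This sketch omits the accuracy control and step-size recovery that make the cited scheme provably convergent; every name in it is illustrative.

```python
import numpy as np

def estimate_f(x, n_samples, rng):
    # Noisy zeroth-order oracle for f(x) = 0.5||x||^2 (illustrative only).
    return np.mean([0.5 * x @ x + 0.01 * rng.standard_normal()
                    for _ in range(n_samples)])

def stochastic_armijo(x, g_est, rng, t=1.0, c1=1e-4, beta=0.5, n_samples=30):
    d = -g_est
    fx = estimate_f(x, n_samples, rng)
    while estimate_f(x + t * d, n_samples, rng) > fx + c1 * t * (g_est @ d):
        t *= beta
        if t < 1e-8:          # guard: estimates may be too noisy to decrease
            return x, t
    return x + t * d, t

rng = np.random.default_rng(2)
x = np.ones(4)
for _ in range(20):
    g_est = x + 0.05 * rng.standard_normal(4)   # stochastic gradient estimate
    x, t = stochastic_armijo(x, g_est, rng)
print(np.linalg.norm(x))   # should be small, up to the noise floor
```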

A Novel Stepsize for Gradient Descent Method

In this paper, we propose a novel stepsize for the classical gradient descent scheme to solve unconstrained nonlinear optimization problems. We consider convex, smooth objectives whose gradients need not be globally Lipschitz continuous. Our method requires only a locally Lipschitz gradient, yet still achieves the rate $O(\frac{1}{k})$ for $f(x^k)-f_*$. By … Read more
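
The teaser does not reveal the paper's stepsize formula, so the sketch below substitutes a well-known alternative with the same flavor: estimate the local Lipschitz constant from successive gradients, $L_k \approx \|g_k-g_{k-1}\|/\|x_k-x_{k-1}\|$, and step with $t_k = 1/L_k$, which likewise avoids any global Lipschitz assumption.

```python
import numpy as np

def gd_local_lipschitz(grad, x0, iters=100, t0=1e-3):
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - t0 * g_prev                  # small warm-up step
    for _ in range(iters):
        g = grad(x)
        L = np.linalg.norm(g - g_prev) / max(np.linalg.norm(x - x_prev), 1e-16)
        t = 1.0 / max(L, 1e-16)               # step from the local estimate
        x_prev, g_prev = x, g
        x = x - t * g
    return x

# f(x) = x^4 is convex and smooth, but its gradient is not globally Lipschitz.
grad = lambda x: 4 * x**3
print(gd_local_lipschitz(grad, np.array([2.0])))   # tends toward 0
```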

Minimizing the difference of convex and weakly convex functions via bundle method

We consider optimization problems with objective and constraint being the difference of convex and weakly convex functions. This framework covers a vast family of nonsmooth and nonconvex optimization problems, particularly those involving certain classes of composite and nonconvex value functions. We investigate several stationarity conditions and extend the proximal bundle algorithm of [van Ackooij et … Read more
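
As a point of reference for this setup, here is the classical DCA (difference-of-convex algorithm) iteration, a much simpler relative of the proximal bundle method the paper extends: write $f = g - h$ with $g, h$ convex, linearize $h$ at the current point, and minimize the resulting convex model. The toy instance is an assumption for illustration.

```python
import numpy as np

# Toy DC instance: f(x) = 0.5||x - b||^2 - c||x||_1, with
#   g(x) = 0.5||x - b||^2 (convex),  h(x) = c||x||_1 (convex).
b = np.array([1.0, -0.3, 0.0])
c = 0.5

def dca(x, iters=50):
    for _ in range(iters):
        s = c * np.sign(x)    # s_k in the subdifferential of h at x_k
        x = b + s             # argmin_x 0.5||x - b||^2 - <s, x>
    return x                  # DCA guarantees f decreases monotonically

print(dca(np.array([0.2, 0.2, 0.2])))   # converges to a critical point of f
```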

A Newton-CG based augmented Lagrangian method for finding a second-order stationary point of nonconvex equality constrained optimization with complexity guarantees

In this paper we consider finding a second-order stationary point (SOSP) of nonconvex equality constrained optimization when a nearly feasible point is known. In particular, we first propose a new Newton-CG method for finding an approximate SOSP of unconstrained optimization and show that it enjoys a substantially better complexity than the Newton-CG method [56]. … Read more
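
For orientation, the sketch below shows the textbook Newton-CG ingredient the abstract builds on: conjugate gradients applied to the Newton system, terminating early when a direction of negative curvature is detected (which second-order methods then use as an escape direction). It is not the paper's refined variant.

```python
import numpy as np

def newton_cg(H, g, tol=1e-8, max_iter=None):
    """Approximately solve H d = -g by CG, with negative-curvature detection."""
    n = g.size
    max_iter = max_iter or 2 * n
    d = np.zeros(n)
    r = g.copy()                 # residual of H d = -g, starting from d = 0
    p = -r
    for k in range(max_iter):
        curv = p @ H @ p
        if curv <= 1e-12 * (p @ p):
            # negative (or negligible) curvature: return it, or progress so far
            return (p if k == 0 else d), 'negative_curvature'
        alpha = (r @ r) / curv
        d = d + alpha * p
        r_new = r + alpha * (H @ p)
        if np.linalg.norm(r_new) < tol:
            return d, 'converged'
        beta = (r_new @ r_new) / (r @ r)
        p, r = -r_new + beta * p, r_new
    return d, 'max_iter'

H = np.diag([2.0, 1.0, -0.5])    # indefinite Hessian
g = np.array([1.0, 1.0, 1.0])
print(newton_cg(H, g))           # stops with a negative-curvature report
```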

Strengthening SONC Relaxations with Constraints Derived from Variable Bounds

Nonnegativity certificates can be used to obtain tight dual bounds for polynomial optimization problems. Hierarchies of certificate-based relaxations ensure convergence to the global optimum, but higher levels of such hierarchies can become very computationally expensive, and the well-known sums of squares hierarchies scale poorly with the degree of the polynomials. This has motivated research into … Read more
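
SONC certificates rest on circuit polynomials, for which nonnegativity reduces to a closed-form test: assuming the standard criterion, a circuit polynomial $\sum_j c_j x^{a_j} + d\,x^{b}$ with even vertex exponents $a_j$, inner exponent $b = \sum_j \lambda_j a_j$, and $d < 0$ is nonnegative iff $|d| \le \prod_j (c_j/\lambda_j)^{\lambda_j}$ (the circuit number). A worked check on the Motzkin polynomial:

```python
import numpy as np

# Motzkin polynomial: x^4 y^2 + x^2 y^4 - 3 x^2 y^2 + 1.
A = np.array([[4, 2], [2, 4], [0, 0]], dtype=float)  # vertex exponents a_j
c = np.array([1.0, 1.0, 1.0])                        # vertex coefficients
b = np.array([2.0, 2.0])                             # inner exponent
d = -3.0                                             # inner coefficient

# Solve for barycentric weights lam with  A^T lam = b  and  sum(lam) = 1.
M = np.vstack([A.T, np.ones(3)])
lam = np.linalg.lstsq(M, np.append(b, 1.0), rcond=None)[0]
theta = np.prod((c / lam) ** lam)                    # circuit number
print(lam, theta, abs(d) <= theta + 1e-12)           # [1/3 1/3 1/3] 3.0 True
```

The test is tight here: $|d| = 3$ equals the circuit number, so the Motzkin polynomial sits exactly on the boundary of the SONC cone.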

A first-order augmented Lagrangian method for constrained minimax optimization

In this paper we study a class of constrained minimax problems. In particular, we propose a first-order augmented Lagrangian method for solving them, whose subproblems turn out to be much simpler structured minimax problems and are suitably solved by a first-order method recently developed in [26] by the authors. Under some suitable assumptions, … Read more
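
A generic caricature of the approach described above: an augmented Lagrangian outer loop on the constraint, each subproblem being a smoother minimax problem, solved here by plain gradient descent-ascent rather than the tailored first-order method of [26]. The toy saddle-point problem is an assumption for illustration.

```python
# Toy constrained minimax:  min_x max_y f(x, y)  s.t.  c(x) = 0,  with
#   f(x, y) = (x - 1)^2 + x*y - y^2   and   c(x) = x - 0.5.
# Augmented Lagrangian in x:  f(x, y) + lam*c(x) + (rho/2)*c(x)^2.

rho, lam, eta = 10.0, 0.0, 0.05
x, y = 0.0, 0.0
c = lambda z: z - 0.5

for outer in range(20):
    for inner in range(500):                      # GDA on the AL subproblem
        gx = 2 * (x - 1) + y + lam + rho * c(x)   # d/dx of the AL
        gy = x - 2 * y                            # d/dy of f
        x, y = x - eta * gx, y + eta * gy
    lam += rho * c(x)                             # multiplier update
print(x, y, lam)   # approaches the constrained saddle point (0.5, 0.25)
```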

First-order penalty methods for bilevel optimization

In this paper we study a class of unconstrained and constrained bilevel optimization problems in which the lower-level part is a convex optimization problem, while the upper-level part is possibly a nonconvex optimization problem. In particular, we propose penalty methods for solving them, whose subproblems turn out to be structured minimax problems and … Read more
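
The penalty idea can be illustrated on a toy bilevel problem where the lower-level value function $v(x) = \min_y g(x,y)$ is known in closed form, so the lower-level optimality constraint $g(x,y) \le v(x)$ can be penalized directly; the instance and step-size rule below are assumptions, not the paper's method.

```python
# Toy bilevel problem:
#   upper:  min_{x,y} F(x, y) = (x - 1)^2 + (y + 1)^2
#   lower:  y in argmin_y g(x, y) = (y - x)^2   =>   v(x) = 0,  y*(x) = x.
# Penalty reformulation:  min_{x,y} F(x, y) + rho * (g(x, y) - v(x)).
# The bilevel solution is (x, y) = (0, 0); larger rho tightens y ~ x.

def solve_penalized(rho, iters=20000):
    eta = 1.0 / (2 + 4 * rho)    # safe gradient step for this quadratic
    x, y = 0.0, 0.0
    for _ in range(iters):
        gx = 2 * (x - 1) - 2 * rho * (y - x)   # d/dx of penalized objective
        gy = 2 * (y + 1) + 2 * rho * (y - x)   # d/dy of penalized objective
        x, y = x - eta * gx, y - eta * gy
    return x, y

for rho in [1.0, 10.0, 100.0]:
    print(rho, solve_penalized(rho))   # -> (0, 0) as rho grows
```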