Constraint qualifications and strong global convergence properties of an augmented Lagrangian method on Riemannian manifolds

In recent years, augmented Lagrangian methods have been successfully applied to several classes of non-convex optimization problems, inspiring new developments in both theory and practice. In this paper we bring most of these recent developments from nonlinear programming to the context of optimization on Riemannian manifolds, including equality and inequality constraints. Much research has …
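
As a rough illustration of the Euclidean backbone that the paper lifts to manifolds, here is a minimal Powell-Hestenes-Rockafellar augmented Lagrangian loop for inequality constraints. The test problem and penalty schedule are assumptions made for the sketch, and all Riemannian machinery (retractions, Riemannian gradients) is omitted.

```python
import numpy as np

# Illustrative test problem (not from the paper):
# min (x0-2)^2 + (x1-1)^2  subject to  x0 + x1 - 2 <= 0.
f_grad = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
g = lambda x: np.array([x[0] + x[1] - 2.0])    # inequality constraint g(x) <= 0
Jg = lambda x: np.array([[1.0, 1.0]])          # Jacobian of g

x, mu, rho = np.zeros(2), np.zeros(1), 10.0
for outer in range(20):
    # Inner loop: crude gradient descent on the PHR augmented Lagrangian,
    # whose gradient is  grad f + Jg^T max(0, mu + rho * g).
    step = 0.5 / (1.0 + rho)                   # shrink step as rho grows
    for _ in range(500):
        x -= step * (f_grad(x) + Jg(x).T @ np.maximum(0.0, mu + rho * g(x)))
    mu = np.maximum(0.0, mu + rho * g(x))      # first-order multiplier update
    if np.all(g(x) <= 1e-5):                   # near-feasible: stop
        break
    rho *= 2.0                                 # otherwise tighten the penalty

print(x, mu)   # approx [1.5, 0.5] with multiplier approx [1.0]
```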

Polyhedral Newton-min algorithms for complementarity problems

The semismooth Newton method is a very efficient approach for computing a zero of a large class of nonsmooth equations. When the initial iterate is sufficiently close to a regular zero and the function is strongly semismooth, the generated sequence converges quadratically to that zero, while each iteration only requires the solution of a linear system. …
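
For intuition, the Newton-min variant for a linear complementarity problem can be sketched in a few lines: the nonsmooth system min(x, Mx + q) = 0 is attacked with a generalized Jacobian whose rows switch between the identity and M. The instance below is assumed for illustration.

```python
import numpy as np

def newton_min_lcp(M, q, x0, max_iter=50, tol=1e-10):
    """Minimal Newton-min sketch for the LCP: find x >= 0 with Mx + q >= 0
    and x.(Mx + q) = 0, via the nonsmooth equation H(x) = min(x, Mx + q) = 0."""
    n = len(q)
    x = x0.copy()
    for _ in range(max_iter):
        w = M @ x + q
        H = np.minimum(x, w)
        if np.linalg.norm(H, np.inf) < tol:
            break
        # Components where x attains the min behave like the identity,
        # the others like M: one generalized Jacobian of H.
        A = x <= w
        J = np.where(A[:, None], np.eye(n), M)
        x = x + np.linalg.solve(J, -H)          # full semismooth Newton step
    return x

# Tiny illustrative instance (assumed data; M is a P-matrix, so solvable).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, 1.0])
print(newton_min_lcp(M, q, np.ones(2)))   # expect x = [0.5, 0.0]
```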

A worst-case complexity analysis for Riemannian non-monotone line-search methods

In this paper we deal with non-monotone line-search methods to minimize a smooth cost function on a Riemannian manifold. In particular, we study the number of iterations necessary for this class of algorithms to obtain ε-approximate stationary points. Specifically, we prove that under a Lipschitz-type regularity condition on the pullbacks of the cost function to …
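
A minimal Euclidean sketch of a non-monotone Armijo line search (the Riemannian pullback and retraction machinery of the paper is omitted): sufficient decrease is tested against the maximum of the last few function values, so individual steps may increase the cost. The test function and constants are illustrative.

```python
import numpy as np

def nonmonotone_gradient_descent(f, grad, x0, memory=10, sigma=1e-4,
                                 max_iter=20000):
    """Sketch of gradient descent with a non-monotone Armijo test: the
    sufficient-decrease condition compares against the max of the last
    `memory` function values rather than f(x_k)."""
    x = x0.copy()
    history = [f(x)]
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < 1e-8:          # eps-approximate stationarity
            break
        f_ref = max(history[-memory:])        # non-monotone reference value
        t = 1.0
        while f(x - t * g) > f_ref - sigma * t * (g @ g):
            t *= 0.5                          # backtrack
        x = x - t * g
        history.append(f(x))
    return x

# Hypothetical test: the Rosenbrock function; iterates slowly approach [1, 1].
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(nonmonotone_gradient_descent(f, grad, np.array([-1.2, 1.0])))
```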

A Sequential Quadratic Programming Method for Optimization with Stochastic Objective Functions, Deterministic Inequality Constraints and Robust Subproblems

In this paper, the robust sequential quadratic programming method of Burke and Han (Math. Programming, 1989) for constrained optimization is generalized to problems with a stochastic objective function and deterministic equality and inequality constraints. The stochastic line-search scheme of Paquette and Scheinberg (SIOPT, 2020) is employed to globalize the steps. We show that in the case …
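
For a feel of the deterministic core, here is one equality-constrained SQP step computed from the KKT system of the quadratic subproblem; the stochastic objective, inequality constraints, and robustness mechanisms of the paper are all omitted, and the problem data are assumed.

```python
import numpy as np

def sqp_kkt_step(g, H, c, J):
    """One equality-constrained SQP step: minimize the quadratic model
    g.d + 0.5 d.H d subject to the linearized constraints J d + c = 0,
    by solving the KKT system."""
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    rhs = -np.concatenate([g, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # step d and new multiplier estimate

# Hypothetical problem: min x0^2 + x1^2  s.t.  x0 + x1 = 1, from x = (0, 0).
x = np.zeros(2)
d, lam = sqp_kkt_step(g=2*x, H=2*np.eye(2), c=np.array([x[0] + x[1] - 1]),
                      J=np.array([[1.0, 1.0]]))
print(x + d, lam)   # one step solves this QP exactly: x = [0.5, 0.5]
```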

A primal-dual majorization-minimization method for large-scale linear programs

We present a primal-dual majorization-minimization method for solving large-scale linear programs. A smooth barrier augmented Lagrangian (SBAL) function, strictly convex for the dual linear program, is derived. The majorization-minimization approach is naturally introduced to exploit the smoothness and convexity of the SBAL function. Our method only depends on a factorization of the constant matrix …
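
The SBAL function itself is beyond a snippet, but the bare majorization-minimization principle it relies on is easy to sketch: with an L-Lipschitz gradient, the quadratic upper bound at the current iterate is minimized exactly, which reduces MM to a gradient step with stepsize 1/L. Everything below is a generic illustration, not the paper's method.

```python
import numpy as np

def mm_descent(f, grad, L, x0, tol=1e-8, max_iter=1000):
    """Bare majorization-minimization sketch: u(x | xk) = f(xk) +
    grad(xk).(x - xk) + (L/2)||x - xk||^2 majorizes f and touches it at xk,
    so minimizing the surrogate gives x = xk - grad(xk)/L and f decreases
    monotonically."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - g / L    # exact minimizer of the quadratic majorizer
    return x

# Hypothetical smooth test: f(x) = 0.5 x.A x - b.x with A positive definite.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
L = np.linalg.eigvalsh(A)[-1]                 # Lipschitz constant of grad
print(mm_descent(f, grad, L, np.zeros(2)))    # approx A^{-1} b = [0.2, 0.4]
```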

A filter sequential adaptive cubic regularisation algorithm for nonlinear constrained optimization

In this paper, we propose a filter sequential adaptive regularisation algorithm using cubics (ARC) for solving nonlinear equality constrained optimization. As in sequential quadratic programming methods, an ARC subproblem with linearized constraints is solved to obtain a trial step in each iteration. Composite-step and reduced-Hessian methods are employed to tackle the linearized …
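
As a sketch of the filter ingredient (the margins and entries below are illustrative constants, not the paper's): a trial point is accepted when, against every stored pair of constraint violation and objective value, it sufficiently improves at least one of the two measures.

```python
def filter_acceptable(h_trial, f_trial, filt, gamma=1e-5):
    """A trial point with constraint violation h_trial and objective f_trial
    is acceptable to the filter if, against every entry (h, f), it improves
    either the violation or the objective by a small margin."""
    return all(h_trial <= (1 - gamma) * h or f_trial <= f - gamma * h
               for (h, f) in filt)

def filter_add(h_new, f_new, filt):
    """Add an entry and drop the entries it dominates."""
    filt = [(h, f) for (h, f) in filt if h < h_new or f < f_new]
    return filt + [(h_new, f_new)]

filt = [(0.5, 3.0), (0.1, 5.0)]
print(filter_acceptable(0.05, 4.0, filt))   # True: enough progress vs. both
print(filter_acceptable(0.6, 6.0, filt))    # False: dominated by (0.5, 3.0)
print(filter_add(0.05, 4.0, filt))          # (0.1, 5.0) is dominated, dropped
```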

Global convergence and acceleration of projection methods for feasibility problems involving union convex sets

We prove global convergence of classical projection algorithms for feasibility problems involving union convex sets, that is, sets expressible as the union of finitely many closed convex sets. We present a unified strategy for analyzing global convergence by means of studying fixed-point iterations of a set-valued operator that is the union of …
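
A minimal sketch of the projection step these algorithms rely on: the projection onto a union of convex sets is obtained by projecting onto each piece and keeping a nearest candidate. The feasibility instance below (a line versus a union of two balls) is assumed for illustration.

```python
import numpy as np

def project_ball(x, center, r):
    """Euclidean projection onto a closed ball."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + (r / n) * d

def project_union(x, projections):
    """Projection onto a union of convex sets: project onto each piece and
    keep a nearest candidate (the union projection can be set-valued; ties
    are broken arbitrarily here)."""
    candidates = [p(x) for p in projections]
    return min(candidates, key=lambda y: np.linalg.norm(y - x))

# Hypothetical feasibility problem: the x-axis vs. a union of two balls.
project_line = lambda x: np.array([x[0], 0.0])             # onto {y = 0}
union = [lambda x: project_ball(x, np.array([3.0, 2.0]), 1.0),
         lambda x: project_ball(x, np.array([-3.0, 0.5]), 1.0)]

x = np.array([0.0, 0.0])
for _ in range(100):                       # method of alternating projections
    x = project_line(project_union(x, union))
print(x)   # lands in the intersection of the line with the second ball
```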

An Adaptive Trust-Region Method Without Function Evaluations

In this paper we propose an adaptive trust-region method for smooth unconstrained optimization. The update rule for the trust-region radius relies only on gradient evaluations. Assuming that the gradient of the objective function is Lipschitz continuous, we establish worst-case complexity bounds for the number of gradient evaluations required by the proposed method to generate approximate …
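
A hedged sketch of a function-evaluation-free trust-region loop; the acceptance rule below, which compares the model's gradient prediction g + Bs with the gradient actually observed at the trial point, is an illustrative stand-in and not necessarily the paper's update rule.

```python
import numpy as np

def tr_gradient_only(grad, x0, delta=1.0, max_iter=200, tol=1e-8):
    """Trust-region loop that never evaluates the objective: the radius is
    grown when the quadratic model predicts the trial-point gradient well
    and shrunk otherwise."""
    x, B = x0.copy(), np.eye(len(x0))
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        s = -np.linalg.solve(B, g)                 # unconstrained model step
        if np.linalg.norm(s) > delta:              # pull back into the region
            s *= delta / np.linalg.norm(s)
        g_new = grad(x + s)
        agreement = (np.linalg.norm(g_new - (g + B @ s))
                     / max(np.linalg.norm(g), tol))
        if agreement < 0.5:                        # gradients predicted well
            x, g = x + s, g_new
            delta *= 2.0
        else:
            delta *= 0.25                          # poor prediction: shrink
    return x

# Hypothetical quadratic test with a deliberately crude model (B = I).
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
print(tr_gradient_only(grad, np.array([5.0, 5.0])))   # tends to the origin
```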

Two efficient gradient methods with approximately optimal stepsizes based on regularization models for unconstrained optimization

It is widely accepted that the stepsize is of great significance to gradient methods. Two efficient gradient methods with approximately optimal stepsizes, based mainly on regularization models, are proposed for unconstrained optimization. More precisely, if the objective function is not close to a quadratic function on the line segment between the current and latest iterates, …
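
For intuition, the simplest "approximately optimal stepsize" is the Barzilai-Borwein quotient, which fits a scalar quadratic model to the most recent step and gradient change; the paper's regularization-model refinements for the non-quadratic case are omitted, and the test problem is assumed.

```python
import numpy as np

def bb_gradient(grad, x0, alpha0=1.0, tol=1e-8, max_iter=1000):
    """Gradient method with a Barzilai-Borwein stepsize: the curvature along
    the last step is modeled by the scalar quotient s.y / y.y, giving an
    approximately optimal stepsize for a quadratic objective."""
    x = x0.copy()
    g = grad(x)
    alpha = alpha0
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g              # step and gradient change
        if s @ y > 1e-12:                        # positive curvature observed
            alpha = (s @ y) / (y @ y)            # BB2 stepsize
        x, g = x_new, g_new
    return x

# Hypothetical convex quadratic test: the minimizer is A^{-1} b.
A = np.diag([1.0, 25.0])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
print(bb_gradient(grad, np.zeros(2)))   # approx [1.0, 0.04]
```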

A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization

The adaptive regularisation algorithm using cubics (ARC) was initially proposed for unconstrained optimization and has excellent convergence properties and complexity guarantees. In this paper, we extend ARC to nonlinear equality constrained optimization and propose a sequential adaptive regularisation using cubics algorithm inspired by sequential quadratic programming (SQP) methods. In each iteration of our method, the …
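
Constrained machinery aside, the ARC trial step minimizes the cubic-regularized model, whose minimizer satisfies (B + lam*I)s = -g with lam = sigma*||s||. The simple fixed-point iteration below is an illustrative solver, adequate only when B + lam*I stays positive definite; production subproblem solvers are more careful.

```python
import numpy as np

def arc_step(g, B, sigma, iters=50):
    """Sketch of the ARC subproblem: minimize
    g.s + 0.5 s.B s + (sigma/3)||s||^3 via a fixed-point iteration on the
    regularization shift lam = sigma*||s||."""
    lam = 0.0
    s = np.zeros_like(g)
    for _ in range(iters):
        s = np.linalg.solve(B + lam * np.eye(len(g)), -g)
        lam = sigma * np.linalg.norm(s)
    return s

# Hypothetical data for one trial step.
B = np.array([[4.0, 0.0], [0.0, 1.0]])
g = np.array([1.0, 2.0])
s = arc_step(g, B, sigma=1.0)
print(s, np.linalg.norm(s))   # s satisfies (B + ||s|| I) s = -g
```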