An Augmented Lagrangian Approach to Bi-Level Optimization via an Equilibrium Constrained Problem

Optimization problems involving equilibrium constraints capture diverse optimization settings such as bi-level optimization, min-max problems and games, and minimization subject to non-linear constraints. This paper introduces an Augmented Lagrangian approach with Hessian-vector product approximation to address an equilibrium-constrained nonconvex, nonsmooth optimization problem. The underlying model in particular captures various settings of bi-level optimization problems, … Read more
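
A minimal way to see the connection, assuming the equilibrium constraint is written as a stationarity condition G(x,y) = 0 of a lower-level objective g (the notation here is illustrative, not the paper's): the constrained problem and its augmented Lagrangian read

\[
\min_{x,y}\ f(x,y) \quad \text{s.t.}\quad G(x,y) := \nabla_y g(x,y) = 0,
\qquad
\mathcal{L}_\rho(x,y,\lambda) = f(x,y) + \langle \lambda, G(x,y)\rangle + \tfrac{\rho}{2}\,\lVert G(x,y)\rVert^2 ,
\]

so gradients of \(\mathcal{L}_\rho\) involve terms such as \(\nabla^2_{yx} g(x,y)\,\lambda\), which is where Hessian-vector product approximations naturally enter.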

A necessary condition for the guarantee of the superiorization method

We study a method whose principal task is convex feasibility-seeking and which makes secondary efforts to reduce objective function values. This is the well-known superiorization method (SM), where the iterates of an asymptotically convergent iterative feasibility-seeking algorithm are perturbed by objective function nonascent steps. We investigate under what conditions a sequence generated by an SM … Read more
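
As a concrete illustration of the mechanism described above, here is a minimal superiorization sketch; the feasibility-seeking operator `project`, the toy objective, and the geometric step sizes are all illustrative assumptions, not the paper's setting.

```python
import numpy as np

def superiorize(x0, project, grad_f, n_iters=200, beta0=1.0, r=0.9):
    """Minimal superiorization sketch (illustrative, not the paper's algorithm).

    project : one sweep of a feasibility-seeking operator (e.g., projections
              onto the constraint sets); assumed asymptotically convergent.
    grad_f  : gradient (or any nonascent-direction generator) of the objective.
    The perturbation step sizes beta0 * r**k are summable, the usual requirement
    for bounded perturbation resilience.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(n_iters):
        g = grad_f(x)
        norm = np.linalg.norm(g)
        if norm > 0:
            # objective-function nonascent perturbation with summable step size
            x = x - (beta0 * r**k) * g / norm
        # feasibility-seeking step of the basic algorithm
        x = project(x)
    return x

# toy usage: feasibility = intersection of two half-spaces, objective = ||x||^2
if __name__ == "__main__":
    def project(x):
        x = x.copy()
        x[0] = max(x[0], 1.0)   # half-space x_0 >= 1
        x[1] = min(x[1], 3.0)   # half-space x_1 <= 3
        return x

    print(superiorize(np.array([5.0, 5.0]), project, lambda x: 2 * x))
```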

A stochastic Lagrangian-based method for nonconvex optimization with nonlinear constraints

The Augmented Lagrangian Method (ALM) is one of the most common approaches for solving linearly and nonlinearly constrained problems. However, for non-convex objectives, handling non-linear inequality constraints remains challenging. In this paper, we propose a stochastic ALM with backtracking line search that operates on a subset (mini-batch) of randomly selected points for solving … Read more
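
A rough sketch of what such a scheme can look like, under the assumption that the objective is a finite sum over data points and that a single scalar inequality constraint c(x) <= 0 is handled via the standard augmented-Lagrangian construction; every name and default below is illustrative, not the paper's algorithm.

```python
import numpy as np

def stochastic_alm(x0, fi_val, fi_grad, c, c_jac, n_data, rho=10.0,
                   batch=32, outer=50, inner=20, alpha0=1.0, tau=0.5,
                   sigma=1e-4, rng=None):
    """Illustrative mini-batch augmented-Lagrangian sketch (not the paper's method).

    Minimizes (1/N) sum_i f_i(x) subject to c(x) <= 0.  The inequality is folded
    into the augmented Lagrangian via the standard max(0, .) construction; the
    primal step uses a mini-batch gradient with Armijo backtracking evaluated on
    the same mini-batch, and the multiplier is updated after each inner loop.
    """
    rng = np.random.default_rng(rng)
    x, lam = np.asarray(x0, float), 0.0

    def al_val(x, idx):
        p = max(0.0, lam + rho * c(x))
        return np.mean([fi_val(i, x) for i in idx]) + (p**2 - lam**2) / (2 * rho)

    def al_grad(x, idx):
        p = max(0.0, lam + rho * c(x))
        return np.mean([fi_grad(i, x) for i in idx], axis=0) + p * c_jac(x)

    for _ in range(outer):
        for _ in range(inner):
            idx = rng.choice(n_data, size=batch, replace=False)
            g = al_grad(x, idx)
            # Armijo backtracking on the mini-batch model of the augmented Lagrangian
            a, f0 = alpha0, al_val(x, idx)
            while al_val(x - a * g, idx) > f0 - sigma * a * (g @ g) and a > 1e-12:
                a *= tau
            x = x - a * g
        lam = max(0.0, lam + rho * c(x))   # multiplier update for c(x) <= 0
    return x, lam
```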

A class of diagonal quasi-Newton penalty decomposition algorithms for sparse bound-constrained nonconvex optimization

This paper discusses an improved quasi-Newton penalty decomposition algorithm for cardinality- and bound-constrained optimization problems in which the simple bounds on the variables are assumed to be finite. Until an approximate stationary point is found, the algorithm approximates the solutions of a sequence of penalty subproblems by a two-block decomposition scheme. This scheme finds an approximate solution … Read more
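
For intuition only, here is a bare-bones two-block penalty decomposition sketch for min f(x) s.t. ||x||_0 <= s, lower <= x <= upper, in which a projected gradient step stands in for the paper's diagonal quasi-Newton step; all names and constants are illustrative assumptions.

```python
import numpy as np

def penalty_decomposition(f_grad, x0, lower, upper, s, mu0=1.0, growth=2.0,
                          outer=30, inner=50, step=1e-2):
    """Illustrative two-block penalty decomposition sketch (not the paper's algorithm).

    Splits  min f(x)  s.t.  ||x||_0 <= s,  lower <= x <= upper  into an x-block
    carrying the bound constraints and a y-block carrying the cardinality
    constraint, coupled by the quadratic penalty (mu/2)||x - y||^2.
    (Assumes 0 lies within the bounds, so the thresholded y stays feasible.)
    """
    x = np.clip(np.asarray(x0, float), lower, upper)
    y = x.copy()
    mu = mu0
    for _ in range(outer):
        for _ in range(inner):
            # x-block: projected gradient step on f(x) + (mu/2)||x - y||^2
            g = f_grad(x) + mu * (x - y)
            x = np.clip(x - step * g, lower, upper)
            # y-block: closed-form hard thresholding, keep the s largest |x_i|
            y = np.zeros_like(x)
            keep = np.argsort(np.abs(x))[-s:]
            y[keep] = x[keep]
        mu *= growth   # tighten the penalty between outer iterations
    return y
```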

Smoothing l1-exact penalty method for intrinsically constrained Riemannian optimization problems

This paper deals with the Constrained Riemannian Optimization (CRO) problem, which involves minimizing a function subject to equality and inequality constraints on Riemannian manifolds. The study aims to advance optimization theory in the Riemannian setting by presenting and analyzing a penalty-type method for solving CRO problems. The proposed approach is based on techniques that involve … Read more
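
For orientation, with equality constraints h_i and inequality constraints g_j on a manifold \(\mathcal{M}\), an l1-exact penalty and one common smoothing (not necessarily the one used in the paper) look like

\[
P_\rho(x) \;=\; f(x) + \rho\Big(\sum_i \lvert h_i(x)\rvert + \sum_j \max\{0, g_j(x)\}\Big),\qquad x\in\mathcal{M},
\]

with, for example, \(\lvert t\rvert \approx \sqrt{t^2+\varepsilon^2}\) and \(\max\{0,t\} \approx \tfrac12\big(t+\sqrt{t^2+\varepsilon^2}\big)\) for a smoothing parameter \(\varepsilon>0\).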

Sparse Polynomial Matrix Optimization

A polynomial matrix inequality is a statement that a symmetric polynomial matrix is positive semidefinite over a given constraint set. Polynomial matrix optimization concerns minimizing the smallest eigenvalue of a symmetric polynomial matrix subject to a tuple of polynomial matrix inequalities. This work explores the use of sparsity methods in reducing the complexity of sum-of-squares … Read more
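
In symbols (illustrative notation): given symmetric polynomial matrices F, G_1, ..., G_m, the problem is

\[
\min_{x\in K}\ \lambda_{\min}\big(F(x)\big),\qquad
K=\{x\in\mathbb{R}^n : G_1(x)\succeq 0,\dots,G_m(x)\succeq 0\},
\]

and sum-of-squares relaxations bound it from below by the largest \(\lambda\) for which \(F(x)-\lambda I\) admits a positive-semidefiniteness certificate on K; the sparsity methods explored in the paper reduce the size of the resulting semidefinite blocks.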

Decision-focused predictions via pessimistic bilevel optimization: complexity and algorithms

Dealing with uncertainty in optimization parameters is an important and longstanding challenge. Typically, uncertain parameters are first predicted as accurately as possible, and then a deterministic optimization problem is solved using those predictions. However, the decisions produced by this so-called predict-then-optimize procedure can be highly sensitive to the uncertain parameters. In this work, we contribute to recent efforts in producing decision-focused predictions, i.e., to … Read more
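
One way to write the pessimistic version down, purely as a hedged sketch of the generic setup rather than the paper's exact model: if \(\hat c\) is a prediction of the true cost \(c\) and the downstream problem is \(\min_{x\in X}\hat c^{\top}x\), a pessimistic decision-focused prediction minimizes the worst-case regret over the lower-level optimal set,

\[
\min_{\hat c}\ \ \max_{x \,\in\, \arg\min_{x'\in X} \hat c^{\top} x'} \ c^{\top}x \;-\; \min_{x'\in X} c^{\top}x' .
\]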

A Trust-Region Algorithm for Noisy Equality Constrained Optimization

This paper introduces a modified Byrd-Omojokun (BO) trust-region algorithm to address the challenges posed by noisy function and gradient evaluations. The original BO method was designed to solve equality-constrained problems, and it forms the backbone of some interior-point methods for general large-scale constrained optimization. A key strength of the BO method is … Read more
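
For reference, the standard (noise-free) Byrd-Omojokun template splits each trust-region step into a normal step \(v_k\) and a tangential step \(t_k\), roughly

\[
\min_{v}\ \lVert c_k + A_k v\rVert^2 \ \ \text{s.t.}\ \ \lVert v\rVert \le \zeta\Delta_k,
\qquad
\min_{t}\ g_k^{\top}(v_k+t) + \tfrac12 (v_k+t)^{\top} B_k (v_k+t)\ \ \text{s.t.}\ \ A_k t = 0,\ \ \lVert v_k+t\rVert \le \Delta_k,
\]

with \(\zeta\in(0,1)\); how the paper adapts the acceptance tests and radius updates to noisy evaluations is in the part of the abstract truncated above.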

A Line Search Filter Sequential Adaptive Cubic Regularisation Algorithm for Nonlinearly Constrained Optimization

In this paper, a sequential adaptive regularization algorithm using cubics (ARC) is presented to solve nonlinear equality-constrained optimization. It is motivated by the idea of handling constraints in sequential quadratic programming methods. In each iteration, we decompose the new step into the sum of a normal step and a tangential step by using composite … Read more
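
A hedged sketch of the composite-step idea in the ARC setting (illustrative notation, not necessarily the paper's exact model): the trial step is \(s_k = n_k + t_k\), where the normal step \(n_k\) reduces the linearized infeasibility \(\lVert c(x_k) + A_k n\rVert\) and the tangential step \(t_k\) lies in the null space of \(A_k\) and reduces the cubic model

\[
m_k(s) \;=\; f(x_k) + g_k^{\top} s + \tfrac12\, s^{\top} B_k s + \tfrac{\sigma_k}{3}\,\lVert s\rVert^3 .
\]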

The Augmented Factorization Bound for Maximum-Entropy Sampling

The maximum-entropy sampling problem (MESP) aims to select the most informative principal submatrix of a prespecified size from a given covariance matrix. This paper proposes an augmented factorization bound for MESP based on a concave relaxation. By leveraging majorization and Schur-concavity theory, we demonstrate that this new bound dominates the classic factorization bound of Nikolov (2015) and a recent … Read more
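
For context, MESP can be stated as the combinatorial problem

\[
z(C,s) \;=\; \max\big\{\log\det C[S,S] \;:\; S\subseteq\{1,\dots,n\},\ \lvert S\rvert = s\big\},
\]

where \(C[S,S]\) is the principal submatrix of the covariance matrix indexed by S; the factorization-type bounds referred to above come from concave continuous relaxations of this problem built from a factorization \(C = FF^{\top}\).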