An extension of the Reformulation-Linearization Technique to nonlinear optimization

We introduce a novel Reformulation-Perspectification Technique (RPT) to obtain convex approximations of nonconvex continuous optimization problems. RPT consists of two steps: a reformulation step and a perspectification step. The reformulation step generates redundant nonconvex constraints by pairwise multiplication of the existing constraints. The perspectification step then convexifies the nonconvex components by using perspective … Read more
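For context, the reformulation step generalizes the classic RLT construction. For bound constraints $\ell_i \le x_i \le u_i$, written as $x_i - \ell_i \ge 0$ and $u_i - x_i \ge 0$, pairwise multiplication yields redundant quadratic constraints such as

$$(x_i - \ell_i)(x_j - \ell_j) \ge 0 \quad\Longrightarrow\quad x_i x_j - \ell_j x_i - \ell_i x_j + \ell_i \ell_j \ge 0.$$

In classic RLT each product $x_i x_j$ is then replaced by a new variable $X_{ij}$ to obtain linear (McCormick-type) constraints; here the perspectification step instead convexifies the remaining nonconvex components via perspective functions.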

Quantifying uncertainty with ensembles of surrogates for blackbox optimization

This work is in the context of blackbox optimization where the functions defining the problem are expensive to evaluate and where no derivatives are available. A tried and tested technique is to build surrogates of the objective and the constraints in order to conduct the optimization at a cheaper computational cost. This work proposes different … Read more
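One generic way to quantify uncertainty with an ensemble of surrogates is to fit several cheap models on bootstrap resamples of the evaluated points and use the spread of their predictions as a pointwise uncertainty estimate. The sketch below is illustrative only (the paper's surrogate choices are not specified in this excerpt); `blackbox` and the polynomial surrogates are hypothetical stand-ins.

```python
import numpy as np

# A bootstrap ensemble of cheap polynomial surrogates for a 1-D blackbox; the
# spread of the ensemble predictions serves as a pointwise uncertainty estimate.
rng = np.random.default_rng(0)

def blackbox(x):
    return np.sin(3.0 * x) + 0.1 * x**2    # stand-in for an expensive simulation

X = np.linspace(-2.0, 2.0, 12)             # small evaluation budget
y = blackbox(X)

coeffs = []
for _ in range(30):                        # 30 surrogates, each fit on a bootstrap resample
    idx = rng.integers(0, len(X), size=len(X))
    coeffs.append(np.polyfit(X[idx], y[idx], deg=3))

def ensemble_predict(x):
    preds = np.array([np.polyval(cf, x) for cf in coeffs])
    return preds.mean(axis=0), preds.std(axis=0)   # prediction and uncertainty

mean_in, std_in = ensemble_predict(np.array([0.0]))    # inside the sampled region
mean_out, std_out = ensemble_predict(np.array([3.5]))  # extrapolation: larger spread
```

The ensemble standard deviation is small where the blackbox has been sampled and grows outside the sampled region, which is what makes such estimates useful for balancing exploration and exploitation.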

Inexact Sequential Quadratic Optimization for Minimizing a Stochastic Objective Function Subject to Deterministic Nonlinear Equality Constraints

An algorithm is proposed, analyzed, and tested experimentally for solving stochastic optimization problems in which the decision variables are constrained to satisfy equations defined by deterministic, smooth, and nonlinear functions. It is assumed that constraint function and derivative values can be computed, but that only stochastic approximations are available for the objective function and its … Read more
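The core of an SQP iteration in this setting can be sketched as a Newton-KKT solve in which the constraint values and Jacobian are exact but the objective gradient is only a stochastic estimate. The toy problem and damping below are hypothetical illustrations, not the paper's algorithm.

```python
import numpy as np

# One possible stochastic SQP iteration on a toy problem:
# min f(x) = ||x||^2  s.t.  c(x) = x_0 + x_1 - 1 = 0, with solution x* = (0.5, 0.5).
rng = np.random.default_rng(1)

def grad_f_stochastic(x):
    return 2.0 * x + rng.normal(scale=0.01, size=x.shape)  # noisy objective gradient

def c(x):
    return np.array([x[0] + x[1] - 1.0])

def jac_c(x):
    return np.array([[1.0, 1.0]])                          # exact, deterministic Jacobian

x = np.array([2.0, 3.0])
H = np.eye(2)                                              # Hessian approximation
for _ in range(200):
    g, J, cv = grad_f_stochastic(x), jac_c(x), c(x)
    m = len(cv)
    # Newton-KKT system for the step d and multiplier estimate y:
    #   [H  J^T] [d]   [-g]
    #   [J   0 ] [y] = [-c]
    K = np.block([[H, J.T], [J, np.zeros((m, m))]])
    d = np.linalg.solve(K, np.concatenate([-g, -cv]))[:len(x)]
    x = x + 0.5 * d                                        # damped step for noisy gradients
```

Because the Jacobian row of the KKT system is deterministic, constraint violation contracts regardless of the gradient noise, while the iterates settle into a noise-dependent neighborhood of the solution.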

A framework for convex-constrained monotone nonlinear equations and its special cases

This work concerns methods for solving convex-constrained monotone nonlinear equations. We first propose a framework, obtained by combining a safeguard strategy on the search directions with a notion of approximate projections. The global convergence of the framework is established under appropriate assumptions, and some examples of methods which fall into this framework … Read more
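A classical member of this family of methods is the hyperplane-projection scheme of Solodov-Svaiter type, sketched below on a hypothetical strongly monotone linear map over the nonnegative orthant; the specific safeguard and approximate-projection notions of the paper are not reproduced here.

```python
import numpy as np

# Hyperplane-projection method for a monotone equation F(x) = 0 over C = R^n_+.
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite => F is monotone
q = A @ np.array([1.0, 1.0])             # chosen so the solution is x* = (1, 1)

def F(x):
    return A @ x - q

def proj_C(x):
    return np.maximum(x, 0.0)            # projection onto the nonnegative orthant

x = np.array([5.0, 0.0])
sigma, tau = 0.5, 0.5
for _ in range(200):
    d = -F(x)
    if np.linalg.norm(d) < 1e-10:
        break
    # Backtracking search for z = x + alpha*d with <F(z), d> <= -sigma*alpha*||d||^2.
    alpha = 1.0
    while F(x + alpha * d) @ d > -sigma * alpha * (d @ d):
        alpha *= tau
    z = x + alpha * d
    Fz = F(z)
    # Project x onto the hyperplane {u : <F(z), u - z> = 0} separating x from
    # the solution set, then back onto the feasible set C.
    x = proj_C(x - (Fz @ (x - z)) / (Fz @ Fz) * Fz)
```

The projection step only uses function values, which is why methods of this type are popular for large-scale monotone equations where derivatives are unavailable.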

A study of Liu-Storey conjugate gradient methods for vector optimization

This work presents a study of Liu-Storey (LS) nonlinear conjugate gradient (CG) methods to solve vector optimization problems. Three variants of the LS-CG method originally designed to solve single-objective problems are extended to the vector setting. The first algorithm restricts the LS conjugate parameter to be nonnegative and uses a sufficiently accurate line search satisfying … Read more
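In the single-objective case, the LS parameter with the nonnegativity restriction can be sketched as follows on a hypothetical convex quadratic; an exact line search is used here for simplicity (valid for quadratics), standing in for the accurate inexact searches studied in the paper.

```python
import numpy as np

# Liu-Storey CG with the conjugate parameter clipped at zero, illustrated on a
# convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.diag([1.0, 4.0, 9.0])
b = np.ones(3)

def grad(x):
    return A @ x - b

x = np.zeros(3)
g = grad(x)
d = -g
for _ in range(10):
    if np.linalg.norm(g) < 1e-10:
        break
    # Exact line search along d (closed form for quadratics).
    alpha = -(g @ d) / (d @ (A @ d))
    x_new = x + alpha * d
    g_new = grad(x_new)
    y = g_new - g
    # Liu-Storey parameter beta = <g_{k+1}, y_k> / (-<d_k, g_k>), restricted to >= 0.
    beta = max((g_new @ y) / (-(d @ g)), 0.0)
    d = -g_new + beta * d
    x, g = x_new, g_new
```

On a quadratic with exact line searches the LS parameter coincides with the classical CG parameter, so the method terminates in at most $n$ iterations.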

An Efficient Retraction Mapping for the Symplectic Stiefel Manifold

This article introduces a new retraction on the symplectic Stiefel manifold. The most computationally expensive operation in the novel retraction is the inversion of a $2p \times 2p$ matrix, which is much cheaper than the operations required by the retractions available in the literature. Building on the new retraction, we then design a constraint … Read more
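For context (standard definitions, not taken from this abstract): the symplectic Stiefel manifold is

$$\mathrm{Sp}(2p, 2n) = \{X \in \mathbb{R}^{2n \times 2p} : X^{\top} J_{2n} X = J_{2p}\}, \qquad J_{2m} = \begin{bmatrix} 0 & I_m \\ -I_m & 0 \end{bmatrix},$$

and a retraction $\mathcal{R}_X : T_X \mathrm{Sp}(2p, 2n) \to \mathrm{Sp}(2p, 2n)$ maps tangent vectors back to the manifold while satisfying $\mathcal{R}_X(0) = X$ and $\mathrm{D}\mathcal{R}_X(0) = \mathrm{id}$, so that retraction-based optimization methods retain first-order convergence guarantees.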

A Fixed Point Approach with a New Solution Concept for Set-valued Optimization

We present a fixed point approach to find the whole solution set of a set-valued optimization problem through a parametric problem, in which the height of the level set of the objective function is regarded as the parameter. First, the solution concept based on the vector approach is considered in this method. Then, we propose … Read more

A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians

A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. The algorithmic structure of the proposed method is based on a step decomposition strategy that is known in the literature to be widely effective in practice, … Read more
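A generic step-decomposition iteration splits each SQP step into a normal step toward constraint satisfaction and a tangential step that reduces the objective model in the null space of the Jacobian; a least-squares normal step is well defined even when the Jacobian is rank deficient. The toy problem below is a hypothetical illustration, not the paper's algorithm.

```python
import numpy as np

# Step decomposition for min f(x) = ||x||^2  s.t.  c(x) = 0, with a rank-deficient
# Jacobian (the two constraints are scalar multiples of each other).
def f_grad(x):
    return 2.0 * x

def c(x):
    return np.array([x[0] + x[1] - 1.0, 2.0 * (x[0] + x[1] - 1.0)])

def jac_c(x):
    return np.array([[1.0, 1.0], [2.0, 2.0]])   # rank 1

x = np.array([3.0, -1.0])
for _ in range(50):
    J, cv, g = jac_c(x), c(x), f_grad(x)
    # Normal step: least-squares minimizer of ||c + J v|| (handles rank deficiency).
    v = np.linalg.lstsq(J, -cv, rcond=None)[0]
    # Tangential step: minimize the quadratic model (Hessian of f is 2I) over
    # the null space of J, so feasibility gained by v is preserved.
    _, s, Vt = np.linalg.svd(J)
    null = Vt[(s > 1e-10).sum():].T             # orthonormal null-space basis
    w = -0.5 * null @ (null.T @ (g + 2.0 * v)) if null.size else np.zeros_like(v)
    x = x + v + w
```

Because the tangential step stays in the null space of the Jacobian, it cannot undo the feasibility improvement of the normal step, which is the key property that makes the decomposition effective in practice.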

A stochastic first-order trust-region method with inexact restoration for finite-sum minimization

We propose a stochastic first-order trust-region method with inexact function and gradient evaluations for solving finite-sum minimization problems. At each iteration, the function and the gradient are approximated by sampling. The sample size in gradient approximations is smaller than the sample size in function approximations and the latter is determined using a deterministic rule inspired … Read more

Frank-Wolfe and friends: a journey into projection-free first-order optimization methods

Invented some 65 years ago in a seminal paper by Marguerite Straus-Frank and Philip Wolfe, the Frank-Wolfe method has recently enjoyed a remarkable revival, fuelled by the need for fast and reliable first-order optimization methods in Data Science and other relevant application areas. This review tries to explain the success of this approach by illustrating versatility … Read more
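The method's appeal is that each iteration calls only a linear minimization oracle over the feasible set, with no projection. A minimal sketch on the probability simplex (with a hypothetical quadratic objective and the classic step-size schedule):

```python
import numpy as np

# Frank-Wolfe on the probability simplex: min ||x - c||^2 over {x >= 0, sum(x) = 1}.
c = np.array([0.2, 0.3, 0.5])          # target inside the simplex, so x* = c

def grad(x):
    return 2.0 * (x - c)

x = np.array([1.0, 0.0, 0.0])          # start at a vertex
for k in range(2000):
    g = grad(x)
    s = np.zeros_like(x)
    s[np.argmin(g)] = 1.0              # LMO over the simplex: the best vertex
    gamma = 2.0 / (k + 2.0)            # classic step-size schedule, O(1/k) gap
    x = (1.0 - gamma) * x + gamma * s
```

Since every iterate is a convex combination of simplex vertices, feasibility is maintained for free, which is exactly what makes the method attractive when projections onto the feasible set are expensive.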