An algorithm for optimization with disjoint linear constraints and its application for predicting rain

A specialized algorithm for quadratic optimization (QO, traditionally abbreviated QP) with disjoint linear constraints is presented. In the considered class of problems, one subset of the variables is subject to linear equality constraints, while the variables in a disjoint subset are constrained to remain in a convex set. The proposed algorithm exploits this structure by combining steps … Read more

An Average Curvature Accelerated Composite Gradient Method for Nonconvex Smooth Composite Optimization Problems

This paper presents an accelerated composite gradient (ACG) variant, referred to as the AC-ACG method, for solving nonconvex smooth composite minimization problems. In contrast to well-known ACG variants, which are based either on a known Lipschitz constant of the gradient or on a sequence of maximum observed curvatures, the proposed method is based on a sequence of average … Read more
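
As a rough illustration of the average-curvature idea (a much simplified, non-accelerated sketch, not the AC-ACG method itself; the stepsize rule and the safeguard constant below are assumptions), one can drive a plain gradient step with the running average of observed local curvatures:

```python
import numpy as np

def f(x):    return 0.5 * x @ x + np.sin(x).sum()   # smooth, nonconvex test function
def grad(x): return x + np.cos(x)                   # its gradient

def avg_curvature_gd(x, iters=200):
    """Gradient-method sketch driven by the *average* of observed local
    curvatures: the stepsize 1/M uses the running average rather than a
    worst-case Lipschitz constant."""
    curvs = [1.0]                                   # initial curvature guess
    for _ in range(iters):
        # Demo-specific lower clamp (assumption) to keep the step stable.
        M = max(np.mean(curvs), 1.0)
        y = x - grad(x) / M                         # gradient step with 1/M
        den = np.linalg.norm(y - x) ** 2
        if den > 0:
            # Observed local curvature along the step just taken.
            c = 2.0 * abs(f(y) - f(x) - grad(x) @ (y - x)) / den
            curvs.append(c)
        x = y
    return x

x = avg_curvature_gd(np.array([2.0, -3.0]))
print(np.linalg.norm(grad(x)))   # near zero: a stationary point was reached
```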

Tight compact extended relaxations for nonconvex quadratic programming problems with box constraints

Cutting planes from the Boolean Quadric Polytope (BQP) can be used to reduce the optimality gap of the NP-hard nonconvex quadratic program with box constraints (BoxQP). It is known that all cuts of the Chvátal-Gomory closure of the BQP are A-odd cycle inequalities. We obtain a compact extended relaxation of all A-odd cycle inequalities, which … Read more
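
To see why such cuts tighten the relaxation, the sketch below (an illustrative check of the standard BQP triangle inequalities, i.e. the 3-cycle case of the odd-cycle family; not the paper's extended formulation) finds triangle inequalities violated by a fractional point that the plain McCormick bounds accept:

```python
import itertools

def violated_triangles(x, X, eps=1e-9):
    """Check the Boolean Quadric Polytope triangle inequalities for a
    fractional point (x, X); return the violated ones."""
    n = len(x)
    out = []
    for i, j, k in itertools.combinations(range(n), 3):
        lhs = {
            "type1": x[i] + x[j] + x[k] - X[i][j] - X[i][k] - X[j][k] - 1,
            "cut_i": X[i][j] + X[i][k] - x[i] - X[j][k],
            "cut_j": X[i][j] + X[j][k] - x[j] - X[i][k],
            "cut_k": X[i][k] + X[j][k] - x[k] - X[i][j],
        }
        for name, v in lhs.items():
            if v > eps:                 # inequality is lhs <= 0
                out.append((name, (i, j, k), v))
    return out

# The fractional point x = (1/2, 1/2, 1/2), X = 0 satisfies all McCormick
# bounds (X_ij >= x_i + x_j - 1 = 0, X_ij <= min(x_i, x_j) = 1/2) but
# violates the first triangle inequality by 1/2:
x = [0.5, 0.5, 0.5]
X = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print(violated_triangles(x, X))
```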

Simultaneous iterative solutions for the trust-region and minimum eigenvalue subproblem

Since one cannot foresee all possible scenarios, it is desirable to have an efficient trust-region subproblem solver that can deliver any required level of accuracy on demand; that is, the accuracy attainable for a given trust-region subproblem should not depend on the problem instance itself. Current state-of-the-art iterative eigensolvers all fall into the … Read more
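
For context, the trust-region subproblem minimizes a quadratic model over a ball. A dense-matrix sketch of the classical optimality conditions (not the iterative solver discussed here; the easy case is assumed and the bisection tolerance is arbitrary):

```python
import numpy as np

def trs_dense(H, g, delta, tol=1e-10):
    """Solve min g^T s + 0.5 s^T H s  s.t. ||s|| <= delta via the
    optimality conditions (H + lam*I) s = -g, lam >= 0, H + lam*I PSD,
    lam * (delta - ||s||) = 0.  Dense sketch; the easy case is assumed."""
    n = len(g)
    lam_min = np.linalg.eigvalsh(H)[0]
    lo = max(0.0, -lam_min) + 1e-12       # smallest admissible multiplier
    s = np.linalg.solve(H + lo * np.eye(n), -g)
    if np.linalg.norm(s) <= delta:        # interior solution already works
        return s
    hi = lo + 1.0                         # bracket: ||s(lam)|| decreases in lam
    while np.linalg.norm(np.linalg.solve(H + hi * np.eye(n), -g)) > delta:
        hi *= 2.0
    while hi - lo > tol:                  # bisection on the secular equation
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(np.linalg.solve(H + mid * np.eye(n), -g)) > delta:
            lo = mid
        else:
            hi = mid
    return np.linalg.solve(H + hi * np.eye(n), -g)

H = np.array([[-2.0, 0.0], [0.0, 1.0]])   # indefinite Hessian
g = np.array([1.0, 1.0])
s = trs_dense(H, g, delta=1.0)
print(np.linalg.norm(s))                  # solution lies on the boundary
```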

Derivative-Free Superiorization: Principle and Algorithm

The superiorization methodology is intended to work with the input data of constrained minimization problems, that is, a target function and a set of constraints. However, it is based on a way of thinking antipodal to the one underlying constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handling constraints, it adapts feasibility-seeking algorithms to … Read more
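
A minimal sketch of the feasibility-seeking-plus-perturbation pattern (an assumed setup with halfspace projections and random nonascent perturbations, not the paper's exact algorithm):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the halfspace {y : a^T y <= b}."""
    viol = a @ x - b
    return x - (viol / (a @ a)) * a if viol > 0 else x

def df_superiorize(x, f, halfspaces, sweeps=50, beta0=1.0):
    """Derivative-free superiorization sketch: interleave feasibility-seeking
    sweeps of successive halfspace projections with small random perturbations
    that are kept only if they do not increase f (nonascent, no gradients)."""
    rng = np.random.default_rng(0)
    beta = beta0
    for _ in range(sweeps):
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)
        if f(x + beta * d) <= f(x):       # derivative-free nonascent test
            x = x + beta * d
        beta *= 0.9                       # shrinking perturbation sizes
        for a, b in halfspaces:           # feasibility-seeking sweep
            x = project_halfspace(x, a, b)
    return x

# Heuristically reduce ||x||^2 over {x1 + x2 <= 3, x1 >= 0, x2 >= 0}.
hs = [(np.array([1.0, 1.0]), 3.0),
      (np.array([-1.0, 0.0]), 0.0),
      (np.array([0.0, -1.0]), 0.0)]
x = df_superiorize(np.array([5.0, 4.0]), lambda x: x @ x, hs)
print(x)   # a (near-)feasible point with reduced objective value
```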

Representation of the Pareto front for heterogeneous multi-objective optimization

Optimization problems with multiple expensive objectives, i.e. objectives whose function evaluations are time-consuming, are difficult to solve. Finding even one locally optimal solution is already a difficult task. If only one of the objective functions is expensive while the others are cheap, for instance given analytically, this can be used in … Read more

Weak sharpness and finite termination for variational inequalities on Hadamard manifolds

We first introduce the notion of weak sharpness for the solution sets of variational inequality problems (VIP for short) on Hadamard manifolds. We then study the finite convergence property of sequences generated by the inexact proximal point algorithm with different error terms for solving VIP under weak sharpness of the solution set. We also give … Read more

On the asymptotic convergence and acceleration of gradient methods

We consider the asymptotic behavior of a family of gradient methods, which includes the steepest descent and minimal gradient methods as special instances. It is proved that each method in the family will asymptotically zigzag between two directions. Asymptotic convergence results for the objective value, gradient norm, and stepsize are presented as well. To accelerate … Read more
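
The zigzag phenomenon is easy to observe numerically. In the two-dimensional sketch below (an illustration, not the paper's experiment), exact steepest descent on an ill-conditioned quadratic alternates between two fixed gradient directions; in two dimensions the alternation is in fact exact from the first step, which makes the limiting behavior visible immediately:

```python
import numpy as np

# Steepest descent with exact line search on f(x) = 0.5 x^T A x.
# For this quadratic the exact (Cauchy) stepsize is t = (g^T g)/(g^T A g).
A = np.diag([1.0, 10.0])           # ill-conditioned diagonal quadratic
x = np.array([10.0, 1.0])          # start exciting both eigendirections

dirs = []
for _ in range(40):
    g = A @ x                      # gradient of 0.5 x^T A x
    t = (g @ g) / (g @ A @ g)      # exact line-search stepsize
    x = x - t * g
    dirs.append(g / np.linalg.norm(g))

# The normalized gradients alternate between two directions:
# g_k/||g_k|| equals one fixed vector for even k and another for odd k.
even_gap = np.linalg.norm(dirs[-2] - dirs[-4])
odd_gap = np.linalg.norm(dirs[-1] - dirs[-3])
print(even_gap, odd_gap)           # both are essentially zero
```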

Methods for multiobjective bilevel optimization

This paper is on multiobjective bilevel optimization, i.e. on bilevel optimization problems with multiple objectives on the lower or on the upper level, or even on both levels. We give an overview of the major optimality notions used in multiobjective optimization. We provide characterization results for the set of optimal solutions of multiobjective optimization problems … Read more

On the use of polynomial models in multiobjective directional direct search

Polynomial interpolation or regression models are an important tool in Derivative-free Optimization, acting as surrogates of the real function. In this work, we propose the use of these models in the multiobjective framework of directional direct search, namely that of Direct Multisearch. Previously evaluated points are used to build quadratic polynomial models, which are … Read more
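
A generic version of such a surrogate (a plain least-squares quadratic fit to previously evaluated points; the model management in Direct Multisearch itself is more involved) can be sketched as:

```python
import numpy as np

def quadratic_model(points, values):
    """Fit m(x) = c + g^T x + 0.5 x^T H x by least squares to previously
    evaluated points, using the monomial basis 1, x_i, x_i*x_j (i <= j)."""
    pts = np.asarray(points)
    n = pts.shape[1]
    cols = [np.ones(len(pts))]                          # constant term
    cols += [pts[:, i] for i in range(n)]               # linear terms
    # Quadratic terms x_i * x_j for i <= j:
    cols += [pts[:, i] * pts[:, j] for i in range(n) for j in range(i, n)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, np.asarray(values), rcond=None)

    def m(x):
        feats = [1.0] + list(x) + [x[i] * x[j]
                                   for i in range(n) for j in range(i, n)]
        return float(np.dot(coef, feats))
    return m

# Fit to samples of f(x, y) = x^2 + 2 y^2 + x y; since f lies in the model
# space, the least-squares fit reproduces it (up to rounding).
rng = np.random.default_rng(1)
P = rng.uniform(-1, 1, size=(12, 2))
fvals = [p[0] ** 2 + 2 * p[1] ** 2 + p[0] * p[1] for p in P]
m = quadratic_model(P, fvals)
print(m(np.array([0.3, -0.2])))    # true value is 0.09 + 0.08 - 0.06 = 0.11
```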