Computing Tchebychev weight space decomposition for multiobjective discrete optimization problems

Multiobjective discrete optimization (MODO) techniques, including weight space decomposition, have received increasing attention in the last decade. The primary weight space decomposition technique in the literature is defined for the weighted sum utility function, through which sets of weights are assigned to a subset of the nondominated set. Recent work has begun to study the … Read more
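For context, the two scalarizations at play can be written as follows (our notation, with p objectives, weights \lambda in the unit simplex, and a reference point y^* such as the ideal point; this is the textbook form, not necessarily the paper's exact setup):

\[
u_{\mathrm{ws}}(y;\lambda) \;=\; \sum_{i=1}^{p} \lambda_i\, y_i,
\qquad
u_{\mathrm{tch}}(y;\lambda) \;=\; \max_{i=1,\dots,p} \lambda_i \,\bigl| y_i - y_i^{*} \bigr| .
\]

A weight space decomposition then assigns to each nondominated point the set of weights for which that point minimizes the chosen scalarization over the feasible outcomes.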

Computing an enclosure for multiobjective mixed-integer nonconvex optimization problems using piecewise linear relaxations

In this paper, a new method for computing an enclosure of the nondominated set of multiobjective mixed-integer problems without any convexity requirements is presented. In fact, our criterion space method makes use of piecewise linear relaxations in order to bypass the nonconvexity of the original problem. The method chooses adaptively which level of relaxation is … Read more
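As a minimal illustration of the general idea of piecewise linear relaxation (the function, breakpoints, and shifting strategy below are our own hypothetical choices, not the paper's adaptive scheme), one can sandwich a nonconvex univariate term between two piecewise linear functions:

```python
import numpy as np

def pwl_bounds(f, a, b, n_breakpoints=8, n_check=1000):
    """Piecewise linear under/over-estimators of f on [a, b].

    Interpolates f on a uniform grid and shifts the interpolant by a
    numerically estimated maximum interpolation error, so the result
    sandwiches f.  Purely illustrative; the paper's method chooses the
    level of relaxation adaptively.
    """
    xs = np.linspace(a, b, n_breakpoints)
    ys = f(xs)
    # dense sample to estimate the interpolation error
    xc = np.linspace(a, b, n_check)
    err = np.max(np.abs(f(xc) - np.interp(xc, xs, ys)))
    under = lambda x: np.interp(x, xs, ys) - err   # lower bound (up to sampling error)
    over = lambda x: np.interp(x, xs, ys) + err    # upper bound (up to sampling error)
    return under, over

# example: a nonconvex objective term
f = lambda x: x * np.sin(3.0 * x)
under, over = pwl_bounds(f, 0.0, 4.0)
print(under(2.5), f(2.5), over(2.5))
```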

A unified scheme for scalarization in set optimization

In this work, we propose a new scheme for scalarization in set optimization studied with the Kuroiwa set approach. First, we define an abstract scalarizing function possessing properties such as global Lipschitz continuity, sublinearity, cone monotonicity, the cone representation property, the cone interior representation property, and uniform positivity. Next, we use this function to define the so-called … Read more
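One classical functional enjoying most of the listed properties (recalled here only for orientation; it is not necessarily the abstract function constructed in the paper) is the Gerstewitz (Tammer–Weidner) functional associated with a convex cone K and a direction e \in \operatorname{int} K:

\[
\varphi_{e,K}(y) \;=\; \inf\{\, t \in \mathbb{R} \;:\; y \in t\,e - K \,\},
\]

which, under standard assumptions, is globally Lipschitz, sublinear, K-monotone, and represents K through its sublevel sets.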

A Reduced Jacobian Scheme with Full Convergence for Multicriteria Optimization

In this paper, we propose a variant of the reduced Jacobian method (RJM) introduced by El Maghri and Elboulqe in [JOTA, 179 (2018) 917–943] for multicriteria optimization under linear constraints. The motivation is that, in contrast to RJM, which has only global convergence to Pareto KKT-stationary points in the classical sense of accumulation points, this new variant … Read more
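For orientation, Pareto KKT-stationarity of a feasible point x for minimizing f = (f_1,\dots,f_m) subject to linear constraints A x \le b can be expressed (in generic notation, not necessarily the form used by RJM) as the existence of multipliers \lambda and \mu with

\[
\sum_{i=1}^{m} \lambda_i \nabla f_i(x) + A^{\top}\mu = 0,
\qquad
\lambda \ge 0,\ \ \sum_{i=1}^{m}\lambda_i = 1,
\qquad
\mu \ge 0,\ \ \mu^{\top}(Ax - b) = 0 .
\]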

A Proximal Gradient Method for Multi-objective Optimization Problems Using Bregman Functions

In this paper, a globally convergent proximal gradient method is developed for convex multi-objective optimization problems using Bregman distances. The proposed method requires neither a priori chosen parameters nor ordering information on the objective functions. At every iteration, a subproblem is solved to find a descent direction. This … Read more
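A typical direction-finding subproblem of this kind (a generic sketch; the paper's exact subproblem may differ) replaces the squared Euclidean proximal term by a Bregman distance D_h generated by a convex function h:

\[
y^{k} \in \arg\min_{y}\; \max_{1\le i\le m} \nabla f_i(x^{k})^{\top}(y - x^{k}) \;+\; \frac{1}{\alpha_k}\, D_h(y, x^{k}),
\qquad
D_h(y,x) \;=\; h(y) - h(x) - \nabla h(x)^{\top}(y - x),
\]

and the next iterate is obtained by moving from x^{k} toward y^{k} with a suitable step size.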

Integral Global Optimality Conditions and an Algorithm for Multiobjective Problems

In this work, we propose integral global optimality conditions for multiobjective problems that are not necessarily differentiable. The integral characterizations, already known for single-objective problems, are extended to multiobjective problems via weighted sum and weighted Chebyshev scalarizations. Using the latter scalarization, we propose an algorithm for obtaining an approximation of the weak Pareto front whose effectiveness … Read more
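As a rough illustration of Chebyshev-scalarization-based front approximation (not the paper's integral-condition-based algorithm), the sketch below sweeps a grid of strictly positive weights and minimizes the weighted Chebyshev scalarization over a finite candidate set; each minimizer is weakly Pareto optimal within that set.

```python
import numpy as np

def chebyshev_sweep(F, n_weights=11):
    """Approximate the weak Pareto front of a finite set of objective
    vectors F (rows) via weighted Chebyshev scalarization.

    Illustrative only: the paper works with continuous, not necessarily
    differentiable, problems and integral optimality conditions.
    """
    F = np.asarray(F, dtype=float)
    ideal = F.min(axis=0)                          # reference (ideal) point
    front = set()
    for w1 in np.linspace(0.0, 1.0, n_weights + 2)[1:-1]:  # strictly positive weights
        w = np.array([w1, 1.0 - w1])
        values = np.max(w * (F - ideal), axis=1)   # weighted Chebyshev value
        front.add(int(np.argmin(values)))
    return F[sorted(front)]

# two conflicting objectives evaluated on random candidates
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
F = np.hstack([(X - 0.5) ** 2, (X + 0.5) ** 2])
print(chebyshev_sweep(F))
```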

Duality assertions in vector optimization w.r.t. relatively solid convex cones in real linear spaces

We derive duality assertions for vector optimization problems in real linear spaces based on a scalarization using recent results concerning the concept of relative solidness for convex cones (i.e., convex cones with nonempty intrinsic cores). In our paper, we consider an abstract vector optimization problem with generalized inequality constraints and investigate Lagrangian type duality assertions … Read more
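For readers less familiar with the purely algebraic setting: the intrinsic core (relative algebraic interior) of a convex set A in a real linear space is commonly defined (standard definition, not specific to this paper) by

\[
\operatorname{icr} A \;=\; \bigl\{\, a \in A \;:\; \forall\, x \in \operatorname{aff}(A)\ \ \exists\, \delta > 0 \ \text{such that}\ a + t\,(x - a) \in A \ \ \forall\, t \in [0,\delta] \,\bigr\},
\]

and a convex cone K is called relatively solid when \operatorname{icr} K \neq \emptyset.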

Advancements in the computation of enclosures for multi-objective optimization problems

A central goal for multi-objective optimization problems is to compute their nondominated sets. In most cases, these sets consist of infinitely many points, so computing them exactly is not practical. One way to overcome this problem is to compute an enclosure, a special kind of coverage, of the nondominated set. One … Read more
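A common way to formalize such an enclosure (stated here generically; the paper's precise definition may differ) uses a lower bound set L and an upper bound set U:

\[
N \;\subseteq\; E(L,U) \;=\; \bigl\{\, y \in \mathbb{R}^{m} \;:\; \exists\, l \in L,\ \exists\, u \in U \ \text{with}\ l \le y \le u \,\bigr\},
\]

where N denotes the nondominated set; the algorithmic goal is then to drive the width of the boxes spanned by L and U below a prescribed tolerance.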

Handling of constraints in multiobjective blackbox optimization

This work proposes the integration of two new constraint-handling approaches into the blackbox constrained multiobjective optimization algorithm DMulti-MADS, an extension of the Mesh Adaptive Direct Search (MADS) algorithm for single-objective constrained optimization. The constraints are aggregated into a single constraint violation function, which is used either in a two-phase approach, where the search for a feasible … Read more
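A standard way to aggregate blackbox constraints c_j(x) <= 0 into a single violation measure in the MADS literature is the sum of squared violations; the sketch below (generic, not the DMulti-MADS implementation) illustrates it and why a two-phase method can first drive this measure to zero before optimizing the objectives.

```python
import numpy as np

def constraint_violation(c_values):
    """Aggregate blackbox constraints c_j(x) <= 0 into a single
    violation measure h(x) = sum_j max(0, c_j(x))^2.

    h(x) = 0 exactly when x is feasible; a two-phase method can first
    minimize h alone to reach feasibility, then optimize the objectives.
    (Generic sketch, not the DMulti-MADS implementation.)
    """
    c = np.asarray(c_values, dtype=float)
    return float(np.sum(np.maximum(c, 0.0) ** 2))

print(constraint_violation([-1.0, 0.5, 2.0]))   # only the violated constraints contribute
```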

Convergence rates of the stochastic alternating algorithm for bi-objective optimization

Stochastic alternating algorithms for bi-objective optimization are considered for problems with two conflicting functions for which optimization steps have to be applied separately to each function. Such algorithms apply a certain number of gradient or subgradient descent steps to each individual objective at each iteration. In this paper, we show that stochastic alternating … Read more
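The basic template of such a scheme can be sketched as follows (a minimal illustration with our own hypothetical step sizes and gradient oracles, not the variant analyzed in the paper): at each outer iteration, a fixed number of stochastic gradient steps is applied to each objective in turn.

```python
import numpy as np

def stochastic_alternating(grad1, grad2, x0, steps_per_obj=1, iters=1000, lr=0.01):
    """Alternate stochastic gradient steps on two conflicting objectives.

    grad1, grad2 : callables returning a stochastic gradient of each
                   objective at x (their noise models are up to the user).
    Illustrative sketch only; step sizes and the number of inner steps
    drive the trade-off between the two objectives.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        for _ in range(steps_per_obj):
            x -= lr * grad1(x)      # steps on the first objective
        for _ in range(steps_per_obj):
            x -= lr * grad2(x)      # steps on the second objective
    return x

# toy bi-objective problem: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2
rng = np.random.default_rng(0)
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
g1 = lambda x: 2.0 * (x - a) + 0.01 * rng.standard_normal(2)
g2 = lambda x: 2.0 * (x - b) + 0.01 * rng.standard_normal(2)
print(stochastic_alternating(g1, g2, np.zeros(2)))
```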