SMOP: Stochastic trust region method for multi-objective problems

The problem considered is a multi-objective optimization problem, in which the goal is to optimize a vector-valued function representing several criteria. The aim of this work is to develop an algorithm that uses the trust-region framework with probabilistic model functions and is able to cope with noisy problems, using inaccurate function values and …
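Since the abstract is truncated, here is a minimal single-objective sketch of the kind of trust-region iteration with sampled (inexact) function estimates that such a framework builds on. The toy objective `noisy_f`, the averaging scheme, and the update constants are illustrative assumptions, not the paper's SMOP algorithm.

```python
import numpy as np

def noisy_f(x, rng):
    # toy noisy objective: a quadratic plus Gaussian noise
    return 0.5 * x @ x + 0.01 * rng.standard_normal()

def estimate(x, rng, n_samples=20):
    # average repeated noisy evaluations to get an inexact function value
    return np.mean([noisy_f(x, rng) for _ in range(n_samples)])

def trust_region_step(x, delta, rng, eta=0.1):
    g = x                                              # model gradient (exact for the toy quadratic)
    step = -delta * g / (np.linalg.norm(g) + 1e-12)    # Cauchy-type step of length delta
    pred = -(g @ step + 0.5 * step @ step)             # decrease predicted by the model
    ared = estimate(x, rng) - estimate(x + step, rng)  # decrease estimated from samples
    rho = ared / pred
    if rho >= eta:                                     # accept and possibly enlarge the region
        return x + step, min(2.0 * delta, 1.0)
    return x, 0.5 * delta                              # reject and shrink the region

rng = np.random.default_rng(0)
x, delta = np.array([2.0, -1.5]), 0.5
for _ in range(30):
    x, delta = trust_region_step(x, delta, rng)
print(x)  # should approach the minimizer at the origin
```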

A new constant-rank-type condition related to MFCQ and local error bounds

Constraint qualifications (CQs) are fundamental for understanding the geometry of feasible sets and for ensuring the validity of optimality conditions in nonlinear programming. A known idea is that constant-rank-type CQs allow one to modify the description of the feasible set, by eliminating redundant constraints, so that the Mangasarian-Fromovitz CQ (MFCQ) holds. Traditionally, such modifications, called reductions …
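For readers who want the definition at hand, this is the standard, textbook statement of MFCQ (not the paper's new condition):

```latex
% MFCQ at a feasible point \bar{x} of
%   min f(x)   s.t.   g_i(x) <= 0  (i = 1,...,m),   h_j(x) = 0  (j = 1,...,q):
% (i)  \nabla h_1(\bar{x}), ..., \nabla h_q(\bar{x}) are linearly independent;
% (ii) there exists a direction d with
\begin{align*}
  \nabla h_j(\bar{x})^\top d &= 0,  && j = 1, \dots, q, \\
  \nabla g_i(\bar{x})^\top d &< 0,  && i \in A(\bar{x}) := \{\, i : g_i(\bar{x}) = 0 \,\}.
\end{align*}
```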

Projected proximal gradient trust-region algorithm for nonsmooth optimization

We consider trust-region methods for solving optimization problems where the objective is the sum of a smooth, nonconvex function and a nonsmooth, convex regularizer. We extend the global convergence theory of such methods to include worst-case complexity bounds in the case of unbounded model Hessian growth, and introduce a new, simple nonsmooth trust-region subproblem solver …
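As background for this composite setting, below is a minimal proximal-gradient step for the special case where the regularizer is $\lambda\|\cdot\|_1$, whose prox is the soft-threshold map. The matrix `A`, the fixed step size, and `lam` are toy choices; this is not the paper's subproblem solver.

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t*||.||_1: shrink each entry toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_step(x, grad_f, step, lam):
    # forward gradient step on the smooth part, backward prox on the l1 part
    return soft_threshold(x - step * grad_f(x), step * lam)

# toy smooth part f(x) = 0.5*||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
grad_f = lambda x: A.T @ (A @ x - b)

x = np.zeros(2)
for _ in range(200):
    x = prox_grad_step(x, grad_f, step=0.05, lam=0.1)
print(x)
```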

Smoothing $\ell_1$-exact penalty method for intrinsically constrained Riemannian optimization problems

This paper deals with the Constrained Riemannian Optimization (CRO) problem, which involves minimizing a function subject to equality and inequality constraints on Riemannian manifolds. The study aims to advance optimization theory in the Riemannian setting by presenting and analyzing a penalty-type method for solving CRO problems. The proposed approach is based on techniques that involve …
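For context, the classical $\ell_1$ exact penalty and one common smoothing of the $\max(0,\cdot)$ kernel are sketched below; whether the paper uses this particular smoothing is not visible from the truncated abstract.

```latex
% l1 exact penalty for  min f(x)  s.t.  g_i(x) <= 0,  h_j(x) = 0  (x in a manifold M):
\[
  P_\rho(x) = f(x) + \rho \Bigl( \sum_i \max(0, g_i(x)) + \sum_j |h_j(x)| \Bigr),
\]
% and one common smoothing of the kernel max(0, t), with parameter mu > 0:
\[
  \phi_\mu(t) = \frac{t + \sqrt{t^2 + \mu^2}}{2},
  \qquad 0 \le \phi_\mu(t) - \max(0, t) \le \frac{\mu}{2},
\]
% so that smooth (Riemannian) solvers apply while driving mu -> 0.
```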

New Nonlinear Conjugate Gradient Methods with Guaranteed Descent for Multi-Objective Optimization

In this article, we present several examples of special nonlinear conjugate gradient directions for nonlinear (non-convex) multi-objective optimization. These directions provide a descent direction for all objectives, independently of the line search. As a result, we can provide an algorithm with a simple, Armijo-like backtracking line search and prove convergence to first-order critical points. In contrast to other popular conjugate …
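The following is a generic single-objective Armijo backtracking routine of the kind referred to above; the test function, the constants `c` and `beta`, and the steepest-descent direction are illustrative stand-ins for the paper's conjugate gradient directions.

```python
import numpy as np

def armijo_backtracking(f, grad, x, d, c=1e-4, beta=0.5, t0=1.0):
    # accept the largest step t in {t0, t0*beta, t0*beta^2, ...}
    # satisfying the sufficient-decrease condition
    t, fx, slope = t0, f(x), grad(x) @ d
    assert slope < 0, "d must be a descent direction"
    while f(x + t * d) > fx + c * t * slope:
        t *= beta
    return t

f = lambda x: (x[0] - 1) ** 2 + 10 * x[1] ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * x[1]])

x = np.array([-2.0, 1.0])
for _ in range(50):
    d = -grad(x)                      # steepest descent; a CG method would reuse past d's
    t = armijo_backtracking(f, grad, x, d)
    x = x + t * d
print(x)  # approaches the minimizer (1, 0)
```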

Some Unified Theory for Variance Reduced Prox-Linear Methods

This work considers the nonconvex, nonsmooth problem of minimizing a composite objective of the form $f(g(x))+h(x)$ where the inner mapping $g$ is a smooth finite summation or expectation amenable to variance reduction. In such settings, prox-linear methods can enjoy variance-reduced speed-ups despite the existence of nonsmoothness. We provide a unified convergence theory applicable to a …
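For readers unfamiliar with prox-linear methods, the standard deterministic update they build on is the following; how the paper's variance-reduced estimators enter is only indicated in the comment.

```latex
% Standard (deterministic) prox-linear update for  min_x f(g(x)) + h(x):
% linearize the inner map g at x_k and solve the resulting subproblem
\[
  x_{k+1} \in \operatorname*{arg\,min}_{x} \;
    f\bigl( g(x_k) + \nabla g(x_k)(x - x_k) \bigr) + h(x)
    + \frac{1}{2t} \, \lVert x - x_k \rVert^2 .
\]
% Variance reduction enters by replacing g(x_k) and \nabla g(x_k) with
% estimators when g is a finite sum or an expectation.
```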

Computing Weak Counterfactual Explanations for Linear Optimization: A New Class of Bilevel Models and a Tailored Penalty Alternating Direction Method

Explainable artificial intelligence is one of the most important trends in modern machine-learning research. The idea is to explain the outcome of a model by presenting a change in its input that would alter the outcome significantly. In this paper, we study this question for linear optimization problems as an automated …
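One generic way to pose such a counterfactual question for an LP is sketched below, only to illustrate why bilevel models arise; the paper's "weak" notion and its specific bilevel models may differ.

```latex
% Generic counterfactual question for the LP  x^*(c) \in \arg\min\{ c^T x : A x \ge b \}:
% find the smallest data change \delta such that the induced optimum lands in a
% desired target set X_t:
\[
  \min_{\delta} \; \lVert \delta \rVert
  \quad \text{s.t.} \quad
  x^*(c + \delta) \in X_t ,
\]
% a bilevel problem, since x^*(c + \delta) is itself defined by an inner LP.
```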

Stochastic first-order methods with multi-extrapolated momentum for highly smooth unconstrained optimization

In this paper, we consider an unconstrained stochastic optimization problem in which the objective function exhibits a high order of smoothness. In particular, we propose a stochastic first-order method (SFOM) with multi-extrapolated momentum, in which multiple extrapolations are performed in each iteration, followed by a momentum step based on these extrapolations. We show that our proposed …
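Below is a sketch of the familiar one-extrapolation special case (Nesterov-style momentum with stochastic gradients); the paper's method performs multiple extrapolations per iteration, which is not reproduced here, and all constants are toy choices.

```python
import numpy as np

def sgd_extrapolated_momentum(grad, x0, lr=0.05, mu=0.8, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(iters):
        y = x + mu * v                                      # extrapolation point
        g = grad(y) + 0.01 * rng.standard_normal(x.shape)   # noisy gradient at y
        v = mu * v - lr * g                                 # momentum update
        x = x + v
    return x

grad = lambda x: 2 * x                                      # toy objective ||x||^2
print(sgd_extrapolated_momentum(grad, np.array([3.0, -2.0])))
```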

Randomized Subspace Derivative-Free Optimization with Quadratic Models and Second-Order Convergence

We consider model-based derivative-free optimization (DFO) for large-scale problems, based on iterative minimization in random subspaces. We provide the first worst-case complexity bound for such methods for convergence to approximate second-order critical points, and show that these bounds have significantly improved dimension dependence compared to standard full-space methods, provided that low-accuracy solutions are desired and/or …
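Here is a toy sketch of one iteration of the subspace idea: draw a random low-dimensional subspace, fit a quadratic model to function values sampled there, and take a trust-region-clipped model step. The sampling choices and the regression-based model are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def subspace_dfo_step(f, x, delta, p=2, n_samples=12, seed=0):
    rng = np.random.default_rng(seed)
    n = x.size
    P, _ = np.linalg.qr(rng.standard_normal((n, p)))   # orthonormal basis of a random subspace
    S = delta * rng.standard_normal((n_samples, p))    # sample points in subspace coordinates
    y = np.array([f(x + P @ s) for s in S])
    # least-squares fit of m(s) = c + g^T s + 0.5 s^T H s with H symmetric
    feats = [np.ones(n_samples)]
    feats += [S[:, i] for i in range(p)]
    feats += [0.5 * S[:, i] * S[:, j] * (1 if i == j else 2)
              for i in range(p) for j in range(i, p)]
    coef, *_ = np.linalg.lstsq(np.stack(feats, axis=1), y, rcond=None)
    g = coef[1:1 + p]
    H = np.zeros((p, p))
    k = 1 + p
    for i in range(p):
        for j in range(i, p):
            H[i, j] = H[j, i] = coef[k]
            k += 1
    # model step: Newton if H is positive definite, else steepest descent
    s = -np.linalg.solve(H, g) if np.linalg.eigvalsh(H).min() > 1e-8 else -g
    if np.linalg.norm(s) > delta:                      # trust-region clipping
        s = delta * s / np.linalg.norm(s)
    return x + P @ s

f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2 + x[2] ** 2
x = np.zeros(3)
for k in range(40):
    x = subspace_dfo_step(f, x, delta=0.5, seed=k)
print(x)  # should approach the minimizer (1, -2, 0)
```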

Addressing Estimation Errors through Robust Portfolio Optimization

It is well known that the performance of the classical Markowitz model for portfolio optimization is extremely sensitive to estimation errors in the expected asset returns. Robust optimization mitigates this issue. We focus on ellipsoidal uncertainty sets around a point estimate of the expected asset returns. A key question is the choice of the parameters …
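For the ellipsoidal sets mentioned above, it is a standard fact that the worst case over $U = \{\mu : \|\Sigma^{-1/2}(\mu - \hat{\mu})\| \le \kappa\}$ yields the second-order cone objective $\hat{\mu}^\top x - \kappa\|\Sigma^{1/2}x\|_2$. A minimal sketch with toy data follows, using cvxpy; the data and the value of `kappa` (the radius parameter whose choice the paper studies) are illustrative.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 5
mu_hat = rng.uniform(0.02, 0.10, n)        # point estimate of expected returns
B = rng.standard_normal((n, n))
Sigma = B @ B.T / n + 0.01 * np.eye(n)     # shape matrix of the ellipsoid
L = np.linalg.cholesky(Sigma)              # Sigma = L L^T, so ||L^T x||^2 = x^T Sigma x
kappa = 0.5                                # ellipsoid radius: larger = more conservative

x = cp.Variable(n)
objective = cp.Maximize(mu_hat @ x - kappa * cp.norm(L.T @ x, 2))
constraints = [cp.sum(x) == 1, x >= 0]     # fully invested, long-only
cp.Problem(objective, constraints).solve()
print(np.round(x.value, 3))
```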