Spectral Projected Subgradient Method for Nonsmooth Convex Optimization Problems

We consider constrained optimization problems with a nonsmooth objective function given in the form of a mathematical expectation. The Sample Average Approximation (SAA) is used to estimate the objective function, and a variable sample size strategy is employed. The proposed algorithm combines an SAA subgradient with a spectral coefficient in order to provide a suitable direction which improves … Read more
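A minimal sketch of the ingredients named above (not the authors' exact scheme): a sample-average subgradient step scaled by a spectral, Barzilai–Borwein-type coefficient, followed by projection onto the feasible set. The objective, the box feasible set, and the sample-growth rule below are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def project_box(x, lo=-1.0, hi=1.0):
        # placeholder feasible set: a box
        return np.clip(x, lo, hi)

    def saa_subgradient(x, xi):
        # SAA subgradient of the placeholder objective f(x) = E[|xi^T x - 1|]
        r = xi @ x - 1.0
        return (np.sign(r)[:, None] * xi).mean(axis=0)

    x, x_prev, g_prev = np.zeros(5), None, None
    N = 10                                      # initial sample size
    for k in range(50):
        g = saa_subgradient(x, rng.standard_normal((N, 5)))
        if g_prev is None:
            alpha = 1.0
        else:
            s, y = x - x_prev, g - g_prev       # spectral (BB) coefficient
            alpha = (s @ s) / (s @ y) if abs(s @ y) > 1e-12 else 1.0
            alpha = min(max(alpha, 1e-4), 1e4)  # safeguarding interval
        x_prev, g_prev = x.copy(), g
        x = project_box(x - alpha * g)          # projected subgradient step
        N = int(1.1 * N) + 1                    # variable sample size: grow N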

A filter sequential adaptive cubic regularisation algorithm for nonlinear constrained optimization

In this paper, we propose a filter sequential adaptive regularisation algorithm using cubics (ARC) for solving nonlinear equality constrained optimization. In the spirit of sequential quadratic programming methods, an ARC subproblem with linearized constraints is solved to obtain a trial step at each iteration. Composite step methods and reduced Hessian methods are employed to tackle the linearized … Read more
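The abstract is truncated, but the trial-step subproblem it refers to can be stated generically; the following model, with Hessian approximation $B_k$ and regularisation weight $\sigma_k$, is a standard ARC subproblem with linearized equality constraints and is offered only as an illustration:
\[
\min_{s}\;\; \nabla f(x_k)^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s + \tfrac{\sigma_k}{3}\, \|s\|^3
\quad\text{s.t.}\quad c(x_k) + J(x_k)\, s = 0,
\]
where $c$ collects the equality constraints and $J$ is its Jacobian. In a composite step method, $s$ is split into a normal step that reduces the infeasibility $\|c(x_k) + J(x_k)s\|$ and a tangential step that reduces the model within the null space of $J(x_k)$.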

Condensed interior-point methods: porting reduced-space approaches on GPU hardware

The interior-point method (IPM) has become the workhorse method for nonlinear programming. The performance of an IPM is directly related to the linear solver employed to factorize the Karush–Kuhn–Tucker (KKT) system at each iteration of the algorithm. When solving large-scale nonlinear problems, state-of-the-art IPM solvers rely on efficient sparse linear solvers to solve the KKT … Read more
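To make the condensation idea concrete, in generic notation that need not match the paper's: with diagonal matrices $D_x$ and $D_s$ arising from the barrier terms, the slack and multiplier blocks of the KKT system can be eliminated, leaving a smaller and typically positive definite system in the primal variables alone,
\[
\begin{bmatrix} W + D_x & J^{\top} \\ J & -D_s^{-1} \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}
= -\begin{bmatrix} r_1 \\ r_2 \end{bmatrix}
\;\Longrightarrow\;
\bigl(W + D_x + J^{\top} D_s J\bigr)\,\Delta x = -\bigl(r_1 + J^{\top} D_s\, r_2\bigr).
\]
Systems of this condensed form can be handled by Cholesky-type factorizations, which are considerably easier to port to GPU hardware than general sparse indefinite factorizations.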

A branch-and-prune algorithm for discrete Nash equilibrium problems

We present a branch-and-prune procedure for discrete Nash equilibrium problems with a convex description of each player’s strategy set. The derived pruning criterion does not require player convexity, but only strict convexity of some player’s objective function in a single variable. When it applies, it prunes choices for this variable by asserting that certain constraints must be active. … Read more
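A generic skeleton of such a procedure is sketched below; the prune callback is a placeholder standing in for the paper's strict-convexity criterion, and the node representation is left abstract.

    from collections import deque

    def branch_and_prune(root, prune, branch, is_leaf):
        """Generic branch-and-prune over subproblems (nodes).

        prune(node)   -> True if the node provably contains no equilibrium
        branch(node)  -> list of child subproblems
        is_leaf(node) -> True if the node is a single candidate point
        """
        equilibria, stack = [], deque([root])
        while stack:
            node = stack.pop()
            if prune(node):        # e.g. activity forced by strict convexity
                continue
            if is_leaf(node):
                equilibria.append(node)
            else:
                stack.extend(branch(node))
        return equilibria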

Convergence rates of the stochastic alternating algorithm for bi-objective optimization

Stochastic alternating algorithms for bi-objective optimization are considered for settings in which two conflicting functions must be optimized and the optimization steps have to be applied to each function separately. Such algorithms apply a certain number of gradient or subgradient descent steps to each individual objective at every iteration. In this paper, we show that stochastic alternating … Read more
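A minimal sketch of the iteration pattern just described, with illustrative one-dimensional objectives, step counts, and a diminishing step size (all assumptions, not the paper's setting):

    import numpy as np

    rng = np.random.default_rng(1)

    def sg_f1(x):
        # stochastic gradient of f1(x) = E[(x - xi)^2] with xi ~ N(0, 1)
        return 2.0 * (x - rng.standard_normal())

    def sg_f2(x):
        # stochastic subgradient of f2(x) = E[|x - 1 - xi|] with xi ~ N(0, 1)
        return np.sign(x - 1.0 - rng.standard_normal())

    x, steps_per_block = 0.0, 5
    for k in range(1, 201):
        alpha = 0.5 / np.sqrt(k)            # diminishing step size
        for _ in range(steps_per_block):    # block of steps on f1 ...
            x -= alpha * sg_f1(x)
        for _ in range(steps_per_block):    # ... then a block on f2
            x -= alpha * sg_f2(x)
    # x settles between the two minimizers, tracing a Pareto trade-off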

On the Sparsity of Optimal Linear Decision Rules in Robust Optimization

We consider the widely studied class of production-inventory problems with box uncertainty sets from the seminal work of Ben-Tal et al. (2004) on linear decision rules in robust optimization. We prove that there always exists an optimal linear decision rule for this class of problems in which the number of nonzero parameters in the linear decision … Read more
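For context, in standard notation that need not match the paper's exact model: in a production-inventory problem with uncertain demands $d_1, \dots, d_T$ drawn from a box, a linear decision rule makes the period-$t$ production quantity affine in the demands observed so far,
\[
x_t(d) \;=\; x_t^0 \;+\; \sum_{s=1}^{t-1} x_{t,s}\, d_s ,
\]
and the sparsity in question is the number of nonzero coefficients $x_{t,s}$ in an optimal such rule.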

Dissolving Constraints for Riemannian Optimization

In this paper, we consider optimization problems over closed embedded submanifolds of $\mathbb{R}^n$, which are defined by the constraints $c(x) = 0$. We propose a class of constraint dissolving approaches for these Riemannian optimization problems. In these proposed approaches, solving a Riemannian optimization problem is transformed into the unconstrained minimization of a constraint dissolving function … Read more
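The truncated abstract does not display the constraint dissolving function; a template consistent with this line of work, offered here as an assumption, penalizes constraint violation around a mapping $\mathcal{A}$ that locally restores feasibility:
\[
h(x) \;=\; f(\mathcal{A}(x)) \;+\; \frac{\beta}{2}\,\|c(x)\|^2 ,
\]
with penalty parameter $\beta > 0$ and $\mathcal{A} : \mathbb{R}^n \to \mathbb{R}^n$ chosen so that, in a neighborhood of the manifold $\{x : c(x) = 0\}$, stationary points of $h$ correspond to stationary points of the original Riemannian problem.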

An approximation algorithm for optimal piecewise linear approximations of bounded variable products

We investigate the optimal piecewise linear interpolation of the bivariate product $xy$ over rectangular domains. More precisely, our aim is to minimize the number of simplices in the triangulation underlying the interpolation, while respecting a prescribed approximation error. First, we show how to construct optimal triangulations consisting of up to five simplices. Using these as … Read more
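As a small numerical companion, the code below measures the worst-case error of the piecewise linear interpolant of $xy$ on the unit square split into two triangles along a diagonal; this triangulation is an illustrative choice, not one of the paper's optimal ones.

    import numpy as np

    def tri_interp_error(v, f, n=200):
        # max |f - linear interpolant| over a triangle with vertices v (3x2),
        # sampled on a barycentric grid
        fv = np.array([f(*p) for p in v])
        worst = 0.0
        for i in range(n + 1):
            for j in range(n + 1 - i):
                lam = np.array([i / n, j / n, (n - i - j) / n])
                p = lam @ v               # point with barycentric coords lam
                worst = max(worst, abs(f(*p) - lam @ fv))
        return worst

    f = lambda x, y: x * y
    lower = np.array([[0, 0], [1, 0], [1, 1]], float)
    upper = np.array([[0, 0], [1, 1], [0, 1]], float)
    print(max(tri_interp_error(lower, f), tri_interp_error(upper, f)))
    # prints 0.25: the two-simplex interpolant of xy is off by up to 1/4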

Convergence properties of an Objective-Function-Free Optimization regularization algorithm, including an $\mathcal{O}(\epsilon^{-3/2})$ complexity bound

An adaptive regularization algorithm for unconstrained nonconvex optimization is presented in which the objective function is never evaluated, but only derivatives are used. This algorithm belongs to the class of adaptive regularization methods, for which optimal worst-case complexity results are known for the standard framework where the objective function is evaluated. It is shown in … Read more
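To fix ideas, in generic adaptive-regularization notation (the paper's actual rule may differ): at each iteration the method minimizes a model built from derivatives only, never querying $f$ itself,
\[
s_k \;\in\; \arg\min_{s}\; \nabla f(x_k)^{\top} s + \tfrac{1}{2}\, s^{\top} \nabla^2 f(x_k)\, s + \tfrac{\sigma_k}{3}\, \|s\|^3 ,
\]
with the regularization weight $\sigma_k$ adapted from derivative information alone, since the usual ratio of achieved to predicted function-value decrease is unavailable. The $\mathcal{O}(\epsilon^{-3/2})$ bound of the title concerns the number of iterations needed to drive $\|\nabla f(x_k)\|$ below $\epsilon$.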

Riemannian Stochastic Proximal Gradient Methods for Nonsmooth Optimization over the Stiefel Manifold

Riemannian optimization has attracted considerable attention due to its wide range of applications in practice. Riemannian stochastic first-order algorithms have been studied in the literature to solve large-scale machine learning problems over Riemannian manifolds. However, most existing Riemannian stochastic algorithms require the objective function to be differentiable, and they do not apply to … Read more
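The abstract stops before the algorithms themselves, but two building blocks that any Riemannian first-order method on the Stiefel manifold $\mathrm{St}(n,p) = \{X \in \mathbb{R}^{n \times p} : X^\top X = I_p\}$ relies on can be sketched directly; this is illustrative, not the authors' proximal scheme.

    import numpy as np

    def tangent_project(X, G):
        # project a Euclidean (stochastic) gradient G onto the tangent
        # space of the Stiefel manifold at X (Euclidean metric)
        sym = 0.5 * (X.T @ G + G.T @ X)
        return G - X @ sym

    def qr_retraction(X, V):
        # map the tangent vector V back onto the manifold via QR
        Q, R = np.linalg.qr(X + V)
        return Q * np.sign(np.sign(np.diag(R)) + 0.5)  # fix column signs

    rng = np.random.default_rng(2)
    X, _ = np.linalg.qr(rng.standard_normal((8, 3)))   # a point on St(8, 3)
    G = rng.standard_normal((8, 3))                    # a stochastic gradient
    X_next = qr_retraction(X, -0.1 * tangent_project(X, G))
    print(np.allclose(X_next.T @ X_next, np.eye(3)))   # True: still on St(8, 3)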