Submodular maximization of concave utility functions composed with a set-union operator with applications to maximal covering location problems

We study a family of discrete optimization problems asking for the maximization of the expected value of a concave, strictly increasing, and differentiable function composed with a set-union operator. The expected value is computed with respect to a set of coefficients taking values from a discrete set of scenarios. The function models the utility function … Read more
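As a rough illustration of the kind of objective involved, the following Python sketch evaluates E[phi(sum of scenario coefficients over the union of chosen sets)] for a concave, increasing phi and selects sets greedily; the item coefficients, candidate sets, and the choice phi = sqrt are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_scenarios = 12, 5
coeff = rng.uniform(0.0, 1.0, size=(n_scenarios, n_items))  # scenario coefficients (assumed data)
prob = np.full(n_scenarios, 1.0 / n_scenarios)               # scenario probabilities
candidate_sets = [set(rng.choice(n_items, size=4, replace=False).tolist()) for _ in range(8)]
phi = np.sqrt                                                # concave, strictly increasing utility

def expected_utility(chosen):
    """E[ phi( sum of coefficients over the union of the chosen sets ) ]."""
    union = sorted(set().union(*chosen)) if chosen else []
    totals = coeff[:, union].sum(axis=1) if union else np.zeros(n_scenarios)
    return float(prob @ phi(totals))

def greedy(k):
    """Greedy selection; the composed objective is submodular, so greedy comes with a guarantee."""
    chosen, remaining = [], list(range(len(candidate_sets)))
    for _ in range(k):
        best = max(remaining, key=lambda j: expected_utility(chosen + [candidate_sets[j]]))
        chosen.append(candidate_sets[best])
        remaining.remove(best)
    return chosen, expected_utility(chosen)

print(greedy(k=3)[1])
```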

Using gradient directions to get global convergence of Newton-type methods

The renewed interest in Steepest Descent (SD) methods following the work of Barzilai and Borwein [IMA Journal of Numerical Analysis, 8 (1988)] has driven us to consider a globalization strategy based on SD, which is applicable to any line-search method. In particular, we combine Newton-type directions with scaled SD steps to have suitable descent directions. … Read more
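A minimal sketch of the general idea, under generic assumptions: a Newton-type direction is kept only if it passes a descent test, otherwise a Barzilai–Borwein-scaled steepest descent step is used, followed by Armijo backtracking. The acceptance test, scaling, and line search below are standard textbook choices, not the authors' specific rules.

```python
import numpy as np

def newton_sd_globalized(f, grad, hess, x0, tol=1e-8, max_iter=200):
    x = np.asarray(x0, dtype=float)
    x_prev = g_prev = None
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        # Try the Newton-type direction first.
        try:
            d = np.linalg.solve(hess(x), -g)
        except np.linalg.LinAlgError:
            d = None
        # Keep it only if it is a sufficiently good descent direction.
        if d is None or g @ d > -1e-4 * np.linalg.norm(g) * np.linalg.norm(d):
            # Fallback: steepest descent scaled by a Barzilai-Borwein steplength.
            if x_prev is not None and (g - g_prev) @ (g - g_prev) > 1e-16:
                s, y = x - x_prev, g - g_prev
                d = -(abs(s @ y) / (y @ y)) * g
            else:
                d = -g
        # Armijo backtracking line search.
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_prev, g_prev = x, g
        x = x + t * d
    return x

# Example: a mildly nonconvex test function (hypothetical choice).
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1) - 40 * x[0] * (x[1] - x[0] ** 2),
                           20 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array([[2 - 40 * (x[1] - 3 * x[0] ** 2), -40 * x[0]],
                           [-40 * x[0], 20.0]])
print(newton_sd_globalized(f, grad, hess, [-1.2, 1.0]))
```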

An inexact scalarized proximal algorithm with quasi-distance for convex and quasiconvex multi-objective minimization

In the paper of Rocha et al., J Optim Theory Appl (2016) 171:964–979, the authors introduced a proximal point algorithm with quasi-distances to solve unconstrained convex multi-objective minimization problems. They proved that all accumulation points are efficient solutions of the problem. In this paper we analyze an inexact proximal point algorithm to solve convex … Read more
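The flavor of such an iteration can be sketched as follows; the asymmetric quasi-distance q, the weighted-sum stand-in for the multi-objective problem, and the loose inner tolerance used to mimic inexactness are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Weighted-sum stand-in for a convex multi-objective problem (assumed data).
f1 = lambda x: np.sum((x - 1.0) ** 2)
f2 = lambda x: np.sum(np.abs(x + 0.5))
F = lambda x: 0.6 * f1(x) + 0.4 * f2(x)

def quasi_distance(x, y, a=1.0, b=3.0):
    # Asymmetric: increasing and decreasing a coordinate are penalized
    # differently, so q(x, y) != q(y, x) in general.
    d = x - y
    return a * np.sum(np.maximum(d, 0.0)) + b * np.sum(np.maximum(-d, 0.0))

def inexact_prox_point(x0, lam=1.0, n_iter=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        # x_{k+1} ~ argmin_z  F(z) + (lam/2) * q(z, x_k)^2, solved inexactly
        # (loose tolerances stand in for a formal inexactness criterion).
        model = lambda z: F(z) + 0.5 * lam * quasi_distance(z, x) ** 2
        x = minimize(model, x, method="Nelder-Mead",
                     options={"xatol": 1e-3, "fatol": 1e-3}).x
    return x

print(inexact_prox_point(np.zeros(2)))
```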

On the convergence of augmented Lagrangian strategies for nonlinear programming

Augmented Lagrangian algorithms are very popular and successful methods for solving constrained optimization problems. Recently, the global convergence analysis of these methods has been dramatically improved by using the notion of sequential optimality conditions. Such conditions are optimality conditions that hold independently of the fulfilment of any constraint qualification, and they provide theoretical tools to justify stopping … Read more
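For context, a textbook augmented Lagrangian loop with an AKKT-style (approximate KKT) stopping test might look as follows; the example problem, penalty update rule, and tolerances are illustrative and do not reflect the specific strategies analyzed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Example problem (assumed): minimize f(x) subject to h(x) = 0.
f = lambda x: x[0] ** 2 + 2 * x[1] ** 2
h = lambda x: np.array([x[0] + x[1] - 1.0])
grad_f = lambda x: np.array([2 * x[0], 4 * x[1]])
jac_h = lambda x: np.array([[1.0, 1.0]])

def augmented_lagrangian(x0, rho=1.0, tol=1e-6, max_outer=50):
    x, mu = np.asarray(x0, dtype=float), np.zeros(1)
    prev_infeas = np.inf
    for _ in range(max_outer):
        L = lambda z: f(z) + mu @ h(z) + 0.5 * rho * np.sum(h(z) ** 2)
        x = minimize(L, x, method="BFGS").x           # inner subproblem
        infeas = np.linalg.norm(h(x), np.inf)
        mu = mu + rho * h(x)                          # multiplier update
        # AKKT-style stop: small Lagrangian gradient and small infeasibility.
        kkt_res = np.linalg.norm(grad_f(x) + jac_h(x).T @ mu, np.inf)
        if kkt_res <= tol and infeas <= tol:
            break
        if infeas > 0.5 * prev_infeas:                # slow feasibility progress
            rho *= 10.0
        prev_infeas = infeas
    return x, mu

print(augmented_lagrangian(np.zeros(2)))
```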

A numerical study of transformed mixed-integer optimal control problems

Time transformation is a ubiquitous tool in theoretical sciences, especially in physics. It can also be used to transform switched optimal control problems into control problems with a fixed switching order and purely continuous decisions. This approach is known as enhanced time transformation, time-scaling, or switching time optimization (STO) for mixed-integer optimal control. … Read more
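A small sketch of the time-scaling idea, under simplifying assumptions (a scalar switched system, a fixed mode order, forward-Euler integration, and a quadratic terminal cost): the mode durations become ordinary continuous decision variables.

```python
import numpy as np
from scipy.optimize import minimize

modes = [-1.0, 0.5, -2.0]         # x' = a_i * x in mode i, fixed switching order (assumed)
T, x_init, n_steps = 2.0, 1.0, 50

def terminal_cost(durations):
    # Time transformation: each mode is integrated on a unit interval scaled
    # by its duration, so the durations are ordinary continuous variables.
    x = x_init
    for a, dur in zip(modes, durations):
        h = dur / n_steps
        for _ in range(n_steps):
            x = x + h * a * x     # forward Euler within the current mode
    return (x - 0.2) ** 2         # drive the terminal state toward 0.2

cons = {"type": "eq", "fun": lambda d: np.sum(d) - T}   # durations fill the horizon
res = minimize(terminal_cost, x0=np.full(len(modes), T / len(modes)),
               bounds=[(0.0, T)] * len(modes), constraints=cons, method="SLSQP")
print(res.x, res.fun)
```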

A proximal bundle variant with optimal iteration-complexity for a large range of prox stepsizes

This paper presents a proximal bundle variant, namely, the relaxed proximal bundle (RPB) method, for solving convex nonsmooth composite optimization problems. Like other proximal bundle variants, RPB solves a sequence of prox bundle subproblems whose objective functions are regularized composite cutting-plane models. Moreover, RPB uses a novel condition to decide whether to perform a serious … Read more
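A generic proximal bundle iteration can be sketched as follows; the cutting-plane subproblem is solved here with a derivative-free call rather than a QP, and the serious/null-step test is the usual descent-fraction rule, not RPB's novel condition.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: np.max(np.abs(x)) + 0.5 * np.sum(x ** 2)   # nonsmooth convex example
def subgrad(x):
    g = x.copy()                               # gradient of the smooth part
    i = int(np.argmax(np.abs(x)))
    g[i] += np.sign(x[i]) if x[i] != 0 else 1.0
    return g

def proximal_bundle(x0, lam=1.0, m=0.1, n_iter=30):
    xc = np.asarray(x0, dtype=float)           # prox center = last serious iterate
    bundle = [(xc.copy(), f(xc), subgrad(xc))]
    for _ in range(n_iter):
        # Regularized cutting-plane model of f around the prox center.
        model = lambda z: max(fy + gy @ (z - y) for y, fy, gy in bundle) \
                          + 0.5 / lam * np.sum((z - xc) ** 2)
        x_new = minimize(model, xc, method="Nelder-Mead",
                         options={"xatol": 1e-6, "fatol": 1e-9}).x
        predicted = f(xc) - model(x_new)
        actual = f(xc) - f(x_new)
        if actual >= m * predicted:            # serious step: move the prox center
            xc = x_new
        bundle.append((x_new.copy(), f(x_new), subgrad(x_new)))
    return xc

print(proximal_bundle(np.array([2.0, -1.5])))
```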

Projected-Search Methods for Bound-Constrained Optimization

Projected-search methods for bound-constrained minimization are based on performing a line search along a continuous piecewise-linear path obtained by projecting a search direction onto the feasible region. A potential benefit of a projected-search method is that many changes to the active set can be made at the cost of computing a single search direction. As … Read more
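A minimal sketch of the idea for simple bounds, using a projected gradient direction and a projected Armijo backtracking search; the quasi-Wolfe-type searches developed for projected-search methods are more elaborate than this.

```python
import numpy as np

def projected_search_step(f, grad, x, lo, hi, c=1e-4, max_back=40):
    g = grad(x)
    d = -g                                     # any descent direction would do
    fx, t = f(x), 1.0
    for _ in range(max_back):
        # Piecewise-linear path: project the trial point onto the box.
        x_trial = np.clip(x + t * d, lo, hi)
        # Armijo-type test along the projected path; many bounds can change
        # activity within a single step.
        if f(x_trial) <= fx + c * g @ (x_trial - x):
            return x_trial
        t *= 0.5
    return x

# Example (assumed data): quadratic whose unconstrained minimizer violates the bounds.
target = np.array([2.0, -3.0])
f = lambda x: np.sum((x - target) ** 2)
grad = lambda x: 2 * (x - target)
lo, hi = np.zeros(2), np.ones(2)
x = np.full(2, 0.5)
for _ in range(20):
    x = projected_search_step(f, grad, x, lo, hi)
print(x)   # approaches the projection of (2, -3) onto [0, 1]^2, i.e. (1, 0)
```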

Implicit steepest descent algorithm for optimization with orthogonality constraints

Optimization problems with orthogonality constraints appear widely in applications from science and engineering. We address these types of problems from a numerical point of view. Our new framework combines steepest gradient descent using implicit information with an operator projection in order to construct a feasible sequence of points. In addition, we adopt an adaptive Barzilai–Borwein steplength … Read more
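A rough sketch of a feasible steepest-descent iteration on the set of matrices with orthonormal columns, using a QR-based projection and a Barzilai–Borwein steplength; the implicit-information update of the paper is not reproduced here, and the eigenvalue-type objective is only an example.

```python
import numpy as np

def qr_retraction(Y):
    # Map a full-rank matrix back onto {X : X^T X = I} via a QR factorization.
    Q, R = np.linalg.qr(Y)
    return Q * np.sign(np.diag(R))

def steepest_descent_stiefel(A, X0, n_iter=200):
    # Example objective: minimize -trace(X^T A X) over orthonormal X
    # (its minimizers span a dominant eigenspace of the symmetric matrix A).
    X, alpha = X0, 1e-2
    X_prev = G_prev = None
    for _ in range(n_iter):
        G = -2.0 * A @ X                       # Euclidean gradient
        if X_prev is not None:
            S, Y = X - X_prev, G - G_prev
            denom = abs(np.sum(S * Y))
            alpha = np.sum(S * S) / denom if denom > 1e-16 else alpha
        X_prev, G_prev = X, G
        X = qr_retraction(X - alpha * G)       # keep every iterate feasible
    return X

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2
X = steepest_descent_stiefel(A, qr_retraction(rng.standard_normal((6, 2))))
print(np.round(X.T @ X, 6))                    # ~ identity: orthogonality preserved
```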

Variance Reduction of Stochastic Gradients Without Full Gradient Evaluation

A standard concept for reducing the variance of stochastic gradient approximations is based on full gradient evaluations every now and then. In this paper, an approach is considered that, while approximating a local minimizer of a sum of functions, also generates approximations of the gradient and the function values without relying on full … Read more
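For reference, the standard concept mentioned in the first sentence (an SVRG-style scheme that anchors stochastic updates with an occasional full gradient) can be sketched as follows; the paper's point is precisely to avoid these periodic full gradient evaluations, which this sketch does not do. The least-squares example is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)   # assumed data
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]   # gradient of one least-squares term
full_grad = lambda w: A.T @ (A @ w - b) / n

def svrg(w0, step=1e-2, n_epochs=20, inner=200):
    w = np.asarray(w0, dtype=float)
    for _ in range(n_epochs):
        w_anchor = w.copy()
        g_anchor = full_grad(w_anchor)           # the periodic full gradient evaluation
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient estimate.
            v = grad_i(w, i) - grad_i(w_anchor, i) + g_anchor
            w = w - step * v
    return w

print(np.linalg.norm(full_grad(svrg(np.zeros(d)))))   # residual gradient norm
```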

Complexity iteration analysis for strongly convex multi-objective optimization using a Newton path-following procedure

In this note we consider the iteration complexity of solving strongly convex multi-objective optimization. We discuss the precise meaning of this problem and indicate that it is loosely defined; the most natural notion is to find a set of Pareto optimal points across a grid of scalarized problems. We derive that in most cases, … Read more
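A small sketch of the grid-of-scalarizations notion for two strongly convex quadratic objectives, with each weighted-sum problem solved by a few Newton steps and a crude warm start from the previous weight standing in for a path-following procedure; all problem data are illustrative.

```python
import numpy as np

# Two strongly convex quadratic objectives (assumed data).
Q1, c1 = np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([-2.0, 0.0])
Q2, c2 = np.array([[1.0, 0.0], [0.0, 3.0]]), np.array([0.0, -6.0])
f1 = lambda x: 0.5 * x @ Q1 @ x + c1 @ x
f2 = lambda x: 0.5 * x @ Q2 @ x + c2 @ x

def solve_scalarization(w, x0, newton_steps=5):
    # Minimize w*f1 + (1-w)*f2 by Newton's method (exact in one step for
    # quadratics; several steps kept to show the general pattern).
    x = x0.copy()
    for _ in range(newton_steps):
        g = w * (Q1 @ x + c1) + (1 - w) * (Q2 @ x + c2)
        H = w * Q1 + (1 - w) * Q2
        x = x - np.linalg.solve(H, g)
    return x

pareto_points, x = [], np.zeros(2)
for w in np.linspace(0.05, 0.95, 10):        # grid of scalarized problems
    x = solve_scalarization(w, x)            # warm start from the previous weight
    pareto_points.append((round(f1(x), 4), round(f2(x), 4)))
print(pareto_points)
```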