Adaptive Algorithms for Robust Phase Retrieval

This paper considers the robust phase retrieval problem, which can be cast as a nonsmooth and nonconvex optimization problem. We propose two first-order algorithms with adaptive step sizes: the subgradient algorithm (AdaSubGrad) and the inexact proximal linear algorithm (AdaIPL). Our contribution lies in the novel design of adaptive step sizes based on quantiles of the absolute …
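
Since the abstract is truncated, the exact quantile rule is not shown; below is a minimal sketch of one subgradient step for the robust objective f(x) = (1/m) sum_i |<a_i, x>^2 - b_i|, where the step length is scaled by a quantile of the absolute residuals. The constant c and the Polyak-like normalization are illustrative assumptions, not the paper's formula.

```python
import numpy as np

def robust_pr_subgrad_step(x, A, b, p=0.5, c=0.1):
    """One subgradient step for f(x) = mean_i |(a_i^T x)^2 - b_i|.
    The step size is scaled by the p-quantile of the absolute residuals
    (a hypothetical rule standing in for the paper's adaptive choice)."""
    Ax = A @ x
    r = Ax ** 2 - b                                   # residuals
    g = 2.0 * (A.T @ (np.sign(r) * Ax)) / len(b)      # subgradient of the mean
    scale = np.quantile(np.abs(r), p)                 # quantile of |residuals|
    step = c * scale / max(g @ g, 1e-12)              # Polyak-like scaling (assumed)
    return x - step * g
```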

Lipschitz minimization and the Goldstein modulus

Goldstein’s 1977 idealized iteration for minimizing a Lipschitz objective fixes a distance (the step size) and relies on a certain approximate subgradient. That “Goldstein subgradient” is the shortest convex combination of objective gradients at points within that distance of the current iterate. A recent implementable Goldstein-style algorithm allows a remarkable complexity analysis (Zhang …
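
The idealized construction lends itself to a short sketch: sample gradients at a few points in the delta-ball around the iterate, take the minimum-norm convex combination, and step a distance delta against it. The random sampling and the generic QP solver below are illustrative assumptions, not the implementable scheme of the cited work.

```python
import numpy as np
from scipy.optimize import minimize

def goldstein_step(x, grad, delta, k=10, rng=np.random.default_rng(0)):
    """One idealized Goldstein-style step: approximate the Goldstein
    subgradient by the minimum-norm convex combination of gradients
    sampled within distance delta of x."""
    pts = x + (delta / np.sqrt(x.size)) * rng.uniform(-1, 1, (k, x.size))
    G = np.array([grad(p) for p in pts])              # k x n gradient matrix
    # Minimum-norm point of the convex hull: min ||G^T w||^2 over the simplex.
    Q = G @ G.T
    res = minimize(lambda w: w @ Q @ w, np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k,
                   constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},))
    g = G.T @ res.x                                   # approximate Goldstein subgradient
    return x - delta * g / max(np.linalg.norm(g), 1e-12)
```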

Statistical performance of subgradient step-size update rules in Lagrangian relaxations of chance-constrained optimization models

Published in Lecture Notes in Computer Science, https://doi.org/10.1007/978-3-031-47859-8_26. Lagrangian relaxation schemes, coupled with a subgradient procedure, are frequently employed to solve chance-constrained optimization models. The subgradient procedure typically relies on a step-size update rule. Although there is extensive research on the properties of these step-size update rules, there is little consensus on which rules are …
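
As context for what such rules look like, here is a generic dual subgradient ascent loop with two textbook step-size updates, a diminishing a/k schedule and a Polyak-type rule; these are standard choices and not necessarily the rules the paper evaluates.

```python
import numpy as np

def dual_subgradient_ascent(L_and_subgrad, lam0, rule="diminishing",
                            iters=100, a=1.0, target=None):
    """Subgradient ascent on a Lagrangian dual, illustrating two classic
    step-size update rules (textbook examples, not the paper's list)."""
    lam, best = np.asarray(lam0, dtype=float), -np.inf
    for k in range(1, iters + 1):
        val, g = L_and_subgrad(lam)        # dual value and a subgradient
        best = max(best, val)
        if rule == "polyak" and target is not None:
            step = (target - val) / max(g @ g, 1e-12)   # needs a target value
        else:
            step = a / k                                # diminishing schedule
        lam = np.maximum(lam + step * g, 0.0)           # ascent + projection onto >= 0
    return lam, best
```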

Spectral Projected Subgradient Method for Nonsmooth Convex Optimization Problems

We consider constrained optimization problems with a nonsmooth objective function in the form of a mathematical expectation. The Sample Average Approximation (SAA) is used to estimate the objective function, and a variable sample size strategy is employed. The proposed algorithm combines an SAA subgradient with the spectral coefficient in order to provide a suitable direction which improves …
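
Setting the SAA sampling layer aside, the core iteration can be sketched as a projected subgradient step whose length comes from a Barzilai-Borwein-style spectral coefficient; the safeguarding below is an assumption about how such a coefficient is typically kept usable for nonsmooth problems.

```python
import numpy as np

def spectral_projected_subgrad(subgrad, project, x0, iters=100,
                               amin=1e-6, amax=1e6):
    """Projected subgradient method with a spectral (BB-style) step length.
    A sketch of the general idea, not the paper's exact algorithm."""
    x = np.asarray(x0, dtype=float)
    g, alpha = subgrad(x), 1.0
    for _ in range(iters):
        x_new = project(x - alpha * g)     # spectral step, then projection
        g_new = subgrad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        # Safeguarded BB coefficient; fall back when curvature is nonpositive.
        alpha = np.clip(s @ s / sy, amin, amax) if sy > 0 else amax
        x, g = x_new, g_new
    return x
```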

An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems

We study unconstrained optimization problems with a nonsmooth, convex objective function in the form of a mathematical expectation. The proposed method approximates the expected objective function by a sample average function, with the sample size chosen adaptively via Inexact Restoration. The algorithm uses line search …
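
One iteration of such a scheme might look as follows: a backtracking line search on the current sample average function, followed by a sample-size update. The growth rule below is a simple placeholder; the actual Inexact Restoration condition that drives the sample size in the paper is not reproduced here.

```python
import numpy as np

def saa_value(F, x, sample):
    """Sample average approximation of E[F(x, xi)] over the given sample."""
    return np.mean([F(x, xi) for xi in sample])

def adaptive_saa_step(F, subgrad, x, d, sample_pool, n,
                      t=1.0, beta=0.5, eta=1e-4, grow=1.3):
    """One sketch iteration: Armijo backtracking on the current SAA function,
    then a placeholder sample-size increase (d must be a descent direction)."""
    sample = sample_pool[:n]
    f0 = saa_value(F, x, sample)
    gd = subgrad(x, sample) @ d                        # directional derivative
    while saa_value(F, x + t * d, sample) > f0 + eta * t * gd:
        t *= beta                                      # backtracking line search
    n_new = min(int(grow * n) + 1, len(sample_pool))   # enlarge the sample
    return x + t * d, n_new
```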

Data-compatibility of algorithms

The data-compatibility approach to constrained optimization, proposed here, strives to reach a point that is “close enough” to the solution set and whose target function value is “close enough” to the constrained minimum value. These notions can replace the analysis of asymptotic convergence to a solution point of infinite sequences generated by specific algorithms. We consider a …

Optimality, identifiability, and sensitivity

Around a solution of an optimization problem, an “identifiable” subset of the feasible region is one containing all nearby solutions after small perturbations to the problem. A quest for only the most essential ingredients of sensitivity analysis leads us to consider identifiable sets that are “minimal”. This new notion lays a broad and intuitive variational-analytic …

A Proximal Cutting Plane Method Using Chebychev Center for Nonsmooth Convex Optimization

An algorithm is developed for minimizing nonsmooth convex functions. This algorithm extends the Elzinga-Moore cutting plane algorithm by forcing the next test point to stay not too far from the previous ones, thus removing the compactness assumption. Our method is to Elzinga-Moore’s algorithm what a proximal bundle method is to Kelley’s algorithm. Instead of lower approximations …
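
The Elzinga-Moore step itself, finding the Chebyshev center of the current localization polyhedron, is a small linear program. A minimal sketch (assuming the polyhedron {x : Ax <= b} is bounded, which is exactly the compactness assumption the proximal variant removes):

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Chebyshev center of {x : A x <= b}: maximize the inradius r subject to
    a_i^T x + ||a_i|| r <= b_i, the LP used to pick the next test point."""
    m, n = A.shape
    norms = np.linalg.norm(A, axis=1)
    A_lp = np.hstack([A, norms[:, None]])     # variables are (x, r)
    c = np.zeros(n + 1)
    c[-1] = -1.0                              # maximize r == minimize -r
    res = linprog(c, A_ub=A_lp, b_ub=b,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[-1]               # center and inradius
```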

Variational Analysis of Functions of the Roots of Polynomials

The Gauss-Lucas Theorem on the roots of polynomials nicely simplifies calculating the subderivative and regular subdifferential of the abscissa mapping on polynomials (the maximum of the real parts of the roots). This paper extends this approach to more general functions of the roots. By combining the Gauss-Lucas methodology with an analysis of the splitting behavior …
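
For concreteness, the abscissa mapping itself is straightforward to evaluate numerically; a small sketch using numpy:

```python
import numpy as np

def abscissa(coeffs):
    """Abscissa of a polynomial: the maximum of the real parts of its roots.
    `coeffs` lists coefficients from highest to lowest degree (numpy.roots order)."""
    return max(np.roots(coeffs).real)

# Example: p(z) = z^2 + 1 has roots +-i, so the abscissa is 0.
print(abscissa([1.0, 0.0, 1.0]))   # ~0.0
```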

Lagrangian relaxation

Lagrangian relaxation is a tool to find upper bounds on a given (arbitrary) maximization problem. Sometimes the bound is exact and an optimal solution is found. Our aim in this paper is to review this technique, the theory behind it, its numerical aspects, and its relation to other techniques such as column generation. Citation: in Computational Combinatorial …
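
A toy instance makes the bounding mechanism concrete: relax the coupling constraint of a small binary maximization with a multiplier lam >= 0, and every dual value L(lam) upper-bounds the optimum (the numbers below are illustrative, not from the paper).

```python
from itertools import product

# Toy problem: max 5*x1 + 4*x2  subject to  3*x1 + 2*x2 <= 4,  x binary.
# Relaxing the constraint gives L(lam) = max_x [5*x1 + 4*x2 + lam*(4 - 3*x1 - 2*x2)],
# an upper bound on the optimal value for every lam >= 0.
def L(lam):
    return max(5*x1 + 4*x2 + lam*(4 - 3*x1 - 2*x2)
               for x1, x2 in product((0, 1), repeat=2))

z_star = max(5*x1 + 4*x2
             for x1, x2 in product((0, 1), repeat=2)
             if 3*x1 + 2*x2 <= 4)              # optimal value: 5

for lam in (0.0, 1.0, 2.0):
    print(lam, L(lam))                         # each L(lam) >= z_star == 5
```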