On the asymptotic convergence and acceleration of gradient methods

We consider the asymptotic behavior of a family of gradient methods, which includes the steepest descent and minimal gradient methods as special instances. It is proved that each method in the family will asymptotically zigzag between two directions. Asymptotic convergence results for the objective value, gradient norm, and stepsize are presented as well. To accelerate …
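
For intuition, here is a minimal sketch (an illustration of the classical zigzag phenomenon, not the paper's accelerated method): exact-line-search steepest descent on an ill-conditioned 2-D quadratic. Printing the normalized gradients shows them settling into two alternating directions.

    import numpy as np

    # Steepest descent with exact line search on f(x) = 0.5 * x^T A x.
    A = np.diag([1.0, 10.0])             # ill-conditioned quadratic
    x = np.array([10.0, 1.0])
    for k in range(8):
        g = A @ x                        # gradient of f at x
        alpha = (g @ g) / (g @ (A @ g))  # exact line search (Cauchy) stepsize
        x = x - alpha * g
        print(k, g / np.linalg.norm(g))  # alternates between two directions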

Gaining traction – On the convergence of an inner approximation scheme for probability maximization

We analyze an inner approximation scheme for probability maximization. The approach was proposed in Fabian, Csizmas, Drenyovszki, Van Ackooij, Vajnai, Kovacs, Szantai (2018), Probability maximization by inner approximation, Acta Polytechnica Hungarica 15:105-125, as an analogue of a classic dual approach to handling probabilistic constraints. Even a basic implementation of the maximization scheme proved …
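
For orientation, the generic problem (in our notation; the paper's precise setting may differ) is to maximize a probability over a convex set,

    \max_{x \in X} \; P\big( g(x, \xi) \ge 0 \big),

where \xi is a random vector. Loosely speaking, an inner approximation replaces this hard objective by a tractable under-estimator, so the computed bounds are conservative.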

Relations Between Abs-Normal NLPs and MPCCs Part 2: Weak Constraint Qualifications

This work continues an ongoing effort to compare non-smooth optimization problems in abs-normal form to Mathematical Programs with Complementarity Constraints (MPCCs). We study general Nonlinear Programs with equality and inequality constraints in abs-normal form, so-called Abs-Normal NLPs, and their relation to equivalent MPCC reformulations. We introduce the concepts of Abadie’s and Guignard’s kink qualifications and …
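
The bridge between the two classes rests on the standard splitting of an absolute value into complementary nonnegative parts; sketched in our notation (the papers' exact reformulation may differ):

    z = u - v, \quad |z| = u + v, \quad u \ge 0, \; v \ge 0, \; u^{\top} v = 0.

Applying this to each switching variable of the abs-normal form produces a complementarity pair, and hence an MPCC.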

Error Bounds and Singularity Degree in Semidefinite Programming

In semidefinite programming a proposed optimal solution may be quite poor in spite of having a sufficiently small residual in the optimality conditions. This issue may be framed in terms of the discrepancy between forward error (the unmeasurable ‘true error’) and backward error (the measurable violation of optimality conditions). In his seminal work, Sturm provided an …
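
For context, Sturm's error bound (stated loosely here, in our notation) controls the forward error by a Hölderian function of the backward error,

    \mathrm{dist}(x, \mathcal{S}) \le c \, \varepsilon^{2^{-d}},

where \varepsilon is the residual in the optimality conditions, \mathcal{S} the solution set, c a constant, and d the singularity degree; the smaller d, the tighter the relation between the two errors.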

A Data Efficient and Feasible Level Set Method for Stochastic Convex Optimization with Expectation Constraints

Stochastic convex optimization problems with expectation constraints (SOECs) are encountered in statistics and machine learning, business, and engineering. In data-rich environments, the SOEC objective and constraints contain expectations defined with respect to large datasets. Therefore, efficient algorithms for solving such SOECs need to limit the fraction of data points that they use, which we refer …
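
In generic form (our notation), an SOEC reads

    \min_{x \in X} \; E[f(x,\xi)] \quad \text{s.t.} \quad E[g(x,\xi)] \le 0,

where both the objective and constraint expectations are estimated from samples of \xi; in data-rich settings each exact evaluation touches the full dataset, which is what a data-efficient method must avoid.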

Robust stochastic optimization with the proximal point method

Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. In this work, we show that a wide class of such algorithms on strongly convex problems can be augmented with sub-exponential confidence bounds at an overhead cost that is only …
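
As a reminder of the outer loop involved (a generic proximal point iteration sketched under our own assumptions, not the paper's algorithm), each step minimizes the function plus a quadratic anchor to the previous iterate:

    import numpy as np
    from scipy.optimize import minimize

    def prox_point(f, x0, lam=1.0, iters=20):
        # x_{k+1} = argmin_x f(x) + (lam/2) * ||x - x_k||^2
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            obj = lambda y, xk=x: f(y) + 0.5 * lam * np.sum((y - xk) ** 2)
            x = minimize(obj, x).x       # inner subproblem (inexact in practice)
        return x

    # Example: a smooth, strongly convex test function
    f = lambda y: np.sum((y - 1.0) ** 2) + 0.1 * np.sum(y ** 4)
    print(prox_point(f, np.zeros(3)))    # converges to the minimizer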

On Inexact Solution of Auxiliary Problems in Tensor Methods for Convex Optimization

In this paper we study the auxiliary problems that appear in p-order tensor methods for unconstrained minimization of convex functions with \nu-Hölder continuous pth derivatives. This type of auxiliary problem corresponds to the minimization of a (p+\nu)-order regularization of the pth-order Taylor approximation of the objective. For the case p=3, we consider the use …
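
In generic form (our notation; constants and norms may differ from the paper's), the auxiliary problem at an iterate x is

    \min_y \; \Phi_{x,p}(y) + \frac{\sigma}{p+\nu}\,\|y - x\|^{p+\nu},

where \Phi_{x,p} is the pth-order Taylor polynomial of the objective at x and \sigma > 0 is a regularization parameter.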

Dual sufficient characterizations of transversality properties

This paper continues the study of ‘good arrangements’ of collections of sets near a point in their intersection. Our aim is to develop a general scheme for quantitative analysis of several transversality properties within the same framework. We consider a general nonlinear setting and establish dual space (subdifferential and normal cone) sufficient characterizations of transversality …
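
As a reference point (the classical condition for two sets, not the paper's general nonlinear version), transversality of closed sets A and B at a common point \bar{x} follows from the dual condition

    N_A(\bar{x}) \cap \big( -N_B(\bar{x}) \big) = \{0\},

where N denotes the limiting normal cone.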

A simple Newton method for local nonsmooth optimization

Superlinear convergence has been an elusive goal for black-box nonsmooth optimization. Even in the convex case, the subgradient method is very slow, and while some cutting plane algorithms, including traditional bundle methods, are popular in practice, local convergence is still sluggish. Faster variants depend either on problem structure or on analyses that elide sequences of …
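
To see the sluggishness alluded to above, here is a minimal sketch (the generic subgradient method on f(x) = |x|, not the paper's Newton-type method); with diminishing steps the iterates oscillate across the kink and the error decays only sublinearly:

    # Subgradient method on f(x) = |x| with stepsize 1/k.
    def subgrad(x):
        return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

    x = 1.3
    for k in range(1, 51):
        x -= subgrad(x) / k              # diminishing stepsize
        if k % 10 == 0:
            print(k, abs(x))             # error shrinks roughly like 1/k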

Relations Between Abs-Normal NLPs and MPCCs Part 1: Strong Constraint Qualifications

This work is part of an ongoing effort to compare non-smooth optimization problems in abs-normal form to MPCCs. We study the general abs-normal NLP with equality and inequality constraints in relation to an equivalent MPCC reformulation. We show that kink qualifications and MPCC constraint qualifications of linear independence type and Mangasarian-Fromovitz type are equivalent. Then …