On the asymptotic convergence and acceleration of gradient methods

We consider the asymptotic behavior of a family of gradient methods, which includes the steepest descent and minimal gradient methods as special instances. It is proved that each method in the family asymptotically zigzags between two directions. Asymptotic convergence results for the objective value, gradient norm, and stepsize are presented as well. To accelerate … Read more
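
The zigzag phenomenon mentioned above is easy to observe numerically. Below is a minimal sketch, assuming a two-dimensional convex quadratic f(x) = 0.5 * x^T A x and the exact line-search (Cauchy) stepsize of steepest descent; the matrix and starting point are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal sketch: steepest descent with the exact line-search (Cauchy)
# stepsize on an ill-conditioned 2-D quadratic f(x) = 0.5 * x^T A x.
# Printing the normalized gradients shows them alternating between two
# fixed directions -- the zigzag behavior discussed in the abstract.
A = np.diag([1.0, 10.0])              # illustrative ill-conditioned Hessian
x = np.array([10.0, 1.0])             # illustrative starting point

for k in range(8):
    g = A @ x                         # gradient of f at x
    alpha = (g @ g) / (g @ (A @ g))   # exact minimizer of f(x - alpha * g)
    x = x - alpha * g
    print(k, g / np.linalg.norm(g))   # gradient direction alternates
```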

Gaining traction – On the convergence of an inner approximation scheme for probability maximization

We analyze an inner approximation scheme for probability maximization. The approach was proposed by Fabian, Csizmas, Drenyovszki, Van Ackooij, Vajnai, Kovacs, and Szantai (2018), Probability maximization by inner approximation, Acta Polytechnica Hungarica 15:105-125, as an analogue of a classic dual approach to handling probabilistic constraints. Even a basic implementation of the maximization scheme proved … Read more

Error Bounds and Singularity Degree in Semidefinite Programming

In semidefinite programming a proposed optimal solution may be quite poor in spite of having a sufficiently small residual in the optimality conditions. This issue may be framed in terms of the discrepancy between forward error (the unmeasurable "true error") and backward error (the measurable violation of optimality conditions). In his seminal work, Sturm provided an … Read more
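
A toy illustration of the forward/backward error gap (deliberately not an SDP; just the general phenomenon, on an ill-conditioned quadratic with data invented for the example):

```python
import numpy as np

# For f(x) = 0.5 * x^T A x - b^T x with minimizer x* = A^{-1} b, the backward
# error of a candidate x_hat is the residual ||A x_hat - b|| in the optimality
# condition, while the forward error is the "true error" ||x_hat - x*||.
A = np.diag([1.0, 1e-8])              # nearly singular Hessian
b = np.array([1.0, 1e-8])
x_star = np.linalg.solve(A, b)        # x* = (1, 1)

x_hat = x_star + np.array([0.0, 1.0])       # perturb along the flat direction
backward = np.linalg.norm(A @ x_hat - b)    # tiny: ~1e-8
forward = np.linalg.norm(x_hat - x_star)    # large: 1.0
print(backward, forward)
```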

A Data Efficient and Feasible Level Set Method for Stochastic Convex Optimization with Expectation Constraints

Stochastic convex optimization problems with expectation constraints (SOECs) are encountered in statistics and machine learning, business, and engineering. In data-rich environments, the SOEC objective and constraints contain expectations defined with respect to large datasets. Therefore, efficient algorithms for solving such SOECs need to limit the fraction of data points that they use, which we refer … Read more

Robust stochastic optimization with the proximal point method

Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. In this work, we show that a wide class of such algorithms on strongly convex problems can be augmented with sub-exponential confidence bounds at an overhead cost that is only … Read more
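
For readers unfamiliar with the proximal point method invoked above, here is a minimal sketch of its basic update on a strongly convex quadratic; the data and the parameter lam are illustrative assumptions, and this is the generic building block rather than the paper's confidence-boosting construction.

```python
import numpy as np

# One proximal point step for f(x) = 0.5 * x^T A x - b^T x:
#   x+ = argmin_x f(x) + (1 / (2 * lam)) * ||x - xk||^2
#      = (A + I / lam)^{-1} (b + xk / lam)   (closed form for quadratics)
def prox_step(A, b, xk, lam):
    n = len(b)
    return np.linalg.solve(A + np.eye(n) / lam, b + xk / lam)

A = np.array([[2.0, 0.0], [0.0, 1.0]])   # illustrative strongly convex data
b = np.array([1.0, 1.0])
x = np.zeros(2)
for _ in range(25):
    x = prox_step(A, b, x, lam=1.0)
print(x, np.linalg.solve(A, b))          # iterates approach the minimizer
```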

Distributionally robust chance constrained geometric optimization

This paper discusses distributionally robust geometric programs with individual and joint chance constraints. Seven groups of uncertainty sets are considered: uncertainty sets with information on the first two moments, uncertainty sets constrained by the Kullback-Leibler divergence from a normal reference distribution or a discrete reference distribution, uncertainty sets with known first moments or known first … Read more

Tensor Methods for Finding Approximate Stationary Points of Convex Functions

In this paper we consider the problem of finding $\epsilon$-approximate stationary points of convex functions that are $p$-times differentiable with $\nu$-Hölder continuous $p$th derivatives. We present tensor methods with and without acceleration. Specifically, we show that the non-accelerated schemes take at most $O(\epsilon^{-1/(p+\nu-1)})$ iterations to reduce the norm of the gradient of the objective below … Read more
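
To make the bound concrete (simple arithmetic on the displayed rate, not an additional claim from the paper): taking $p = 1$ and $\nu = 1$, i.e., convex functions with Lipschitz continuous gradients, the exponent is $-1/(1+1-1) = -1$, so the bound reads $O(\epsilon^{-1})$ iterations; taking $p = 2$ and $\nu = 1$ (Lipschitz continuous Hessians) gives $O(\epsilon^{-1/2})$.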

A family of multi-parameterized proximal point algorithms

In this paper, a multi-parameterized proximal point algorithm combined with a relaxation step is developed for solving convex minimization problems subject to linear constraints. We show its global convergence and sublinear convergence rate from the perspective of variational inequalities. Preliminary numerical experiments on a sparse minimization problem from signal processing indicate that the proposed … Read more
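
As a generic illustration of a proximal point iteration with a relaxation step (a template in the spirit of the scheme above, not the paper's specific multi-parameterized algorithm; the function, proximal parameter, and relaxation factor are assumptions for the example):

```python
import numpy as np

# Relaxed proximal point iteration:
#   z_tilde = prox_{lam * f}(z_k),   z_{k+1} = z_k + gamma * (z_tilde - z_k),
# with relaxation factor gamma in (0, 2). Illustrated on f(z) = |z|, whose
# proximal operator is soft-thresholding.
def prox_abs(z, lam):
    return np.sign(z) * max(abs(z) - lam, 0.0)   # soft-thresholding

z, lam, gamma = 5.0, 0.5, 1.5                    # illustrative parameters
for k in range(12):
    z_tilde = prox_abs(z, lam)
    z = z + gamma * (z_tilde - z)                # relaxation step
    print(k, z)                                  # converges to the minimizer 0
```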

The stochastic multi-gradient algorithm for multi-objective optimization and its application to supervised machine learning

Optimization of conflicting functions is of paramount importance in decision making, and real-world applications frequently involve data that is uncertain or unknown, resulting in multi-objective optimization (MOO) problems of stochastic type. We study the stochastic multi-gradient (SMG) method, seen as an extension of the classical stochastic gradient method for single-objective optimization. At each iteration … Read more
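
The loop below is a hedged illustration of a stochastic multi-gradient step for two objectives: stochastic gradients of both objectives are combined through their minimal-norm convex combination (which has a closed form in the two-objective case) to obtain a common descent direction. The objectives, noise model, and stepsize are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stochastic gradients of two conflicting quadratics f1(x) = 0.5*||x - a||^2
# and f2(x) = 0.5*||x - b||^2, perturbed by additive Gaussian noise.
def stoch_grads(x):
    a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    return (x - a) + 0.01 * rng.standard_normal(2), \
           (x - b) + 0.01 * rng.standard_normal(2)

x = np.array([2.0, 2.0])
for _ in range(300):
    g1, g2 = stoch_grads(x)
    # Minimal-norm convex combination t*g1 + (1-t)*g2 with t in [0, 1]
    # (closed form for two objectives).
    t = float(np.clip((g2 @ (g2 - g1)) / max((g1 - g2) @ (g1 - g2), 1e-12),
                      0.0, 1.0))
    x = x - 0.05 * (t * g1 + (1.0 - t) * g2)     # fixed illustrative stepsize
print(x)   # approaches a Pareto point on the segment between a and b
```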

Optimal K-Thresholding Algorithms for Sparse Optimization Problems

Simulations indicate that the existing hard thresholding technique, which is independent of the residual function, may cause a dramatic increase in, or numerical oscillation of, the residual. This inherent drawback of hard thresholding renders the traditional thresholding algorithms unstable and thus generally inefficient for solving practical sparse optimization problems. How to overcome this weakness and develop … Read more
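
For context, here is the classical hard-thresholding operator the abstract criticizes, inside a plain iterative hard thresholding loop; note that the operator keeps the k largest-magnitude entries regardless of the residual ||y - A x||. The problem data, sparsity level, and stepsize are illustrative.

```python
import numpy as np

# Classical hard-thresholding operator H_k: keep the k largest-magnitude
# entries and zero out the rest -- applied independently of the residual.
def hard_threshold(x, k):
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]   # indices of the k largest magnitudes
    out[keep] = x[keep]
    return out

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))                # illustrative sensing matrix
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]           # 3-sparse ground truth
y = A @ x_true

mu = 1.0 / np.linalg.norm(A, 2) ** 2             # conservative stepsize
x = np.zeros(50)
for _ in range(200):
    x = hard_threshold(x + mu * A.T @ (y - A @ x), k=3)
print(np.nonzero(x)[0], np.linalg.norm(y - A @ x))
```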