On the intrinsic core of convex cones in real linear spaces

Convex cones play an important role in nonlinear analysis and optimization theory. In particular, certain normal cones and tangent cones are convex cones, and, crucially, they serve as geometric objects for describing optimality conditions. As important applications (especially in the fields of optimal control with PDE constraints, …
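
As a pointer to the object in the title, here is a standard definition of the intrinsic core (also called the relative algebraic interior) of a convex set A in a real linear space; this is the common textbook formulation, not necessarily the exact one used in the paper:

\operatorname{icr} A \;=\; \{\, x \in A \;:\; \forall\, y \in A \ \exists\, \lambda > 0 \text{ such that } x + \lambda\,(x - y) \in A \,\}.

In finite-dimensional spaces the intrinsic core coincides with the relative interior, which is why the notion is mainly of interest in general (possibly infinite-dimensional) linear spaces where no topology is assumed.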

A New Sequential Updating Scheme of the Lagrange Multiplier for Multi-Block Linearly Constrained Separable Convex Optimization with Relaxed Step Sizes

In various applications such as signal/image processing, data mining, and statistical learning, multi-block linearly constrained separable convex optimization problems arise frequently: the objective function is the sum of multiple individual convex functions, and the major constraints are linear. A classical method for solving this kind of optimization problem is the alternating …
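
The truncated sentence presumably refers to the alternating direction method of multipliers (ADMM). As a hedged illustration of the setting, here is a minimal two-block ADMM sketch in Python for min f(x) + g(z) s.t. x - z = 0, with f(x) = 0.5*||x - a||^2 and g = lam*||.||_1; the toy problem and all names are my own illustration, not the paper's multi-block scheme or its new multiplier update:

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_consensus(a, lam=0.1, rho=1.0, iters=200):
    # Two-block ADMM for: min_x 0.5*||x - a||^2 + lam*||z||_1  s.t.  x - z = 0.
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)  # scaled Lagrange multiplier
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # x-update: prox of f
        z = soft_threshold(x + u, lam / rho)   # z-update: prox of g
        u = u + (x - z)                        # classical (sequential) multiplier update
    return z

print(admm_consensus(np.array([0.05, -0.8, 1.5])))  # small entries are shrunk to zero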

On the asymptotic convergence and acceleration of gradient methods

We consider the asymptotic behavior of a family of gradient methods, which includes the steepest descent and minimal gradient methods as special instances. It is proved that each method in the family will asymptotically zigzag between two directions. Asymptotic convergence results for the objective value, gradient norm, and stepsize are presented as well. To accelerate …
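
The zigzag phenomenon is easy to observe numerically. Below is a minimal sketch (my own illustration, not the paper's analysis) of steepest descent with exact line search on an ill-conditioned quadratic; the normalized gradients quickly settle into alternating between two fixed directions:

import numpy as np

A = np.diag([1.0, 10.0])  # f(x) = 0.5 * x^T A x, condition number 10
x = np.array([10.0, 1.0])
for k in range(8):
    g = A @ x                         # gradient of the quadratic
    alpha = (g @ g) / (g @ (A @ g))   # exact line search (steepest descent stepsize)
    x = x - alpha * g
    print(k, g / np.linalg.norm(g))   # successive directions alternate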

Gaining traction – On the convergence of an inner approximation scheme for probability maximization

We analyze an inner approximation scheme for probability maximization. The approach was proposed by Fabian, Csizmas, Drenyovszki, Van Ackooij, Vajnai, Kovacs, and Szantai (2018), Probability maximization by inner approximation, Acta Polytechnica Hungarica 15:105-125, as an analogue of a classic dual approach to the handling of probabilistic constraints. Even a basic implementation of the maximization scheme proved …
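
For context, a generic form of the probability maximization problem (my paraphrase; the authors' exact formulation may differ) is

\max_{x \in X} \; \mathbb{P}\big( g(x, \xi) \ge 0 \big),

where \xi is a random vector and X is a convex feasible set; an inner approximation scheme replaces the objective by a model generated from finitely many evaluation points.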

Error Bounds and Singularity Degree in Semidefinite Programming

In semidefinite programming, a proposed optimal solution may be quite poor in spite of having a sufficiently small residual in the optimality conditions. This issue may be framed in terms of the discrepancy between forward error (the unmeasurable 'true error') and backward error (the measurable violation of optimality conditions). In his seminal work, Sturm provided an …
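
Sturm's error bound, loosely stated, ties the two error notions together through the singularity degree d of the problem: for an approximate solution X with backward error (residual) \epsilon,

\operatorname{dist}(X, \mathcal{S}) \;\le\; c\,\epsilon^{2^{-d}},

where \mathcal{S} is the solution set and c is a constant. This is a loose paraphrase of the well-known Hölderian bound, not the paper's precise statement.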

A Data Efficient and Feasible Level Set Method for Stochastic Convex Optimization with Expectation Constraints

Stochastic convex optimization problems with expectation constraints (SOECs) are encountered in statistics and machine learning, business, and engineering. In data-rich environments, the SOEC objective and constraints contain expectations defined with respect to large datasets. Therefore, efficient algorithms for solving such SOECs need to limit the fraction of data points that they use, which we refer …
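
For reference, a generic SOEC takes the form (standard notation, not necessarily the paper's):

\min_{x \in X} \; \mathbb{E}\big[f(x,\xi)\big] \quad \text{subject to} \quad \mathbb{E}\big[g_j(x,\xi)\big] \le 0, \quad j = 1,\dots,m,

where the expectations are taken over a random vector \xi and, in the data-rich setting, are averages over a large dataset.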

Robust stochastic optimization with the proximal point method

Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. In this work, we show that a wide class of such algorithms on strongly convex problems can be augmented with sub-exponential confidence bounds at an overhead cost that is only …
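
To fix the iteration the title refers to, here is a minimal deterministic proximal point sketch in Python on a quadratic (my own illustration; the paper's contribution concerns confidence bounds for stochastic algorithms and is not reproduced here):

import numpy as np

def prox_point_quadratic(A, b, x0, lam=1.0, iters=50):
    # Proximal point method for f(x) = 0.5 * x^T A x - b^T x:
    #   x_{k+1} = argmin_x f(x) + (1 / (2 * lam)) * ||x - x_k||^2,
    # which for a quadratic reduces to a linear solve each iteration.
    M = A + np.eye(len(b)) / lam
    x = x0.copy()
    for _ in range(iters):
        x = np.linalg.solve(M, b + x / lam)
    return x

A = np.array([[2.0, 0.0], [0.0, 0.5]])
b = np.array([1.0, 1.0])
print(prox_point_quadratic(A, b, np.zeros(2)))  # approaches the minimizer A^{-1} b = [0.5, 2.0]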

Distributionally robust chance constrained geometric optimization

This paper discusses distributionally robust geometric programs with individual and joint chance constraints. Seven groups of uncertainty sets are considered: uncertainty sets with information on the first two moments, uncertainty sets constrained by the Kullback-Leibler divergence from a normal reference distribution or a discrete reference distribution, uncertainty sets with known first moments or known first …
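
As a reminder of the constraint type being robustified (generic form, not the paper's exact model), a distributionally robust individual chance constraint reads

\inf_{P \in \mathcal{P}} P\big( g(x,\xi) \le 0 \big) \;\ge\; 1 - \epsilon,

where \mathcal{P} is the ambiguity (uncertainty) set of distributions; in the joint version, a system of constraints g_1,\dots,g_m must hold simultaneously with probability at least 1 - \epsilon.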

Tensor Methods for Finding Approximate Stationary Points of Convex Functions

In this paper we consider the problem of finding ε-approximate stationary points of convex functions that are p-times differentiable with ν-Hölder continuous p-th derivatives. We present tensor methods with and without acceleration. Specifically, we show that the non-accelerated schemes take at most O(ε^{-1/(p+ν-1)}) iterations to reduce the norm of the gradient of the objective below …
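
As a worked instance of the bound quoted above: for p = 2 and ν = 1 (i.e., twice differentiable with Lipschitz continuous Hessian), the non-accelerated complexity specializes to

O\big(\epsilon^{-1/(p+\nu-1)}\big)\Big|_{p=2,\;\nu=1} = O\big(\epsilon^{-1/2}\big).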

A family of multi-parameterized proximal point algorithms

In this paper, a multi-parameterized proximal point algorithm combined with a relaxation step is developed for solving convex minimization problems subject to linear constraints. We show its global convergence and sublinear convergence rate from the perspective of variational inequalities. Preliminary numerical experiments on a sparse minimization problem from signal processing indicate that the proposed …
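
For intuition about the relaxation step, a generic relaxed proximal point iteration (standard form; the paper's multi-parameterized scheme for linearly constrained problems is more general) is

\tilde{x}^{\,k} = \operatorname{arg\,min}_x \Big\{ f(x) + \tfrac{1}{2\lambda}\,\|x - x^k\|^2 \Big\}, \qquad x^{k+1} = x^k + \gamma\,\big(\tilde{x}^{\,k} - x^k\big), \quad \gamma \in (0, 2),

where \gamma is the relaxation parameter (over-relaxation for \gamma > 1).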