Adaptive Sampling Strategies for Stochastic Optimization

In this paper, we propose a stochastic optimization method that adaptively controls the sample size used in the computation of gradient approximations. Unlike other variance reduction techniques that either require additional storage or the regular computation of full gradients, the proposed method reduces variance by increasing the sample size as needed. The decision to increase … Read more
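
For intuition, here is a minimal sketch of the general pattern, assuming a finite-sum objective with per-example gradients: grow the batch when a sample-variance ("norm") test suggests the gradient estimate is too noisy. The test constant theta and the doubling rule are illustrative, not the paper's exact criterion.

```python
import numpy as np

def adaptive_batch_sgd(grad_i, n, x0, lr=0.1, batch0=8, theta=0.5,
                       max_iters=200, rng=None):
    """Sketch of adaptive-sample-size SGD: enlarge the batch when the
    estimated variance of the batch-mean gradient is large relative to
    the squared norm of that gradient (a 'norm test')."""
    rng = np.random.default_rng() if rng is None else rng
    x, b = np.asarray(x0, dtype=float), batch0
    for _ in range(max_iters):
        idx = rng.choice(n, size=min(b, n), replace=False)
        G = np.stack([grad_i(i, x) for i in idx])   # per-example gradients
        g = G.mean(axis=0)                           # batch gradient estimate
        var = G.var(axis=0).sum() / len(idx)         # variance of the mean
        if var > theta**2 * np.dot(g, g):            # too noisy: grow sample
            b = min(2 * b, n)
        x = x - lr * g
    return x

# Toy usage: least squares on synthetic data.
rng = np.random.default_rng(0)
A, y = rng.normal(size=(500, 5)), rng.normal(size=500)
grad_i = lambda i, x: (A[i] @ x - y[i]) * A[i]
x_hat = adaptive_batch_sgd(grad_i, n=500, x0=np.zeros(5), rng=rng)
```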

Convergence rates of accelerated proximal gradient algorithms under independent noise

We consider an accelerated proximal gradient algorithm for composite optimization with “independent errors” (errors bearing little relation to historical information) for solving linear inverse problems. We present a new inexact version of the FISTA algorithm that accounts for both deterministic and stochastic noise. We prove convergence rates for the algorithm and connect it with the currently existing … Read more
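
To make the setting concrete, below is a sketch of a FISTA iteration with additive gradient errors on a toy lasso-type linear inverse problem; the noise callback and problem data are illustrative assumptions, and the paper's precise error model and rates are in the full text.

```python
import numpy as np

def inexact_fista(grad_f, prox_g, L, x0, noise=None, iters=300):
    """Sketch of FISTA with additive gradient errors e_k:
    x_{k+1} = prox_{g/L}(y_k - (grad f(y_k) + e_k)/L), with the usual
    momentum extrapolation on y_k.  `noise` models the errors."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        e = noise() if noise is not None else 0.0
        x_new = prox_g(y - (grad_f(y) + e) / L, 1.0 / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy linear inverse problem: min 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(1)
A, b, lam = rng.normal(size=(40, 20)), rng.normal(size=40), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad_f
x_hat = inexact_fista(grad_f, prox_g, L, np.zeros(20),
                      noise=lambda: 1e-3 * rng.normal(size=20))
```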

Sieve-SDP: a simple facial reduction algorithm to preprocess semidefinite programs

We introduce Sieve-SDP, a simple algorithm to preprocess semidefinite programs (SDPs). Sieve-SDP belongs to the class of facial reduction algorithms. It inspects the constraints of the problem, deletes redundant rows and columns, and reduces the size of the variable matrix. It often detects infeasibility. It does not rely on any optimization solver: the only subroutine … Read more
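
A rough sketch of the reduction rule the abstract alludes to, for constraints of the form ⟨A_i, X⟩ = b_i with X ⪰ 0: a Cholesky-based PSD test is the only nontrivial subroutine. The tolerances and the restart-on-change loop here are illustrative simplifications, not the reference implementation.

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """PSD test via Cholesky of M + tol*I -- the only 'solver' needed."""
    try:
        np.linalg.cholesky(M + tol * np.eye(len(M)))
        return True
    except np.linalg.LinAlgError:
        return False

def sieve_sdp(A_list, b, tol=1e-10):
    """Sketch of the Sieve-SDP rule: if A_i is PSD and b_i = 0, every
    feasible X vanishes on the support of A_i, so those rows/columns can
    be deleted from all data matrices; if A_i is PSD and b_i < 0, the
    problem is infeasible."""
    keep = np.arange(len(A_list[0]))          # surviving row/column indices
    A_list, b = [A.copy() for A in A_list], list(b)
    changed = True
    while changed:
        changed = False
        for A, bi in zip(A_list, b):
            if not is_psd(A, tol):
                continue
            if bi < -tol:
                return None, None, 'infeasible'
            if abs(bi) <= tol:
                # For PSD A, a zero diagonal entry forces a zero row/col.
                support = np.diag(A) > tol
                if support.any():
                    mask = ~support
                    keep = keep[mask]
                    A_list = [M[np.ix_(mask, mask)] for M in A_list]
                    changed = True
                    break
    return A_list, keep, 'reduced'
```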

Complete Facial Reduction in One Step for Spectrahedra

A spectrahedron is the feasible set of a semidefinite program (SDP), i.e., the intersection of an affine set with the positive semidefinite cone. While strict feasibility is a generic property for random problems, there are many classes of problems where strict feasibility fails, and this means that strong duality can fail as well. If the … Read more
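
For readers new to the terminology, the standard objects involved are the following (generic notation, not quoted from the paper):

```latex
% A spectrahedron, written in standard form:
\[
  F \;=\; \big\{\, X \in \mathbb{S}^n_+ \;:\; \langle A_i, X\rangle = b_i,\ i = 1,\dots,m \,\big\}.
\]
% One facial reduction step: if some 0 \neq W = \sum_i y_i A_i \succeq 0
% satisfies b^{\top} y = 0, then every feasible X obeys XW = 0, so F lies
% in the smaller face V \mathbb{S}^r_+ V^{\top}, where the columns of V
% span the nullspace of W.  "Complete in one step" refers to reaching the
% minimal face with a single such certificate.
```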

Estimates of generalized Hessians for optimal value functions in mathematical programming

The optimal value function is one of the basic objects in the field of mathematical optimization, as it allows one to evaluate the variations in cost/revenue generated while minimizing/maximizing a given function under some constraints. In the context of stability/sensitivity analysis, a large number of publications have been dedicated to the study of continuity … Read more
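
Concretely, in one standard parametric form (illustrative notation; the paper's exact setting is in the full text):

```latex
% The optimal value function of a parametric program:
\[
  \varphi(x) \;=\; \min_{y}\ \big\{\, f(x,y) \;:\; g(x,y) \le 0 \,\big\}.
\]
% Even for smooth f and g, \varphi is typically nonsmooth, so second-order
% sensitivity analysis works with generalized Hessians (e.g., in the sense
% of Clarke or Mordukhovich) rather than classical ones.
```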

Exact worst-case convergence rates of the proximal gradient method for composite convex minimization

We study the worst-case convergence rates of the proximal gradient method for minimizing the sum of a smooth strongly convex function and a non-smooth convex function whose proximal operator is available. We establish the exact worst-case convergence rates of the proximal gradient method in this setting for any step size and for different standard performance … Read more
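
A representative statement of this type, in standard notation (the paper treats several performance measures and step sizes; its exact results are in the full text):

```latex
% Proximal gradient step on F = f + h, with f L-smooth and mu-strongly
% convex and h convex with computable prox, step size gamma:
\[
  x_{k+1} \;=\; \operatorname{prox}_{\gamma h}\!\big( x_k - \gamma \nabla f(x_k) \big).
\]
% Measured by distance to the minimizer x^*, the per-iteration contraction
% factor max\{\,|1-\gamma\mu|,\ |1-\gamma L|\,\} is attained by worst-case
% instances, i.e., the bound is exact rather than merely an upper estimate.
```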

A Self-Correcting Variable-Metric Algorithm Framework for Nonsmooth Optimization

An algorithm framework is proposed for minimizing nonsmooth functions. The framework is variable-metric in that, in each iteration, a step is computed using a symmetric positive definite matrix whose value is updated as in a quasi-Newton scheme. However, unlike previously proposed variable-metric algorithms for minimizing nonsmooth functions, the framework exploits self-correcting properties made possible through … Read more
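
For context, one classical safeguard that keeps a quasi-Newton matrix symmetric positive definite is Powell damping, sketched below; it is in the spirit of, but not identical to, the self-correcting updates the abstract refers to.

```python
import numpy as np

def damped_bfgs(B, s, y, beta=0.2):
    """One Powell-damped BFGS update of a Hessian approximation B.
    Damping replaces y by a convex combination with B s so that the
    curvature condition s^T y_hat >= beta * s^T B s holds, which keeps
    B symmetric positive definite even when raw curvature s^T y is weak."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    theta = 1.0 if sy >= beta * sBs else (1 - beta) * sBs / (sBs - sy)
    y_hat = theta * y + (1 - theta) * Bs
    return (B - np.outer(Bs, Bs) / sBs
              + np.outer(y_hat, y_hat) / (s @ y_hat))

# Example: one update from B = I with weak curvature (s^T y < 0).
B_new = damped_bfgs(np.eye(2), s=np.array([1.0, 0.0]),
                    y=np.array([-0.1, 0.3]))
```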

Resource Allocation for Contingency Planning: An Inexact Bundle Method for Stochastic Optimization

Resource allocation models in contingency planning aim to mitigate unexpected failures in supply chains caused by disruptions that are rare but have disastrous consequences. This paper formulates the problem as a two-stage stochastic optimization problem with a risk-averse recourse function and proposes a novel, computationally tractable solution approach. The method relies on an inexact bundle method and … Read more
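
For intuition, the generic shape of an inexact proximal bundle iteration is sketched below (standard notation, not the paper's exact master problem):

```latex
% Given approximate values f_i and approximate subgradients g_i of the
% risk-averse recourse function at past points x_i, the next candidate
% minimizes a cutting-plane model plus a proximal term around the
% stability center \hat{x}_k:
\[
  x_{k+1} \;=\; \operatorname*{arg\,min}_{x}\;
  \max_{i \le k}\big\{ f_i + g_i^{\top}(x - x_i) \big\}
  \;+\; \tfrac{1}{2 t_k}\,\lVert x - \hat{x}_k \rVert^2 .
\]
% Oracle inexactness (here, from the stochastic recourse evaluation) is
% absorbed by the cutting-plane model.
```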

DSCOVR: Randomized Primal-Dual Block Coordinate Algorithms for Asynchronous Distributed Optimization

Machine learning with big data often involves large optimization models. For distributed optimization over a cluster of machines, frequent communication and synchronization of all model parameters (optimization variables) can be very costly. A promising solution is to use parameter servers to store different subsets of the model parameters, and update them asynchronously at different machines … Read more
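
As a heavily simplified illustration of the update pattern only (synchronous, no variance reduction, and not the paper's DSCOVR algorithm), here is a doubly randomized primal-dual coordinate method on the saddle-point form of ridge regression; all names and step sizes are made up for the sketch.

```python
import numpy as np

def pd_block_coordinate_sketch(X, y, lam=0.1, sigma=0.5, tau=0.05,
                               iters=20000, rng=None):
    """Doubly randomized primal-dual coordinate updates on
      min_w max_a (1/n) sum_i [a_i x_i^T w - a_i^2/2 - a_i y_i]
                  + (lam/2)||w||^2,
    the saddle-point form of ridge regression.  Each iteration touches
    one dual block (a sample) and one primal block (a coordinate); the
    1/n factors are absorbed into the step sizes."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w, a = np.zeros(d), np.zeros(n)
    for _ in range(iters):
        i = rng.integers(n)                 # random dual block (one sample)
        j = rng.integers(d)                 # random primal block (one coord)
        a[i] += sigma * (X[i] @ w - a[i] - y[i])      # dual ascent step
        w[j] -= tau * (a[i] * X[i, j] + lam * w[j])   # primal descent step
    return w

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10)
w_hat = pd_block_coordinate_sketch(X, y, rng=rng)
```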

A Level-set Method For Convex Optimization with a Feasible Solution Path

Large-scale constrained convex optimization problems arise in several application domains. First-order methods are good candidates to tackle such problems due to their low iteration complexity and memory requirements. The level-set framework extends the applicability of first-order methods to problems with complicated convex objectives and constraint sets. Current methods based on this framework either rely … Read more
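
A minimal sketch of the level-set idea for min f(x) s.t. g(x) ≤ 0, assuming only that the optimal value can be bracketed; scipy's generic minimizer stands in for the first-order inner solver the framework actually targets, and the root-finding here is plain bisection rather than the paper's scheme.

```python
import numpy as np
from scipy.optimize import minimize

def level_set_bisection(f, g, x0, t_lo, t_hi, tol=1e-6):
    """Level-set idea: define v(t) = min_x max(f(x) - t, g(x)).  v is
    nonincreasing in t and the optimal value t* is its smallest root,
    so bisect on t; each inner problem is unconstrained."""
    x = np.asarray(x0, dtype=float)
    while t_hi - t_lo > tol:
        t = 0.5 * (t_lo + t_hi)
        res = minimize(lambda z: max(f(z) - t, g(z)), x, method='Nelder-Mead')
        x = res.x
        if res.fun <= 0:      # level t is achievable: shrink from above
            t_hi = t
        else:                 # v(t) > 0: t is below the optimal value
            t_lo = t
    return x, t_hi

# Toy: minimize (x-2)^2 subject to x <= 1  (optimal value 1 at x = 1).
f = lambda z: (z[0] - 2.0) ** 2
g = lambda z: z[0] - 1.0
x_star, t_star = level_set_bisection(f, g, [0.0], t_lo=0.0, t_hi=5.0)
```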