Stochastic optimization and sparse statistical recovery: An optimal algorithm for high dimensions

We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse. Previous approaches are able to exploit only one of these two structures, yielding an $\mathcal{O}(d/T)$ convergence rate for strongly convex objectives in $d$ dimensions, and an $\mathcal{O}(\sqrt{(s \log d)/T})$ convergence rate when …
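For orientation only, here is a minimal sketch of a stochastic proximal gradient update with an $\ell_1$ penalty, one standard way to exploit strong convexity and sparsity together; it is not the algorithm proposed in the paper, and `grad_sample`, `mu`, and `lam` are hypothetical names introduced for illustration.

```python
# Minimal sketch (not the authors' algorithm): stochastic proximal gradient
# with an l1 penalty. Step sizes and the regularization weight are
# illustrative choices, not constants taken from the paper.
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sgd_l1(grad_sample, x0, T, mu, lam):
    """Run T stochastic proximal gradient steps.

    grad_sample(x, t) -- unbiased stochastic gradient of the expected loss at x
    mu                -- strong convexity parameter of the expected loss
    lam               -- l1 regularization weight encouraging sparsity
    """
    x = x0.copy()
    for t in range(1, T + 1):
        step = 1.0 / (mu * t)              # classical O(1/(mu*t)) step size
        g = grad_sample(x, t)
        x = soft_threshold(x - step * g, step * lam)
    return x
```

The soft-thresholding step is the proximal operator of the $\ell_1$ norm, which is what keeps the iterates sparse while the diminishing step size exploits strong convexity.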

Recent Advances in Robust Optimization and Robustness: An Overview

This paper provides an overview of developments in robust optimization and robustness published in the academic literature over the past five years. Citation: Technical report, LAMSADE, Université Paris-Dauphine, Paris, France (2012).

An acceleration procedure for optimal first-order methods

We introduce in this paper an optimal first-order method that allows an easy and cheap evaluation of the local Lipschitz constant of the objective’s gradient. This constant should ideally be chosen as small as possible at every iteration, while still yielding a valid upper bound for the value of the objective function. In the previously …
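As a rough illustration of estimating a local Lipschitz constant by backtracking (an assumed, generic procedure, not necessarily the one introduced in the paper), the constant $L$ can be increased until the quadratic model upper-bounds the objective at the trial point:

```python
# Minimal sketch (assumption, not the paper's procedure): backtracking
# estimation of a local Lipschitz constant L for the gradient, accepting the
# smallest tried L whose quadratic model upper-bounds the objective.
import numpy as np

def gradient_step_with_backtracking(f, grad_f, x, L0=1.0, growth=2.0):
    """One gradient step x -> y with a locally estimated Lipschitz constant.

    f, grad_f -- objective and its gradient (callables on numpy arrays)
    L0        -- initial (optimistic) estimate of the local Lipschitz constant
    growth    -- factor by which L is increased when the upper bound fails
    """
    L = L0
    fx, gx = f(x), grad_f(x)
    while True:
        y = x - gx / L
        # Quadratic upper model: f(y) <= f(x) + <g, y-x> + (L/2)||y-x||^2
        if f(y) <= fx + gx @ (y - x) + 0.5 * L * np.dot(y - x, y - x):
            return y, L
        L *= growth
```

A smaller accepted $L$ means a longer step, which is why the constant should be kept as small as the upper-bound condition allows.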

Optimality conditions for the nonlinear programming problems on Riemannian manifolds

In recent years, many traditional optimization methods have been successfully generalized to minimize objective functions on manifolds. In this paper, we first extend the classical constrained optimization problem to a nonlinear programming problem posed on a general Riemannian manifold $\mathcal{M}$, and discuss the first-order and second-order optimality conditions. By exploiting the differential geometry …
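For context, one standard statement of the first-order (KKT-type) conditions on a manifold, written here in generic notation that may differ from the paper's: for $\min_{x \in \mathcal{M}} f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, a local minimizer $x^*$ satisfying a constraint qualification admits multipliers $\lambda_i \ge 0$, $\mu_j$ with

\[
\operatorname{grad} f(x^*) + \sum_i \lambda_i \operatorname{grad} g_i(x^*) + \sum_j \mu_j \operatorname{grad} h_j(x^*) = 0 \;\in\; T_{x^*}\mathcal{M},
\qquad \lambda_i\, g_i(x^*) = 0 \ \text{for all } i,
\]

where $\operatorname{grad}$ denotes the Riemannian gradient and $T_{x^*}\mathcal{M}$ the tangent space at $x^*$; second-order conditions additionally involve the Riemannian Hessian of the Lagrangian on the critical cone.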

Multi-Range Robust Optimization vs Stochastic Programming in Prioritizing Project Selection

This paper describes a multi-range robust optimization approach applied to the problem of capacity investment under uncertainty. In multi-range robust optimization, an uncertain parameter is allowed to take values from more than one uncertainty range. We consider a number of possible projects with anticipated costs and cash flows, and an investment decision to be made …
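As a purely illustrative sketch (the notation is an assumption, not taken from the paper), the defining feature of multi-range robust optimization is that the uncertainty set of a parameter, say a project cost $c_i$, is a union of ranges rather than a single interval:

\[
c_i \;\in\; \bigcup_{k=1}^{K} \big[\underline{c}_{ik},\, \overline{c}_{ik}\big],
\]

with budget-type constraints typically limiting how many parameters may deviate from their nominal range, in contrast with classical single-range robust optimization.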

Polyhedral Aspects of Self-Avoiding Walks

In this paper, we study self-avoiding walks of a given length on a graph. We consider a formulation of this problem as a binary linear program. We analyze the polyhedral structure of the underlying polytope and describe valid inequalities. Proofs of their facial properties are given for certain special cases. In a variation of this …
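As one plausible (hypothetical) illustration of such a binary formulation, not necessarily the one analyzed in the paper: with edge variables $x_e$, vertex variables $y_v$, and walk length $\ell$ on a graph $G=(V,E)$,

\[
\sum_{e \in E} x_e = \ell, \qquad
\sum_{v \in V} y_v = \ell + 1, \qquad
\sum_{e \in \delta(v)} x_e \le 2\, y_v \quad \forall v \in V, \qquad
x_e,\, y_v \in \{0,1\},
\]

together with connectivity (subtour-elimination-type) constraints; a connected edge set with $\ell$ edges, $\ell+1$ vertices, and maximum degree two is a simple path, i.e. a self-avoiding walk of length $\ell$, and the polytope in question is the convex hull of such incidence vectors.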

A variable smoothing algorithm for solving convex optimization problems

In this article we propose a method for solving unconstrained optimization problems with convex and Lipschitz continuous objective functions. By making use of the Moreau envelopes of the functions occurring in the objective, we smooth the latter into a convex and differentiable function with Lipschitz continuous gradient, using both variable and constant smoothing parameters. …
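For context, recall the two standard properties of the Moreau envelope that make this construction work (the paper's notation may differ): for a proper, convex, lower semicontinuous $f$ and smoothing parameter $\mu > 0$,

\[
{}^{\mu}\!f(x) = \min_{y} \Big\{ f(y) + \tfrac{1}{2\mu}\,\|x-y\|^2 \Big\},
\qquad
\nabla\, {}^{\mu}\!f(x) = \tfrac{1}{\mu}\big(x - \operatorname{prox}_{\mu f}(x)\big),
\]

so the envelope is convex and differentiable with a $\tfrac{1}{\mu}$-Lipschitz gradient; "variable smoothing" then corresponds to letting $\mu$ change from iteration to iteration.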

A note on the convergence of the SDDP algorithm

In this paper we are interested in the convergence analysis of the Stochastic Dual Dynamic Programming (SDDP) algorithm in a general framework, regardless of whether the underlying probability space is discrete or not. We consider a convex, not necessarily linear, stochastic control program and the resulting dynamic programming equation. We prove under mild assumptions …
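For orientation, in a generic convex multistage setting with stagewise independent noise (notation assumed here, not taken from the paper), the dynamic programming equations read

\[
Q_t(x_{t-1}, \xi_t) = \min_{x_t \in \mathcal{X}_t(x_{t-1}, \xi_t)} \Big\{ f_t(x_t, \xi_t) + \mathcal{Q}_{t+1}(x_t) \Big\},
\qquad
\mathcal{Q}_{t+1}(x_t) = \mathbb{E}\big[\, Q_{t+1}(x_t, \xi_{t+1}) \,\big],
\]

with $\mathcal{Q}_{T+1} \equiv 0$; SDDP builds outer polyhedral (cutting-plane) approximations of the expected cost-to-go functions $\mathcal{Q}_{t+1}$, and the convergence question is whether these approximations eventually yield an optimal policy.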

Interior point methods for sufficient LCP in a wide neighborhood of the central path with optimal iteration complexity

Three interior point methods are proposed for sufficient horizontal linear complementarity problems (HLCP): a large-update path-following algorithm, a first-order corrector-predictor method, and a second-order corrector-predictor method. All algorithms produce sequences of iterates in the wide neighborhood of the central path introduced by Ai and Zhang. The algorithms do not depend on …
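For context (standard definitions in assumed notation, which may differ from the paper's), for an HLCP of the form $Qx + Rs = b$, $x, s \ge 0$, $x^\top s = 0$, the central path and the classical wide neighborhood are

\[
\mathcal{C} = \big\{ (x,s) > 0 : Qx + Rs = b,\; x_i s_i = \mu \ \forall i \big\},
\qquad
\mathcal{N}_\infty^-(\gamma) = \big\{ (x,s) > 0 : Qx + Rs = b,\; x_i s_i \ge \gamma \mu \ \forall i \big\}, \quad \mu = \tfrac{x^\top s}{n};
\]

the neighborhood of Ai and Zhang used in the paper is a wide neighborhood of this type, and its precise definition is given there.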

Improving the Performance of Stochastic Dual Dynamic Programming

This paper is concerned with tuning the Stochastic Dual Dynamic Programming algorithm to make it more computationally efficient. We report the results of some computational experiments on a large-scale hydrothermal scheduling model developed for Brazil. We find that the best improvements in computation time are obtained from an implementation that increases the number of scenarios …