Properties of the delayed weighted gradient method

The delayed weighted gradient method, recently introduced in [13], is a low-cost gradient-type method that exhibits surprisingly fast, and perhaps unexpected, convergence behavior, competing favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that add understanding to the practical behavior …
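The excerpt does not reproduce the DWGM update itself, but the problem class is standard: $f(x) = \frac{1}{2}x^TAx - b^Tx$ with $A$ symmetric positive definite. As a point of reference for the gradient-type family the abstract describes, here is a minimal numpy sketch of the classical steepest-descent (Cauchy) iteration on this class; it is a baseline for comparison, not the paper's method, and all names in it are illustrative.

```python
import numpy as np

def cauchy_gradient(A, b, x0, tol=1e-8, max_iter=1000):
    """Steepest descent with exact line search for f(x) = 0.5 x'Ax - b'x,
    A symmetric positive definite. The gradient is g = Ax - b."""
    x = x0.astype(float)
    for k in range(max_iter):
        g = A @ x - b
        if np.linalg.norm(g) < tol:
            break
        # Exact (Cauchy) step length minimizing f along -g.
        alpha = (g @ g) / (g @ (A @ g))
        x = x - alpha * g
    return x, k

# Usage: a small random SPD system.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
b = rng.standard_normal(5)
x_star, iters = cauchy_gradient(A, b, np.zeros(5))
print(iters, np.linalg.norm(A @ x_star - b))
```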

A Regularized Smoothing Method for Fully Parameterized Convex Problems with Applications to Convex and Nonconvex Two-Stage Stochastic Programming

We present an approach to regularize and approximate solution mappings of parametric convex optimization problems that combines interior penalty (log-barrier) solutions with Tikhonov regularization. Because the regularized mappings are single-valued and smooth under reasonable conditions, they can be used to build a computationally practical smoothing for the associated optimal value function. The value function in …
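To make the construction concrete, here is a minimal sketch on a toy one-dimensional parametric problem (the problem and all names are illustrative assumptions, not taken from the paper): the log-barrier replaces the inequality constraint, and the Tikhonov term makes the regularized minimizer unique, so the resulting solution map is single-valued and varies smoothly in the parameter.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def regularized_solution(p, mu=1e-2, eps=1e-2):
    """Smoothed solution map for the toy problem min_x (x - p)^2 s.t. x >= 0.
    The log-barrier -mu*log(x) enforces x > 0, and the Tikhonov term
    (eps/2)*x^2 makes the regularized minimizer unique and stable."""
    obj = lambda x: (x - p) ** 2 - mu * np.log(x) + 0.5 * eps * x ** 2
    res = minimize_scalar(obj, bounds=(1e-12, 10.0), method="bounded")
    return res.x

# The unregularized solution map max(p, 0) has a kink at p = 0;
# the regularized map varies smoothly across it.
for p in (-0.5, 0.0, 0.5):
    print(p, regularized_solution(p))
```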

Fast Multiple Splitting Algorithms for Convex Optimization

In this paper we present two different classes of general $K$-splitting algorithms for solving finite-dimensional convex optimization problems. Under the assumption that the function being minimized has a Lipschitz continuous gradient, we prove that the number of iterations needed by the first class of algorithms to obtain an $\epsilon$-optimal solution is $O(1/\epsilon)$. The algorithms in …
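The paper's splitting schemes are not spelled out in the excerpt; as a hedged baseline achieving the same $O(1/\epsilon)$ iteration count, the sketch below runs plain gradient descent on $f = \sum_i f_i$, using the splitting structure only to assemble the gradient and the Lipschitz constant block by block. All function and variable names are illustrative.

```python
import numpy as np

def split_gradient_descent(grads, Ls, x0, tol=1e-8, max_iter=10000):
    """Baseline gradient method for f(x) = sum_i f_i(x), where grads[i]
    computes grad f_i and Ls[i] is its Lipschitz constant. With step 1/L,
    L = sum(Ls), the gap f(x_k) - f* decays like O(L/k), i.e. O(1/eps)
    iterations suffice for an eps-optimal point."""
    L = sum(Ls)
    x = x0.astype(float)
    for k in range(max_iter):
        g = sum(gi(x) for gi in grads)  # assemble the gradient block by block
        if np.linalg.norm(g) < tol:
            break
        x -= g / L
    return x, k

# Usage: two quadratic pieces f_i(x) = 0.5 * ||A_i x - b_i||^2.
rng = np.random.default_rng(1)
A1, A2 = rng.standard_normal((4, 3)), rng.standard_normal((4, 3))
b1, b2 = rng.standard_normal(4), rng.standard_normal(4)
grads = [lambda x, A=A1, b=b1: A.T @ (A @ x - b),
         lambda x, A=A2, b=b2: A.T @ (A @ x - b)]
Ls = [np.linalg.norm(A1, 2) ** 2, np.linalg.norm(A2, 2) ** 2]
x_star, iters = split_gradient_descent(grads, Ls, np.zeros(3))
```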

A gradient-based approach for computing Nash equilibria of large sequential games

We propose a new gradient-based scheme to approximate Nash equilibria of large sequential two-player, zero-sum games. The algorithm uses modern smoothing techniques for saddle-point problems, tailored specifically to the polytopes arising in the Nash equilibrium problem.

Citation: Working Paper, Tepper School of Business, Carnegie Mellon University.
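To illustrate the kind of smoothing the abstract refers to, here is a sketch on a normal-form (matrix) game rather than the sequential-game polytopes the paper targets: the inner maximization over the simplex is smoothed with an entropy term, turning it into a log-sum-exp whose gradient is a softmax, and the outer player follows exponentiated-gradient (mirror-descent) steps. The game, step sizes, and names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def softmax(v):
    z = np.exp(v - v.max())
    return z / z.sum()

def smoothed_saddle_gradient(A, mu=0.05, eta=0.1, iters=2000):
    """Entropy-smoothed gradient scheme for the matrix game
    min_{x in simplex} max_{y in simplex} x'Ay. The inner max is smoothed
    to f_mu(x) = mu * log(sum_j exp((A'x)_j / mu)), whose gradient is
    A @ softmax(A'x / mu); x is updated by exponentiated-gradient steps
    that keep it on the simplex."""
    m, _ = A.shape
    x = np.full(m, 1.0 / m)
    for _ in range(iters):
        y = softmax(A.T @ x / mu)   # smoothed best response of player 2
        g = A @ y                   # gradient of the smoothed value
        x = x * np.exp(-eta * g)    # exponentiated-gradient step
        x /= x.sum()
    return x

# Usage: rock-paper-scissors; the scheme recovers the uniform equilibrium.
A = np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])
print(smoothed_saddle_gradient(A))
```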