A dual spectral projected gradient method for log-determinant semidefinite problems

We extend the results on the spectral projected gradient method of Birgin et al. (2000) to log-determinant semidefinite problems (SDPs) with linear constraints and propose a spectral projected gradient method for the dual problem. Our method is based on alternating projections onto the intersection of two convex sets, which first projects onto the …
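For readers unfamiliar with the method being extended, the sketch below shows a generic spectral projected gradient iteration in the spirit of Birgin et al.: a projected gradient step with a Barzilai-Borwein (spectral) stepsize. It is a minimal illustration, not the paper's dual algorithm; `grad` and `project` are placeholders for the problem's gradient and the projection onto the feasible set.

```python
import numpy as np

def spg(grad, project, x0, max_iter=100, alpha=1.0):
    """Generic spectral projected gradient sketch (Birgin et al. flavor).
    `grad` and `project` are placeholders; the nonmonotone line search of
    the full method is omitted for brevity."""
    x = project(x0)
    g = grad(x)
    for _ in range(max_iter):
        x_new = project(x - alpha * g)               # projected gradient step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0  # BB1 spectral stepsize
        x, g = x_new, g_new
    return x
```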

Acceleration of Primal-Dual Methods by Preconditioning and Simple Subproblem Procedures

The Primal-Dual Hybrid Gradient (PDHG) method and the Alternating Direction Method of Multipliers (ADMM) are two widely used first-order optimization methods. They reduce a difficult problem to simple subproblems, so they are easy to implement and have many applications. As first-order methods, however, they are sensitive to problem conditioning and can struggle to reach the desired accuracy. To improve …
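As context for the truncated passage, a minimal unpreconditioned PDHG loop for problems of the form min_x f(x) + g(Kx) looks as follows. The proximal-operator conventions and the stepsize condition tau * sigma * ||K||^2 <= 1 are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def pdhg(K, prox_f, prox_gstar, x0, y0, tau, sigma, iters=500):
    """Unpreconditioned PDHG (Chambolle-Pock form) for min_x f(x) + g(Kx).
    `prox_f(v, tau)` and `prox_gstar(v, sigma)` are proximal maps of f and
    of the convex conjugate g*; requires tau * sigma * ||K||^2 <= 1."""
    x, y, x_bar = x0.copy(), y0.copy(), x0.copy()
    for _ in range(iters):
        y = prox_gstar(y + sigma * (K @ x_bar), sigma)  # dual ascent step
        x_new = prox_f(x - tau * (K.T @ y), tau)        # primal descent step
        x_bar = 2 * x_new - x                           # extrapolation
        x = x_new
    return x, y
```

Preconditioning, in this setting, amounts to replacing the scalar stepsizes tau and sigma with matrices adapted to K, which is one natural reading of the acceleration the title refers to.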

Gradient methods exploiting spectral properties

We propose a new stepsize for the gradient method. We show that this stepsize converges to the reciprocal of the largest eigenvalue of the Hessian when Dai and Yang's asymptotically optimal gradient method (Computational Optimization and Applications, 2006, 33(1): 73-88) is applied to minimize quadratic objective functions. Based on this spectral property, we develop …
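To make the role of the quantity 1/lambda_max concrete (a toy illustration, not the proposed method): on a strongly convex quadratic, a gradient method with fixed stepsize equal to the reciprocal of the largest Hessian eigenvalue is guaranteed to converge, which is why a stepsize that adaptively approaches this value is attractive.

```python
import numpy as np

# Illustration only: on f(x) = 0.5 x^T A x - b^T x with A positive definite,
# the stepsize 1/lambda_max(A) makes plain gradient descent contract.
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + np.eye(50)                  # symmetric positive definite Hessian
b = rng.standard_normal(50)
step = 1.0 / np.linalg.eigvalsh(A)[-1]    # reciprocal of largest eigenvalue

x = np.zeros(50)
for _ in range(2000):
    x -= step * (A @ x - b)               # gradient step with spectral stepsize
print(np.linalg.norm(A @ x - b))          # residual shrinks toward 0
```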

An Online-Learning Approach to Inverse Optimization

In this paper, we demonstrate how to learn the objective function of a decision-maker while observing only the problem input data and the decision-maker's corresponding decisions over multiple rounds. Our approach is based on online learning and works for linear objectives over arbitrary feasible sets for which we have a linear optimization oracle. As such, …
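A hedged sketch of what such an online-learning loop might look like; the perceptron-style update rule below is my own illustrative choice, not necessarily the paper's, and all names are hypothetical.

```python
import numpy as np

def learn_objective(rounds, oracle, eta=0.1, dim=5):
    """Maintain an estimate c_hat of the agent's linear objective; after each
    observed decision, move c_hat so the observation scores at least as well
    as our oracle's prediction. `oracle(c, data)` should return
    argmax over the feasible set of c @ x; all names here are illustrative."""
    c_hat = np.zeros(dim)
    for data, x_obs in rounds:             # (problem input, observed decision)
        x_pred = oracle(c_hat, data)       # prediction under current estimate
        c_hat += eta * (x_obs - x_pred)    # perceptron-style correction
        c_hat /= max(1.0, np.linalg.norm(c_hat))  # keep the estimate bounded
    return c_hat
```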

Projective Hedging for Stochastic Programming

We propose a decomposition algorithm for multistage stochastic programming that resembles the progressive hedging method of Rockafellar and Wets but is provably capable of several forms of asynchronous operation. We derive the method from a class of projective operator splitting methods recently proposed by Combettes and Eckstein, significantly expanding the known applications of those …
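For contrast with the projective variant, the classical (synchronous) progressive hedging loop of Rockafellar and Wets is sketched below; `solve_sub` stands in for an augmented scenario subproblem solver and is an assumption of this sketch.

```python
import numpy as np

def progressive_hedging(scenarios, solve_sub, rho=1.0, iters=50):
    """Classical progressive hedging (Rockafellar-Wets), synchronous version.
    solve_sub(s, w, xbar) should return (illustrative signature)
        argmin_x f_s(x) + w @ x + (rho / 2) * ||x - xbar||^2."""
    probs = np.array([s["prob"] for s in scenarios])
    xs = np.zeros((len(scenarios), scenarios[0]["dim"]))
    ws = np.zeros_like(xs)
    xbar = np.zeros(xs.shape[1])
    for _ in range(iters):
        for i, s in enumerate(scenarios):
            xs[i] = solve_sub(s, ws[i], xbar)   # per-scenario subproblem
        xbar = probs @ xs                       # enforce nonanticipativity
        ws += rho * (xs - xbar)                 # dual (multiplier) update
    return xbar
```

The synchronous averaging step is exactly what asynchronous variants must relax, which is where the projective splitting machinery comes in.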

Understanding the Acceleration Phenomenon via High-Resolution Differential Equations

Gradient-based optimization algorithms can be studied from the perspective of limiting ordinary differential equations (ODEs). Motivated by the fact that existing ODEs do not distinguish between two fundamentally different algorithms, Nesterov's accelerated gradient method for strongly convex functions (NAG-SC) and Polyak's heavy-ball method, we study an alternative limiting process that yields high-resolution ODEs. We …
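For concreteness, the high-resolution ODEs in this line of work take roughly the following form (recalled from the associated literature, so treat as a sketch), with stepsize s and strong-convexity parameter mu; the O(sqrt(s)) gradient-correction term is what separates NAG-SC from heavy-ball.

```latex
% Sketch of the high-resolution ODEs (stepsize s, strong convexity \mu).
% NAG-SC keeps a Hessian-driven gradient-correction term:
\ddot{X}(t) + 2\sqrt{\mu}\,\dot{X}(t)
  + \sqrt{s}\,\nabla^2 f(X(t))\,\dot{X}(t)
  + \bigl(1 + \sqrt{\mu s}\bigr)\nabla f(X(t)) = 0.
% Heavy-ball lacks that term:
\ddot{X}(t) + 2\sqrt{\mu}\,\dot{X}(t)
  + \bigl(1 + \sqrt{\mu s}\bigr)\nabla f(X(t)) = 0.
% Both collapse to the same low-resolution ODE as s \to 0,
% which is why the low-resolution limit cannot tell them apart.
```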

Condition Number Analysis of Logistic Regression, and its Implications for Standard First-Order Solution Methods

Logistic regression is one of the most popular methods in binary classification, wherein estimation of model parameters is carried out by solving the maximum likelihood (ML) optimization problem, and the ML estimator is defined to be the optimal solution of this problem. It is well known that the ML estimator exists when the data is …
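The truncated condition presumably concerns separability; the standard fact behind it can be checked in a few lines: on linearly separable data, scaling any separating direction drives the logistic loss to zero, so no finite maximizer of the likelihood (equivalently, no minimizer of the loss) exists.

```python
import numpy as np

# Separable 1-D data: positives on the right of 0, negatives on the left.
X = np.array([[1.0], [2.0], [-1.0], [-2.0]])
y = np.array([1, 1, -1, -1])

def logistic_loss(w):
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

for scale in [1.0, 10.0, 100.0]:
    # Loss decreases toward 0 as the weight grows: no finite minimizer.
    print(scale, logistic_loss(np.array([scale])))
```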

Exploiting Low-Rank Structure in Semidefinite Programming by Approximate Operator Splitting

In contrast with many other classes of convex optimization problems, state-of-the-art semidefinite programming solvers are still unable to efficiently solve large-scale instances. This work aims to reduce this scalability gap by proposing a novel proximal algorithm for solving general semidefinite programming problems. The proposed methodology, based on the primal-dual hybrid gradient method, allows the presence of …
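One way to read "exploiting low-rank structure" in a proximal/splitting context (a sketch under my own assumptions, not the paper's procedure): replace the exact projection onto the PSD cone, which needs a full eigendecomposition, with a truncated one that computes only the leading eigenpairs.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def approx_psd_projection(S, rank):
    """Project a symmetric matrix onto the PSD cone using only its top
    `rank` eigenpairs (exact whenever at most `rank` eigenvalues are
    positive). Illustrative, not the paper's exact operator."""
    vals, vecs = eigsh(S, k=rank, which="LA")  # largest algebraic eigenpairs
    pos = np.maximum(vals, 0.0)                # clip negative eigenvalues
    return (vecs * pos) @ vecs.T

S = np.random.default_rng(1).standard_normal((200, 200))
S = (S + S.T) / 2
P = approx_psd_projection(S, rank=20)          # low-rank PSD approximation
```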

Hamiltonian Descent Methods

We propose a family of optimization methods that achieve linear convergence using first-order gradient information and constant step sizes on a class of convex functions much larger than the smooth and strongly convex ones. This larger class includes functions whose second derivatives may be singular or unbounded at their minima. Our methods are discretizations of …
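A hedged sketch of the flavor of such discretizations: dissipative Hamiltonian dynamics dx/dt = grad k(p), dp/dt = -gamma * p - grad f(x), integrated with a simple semi-implicit scheme. The kinetic energy k and the integrator below are assumptions of this sketch, not the paper's specific methods.

```python
import numpy as np

def hamiltonian_descent(grad_f, grad_k, x0, eps=0.1, gamma=1.0, iters=1000):
    """Semi-implicit discretization of dissipative Hamiltonian dynamics:
        dx/dt = grad k(p),   dp/dt = -gamma * p - grad f(x).
    `grad_k` is the gradient of a chosen kinetic energy; illustrative only."""
    x, p = x0.copy(), np.zeros_like(x0)
    damp = 1.0 / (1.0 + eps * gamma)        # implicit treatment of friction
    for _ in range(iters):
        p = damp * (p - eps * grad_f(x))    # momentum: friction + force
        x = x + eps * grad_k(p)             # position: kinetic gradient flow
    return x
```

With the quadratic kinetic energy k(p) = ||p||^2 / 2 this reduces to familiar momentum-style descent; the appeal of the family is that non-quadratic choices of k can be matched to objectives whose curvature is singular or unbounded at the minimum.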

Universal Barrier is n-Self-Concordant

This paper shows that the self-concordance parameter of the universal barrier on any n-dimensional proper convex domain is upper bounded by n. This bound is tight and improves the previous O(n) bound by Nesterov and Nemirovski. The key to our main result is a pair of new, sharp moment inequalities for s-concave distributions, which could …
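For the statement to be self-contained, recall the standard definitions the parameter refers to (the classical Nesterov-Nemirovski notions, stated here for reference):

```latex
% F is a \nu-self-concordant barrier on the interior of a convex body K if,
% for all x in the domain and all directions h,
|D^3 F(x)[h,h,h]| \le 2\,\bigl(D^2 F(x)[h,h]\bigr)^{3/2},
\qquad
|D F(x)[h]| \le \sqrt{\nu}\,\bigl(D^2 F(x)[h,h]\bigr)^{1/2}.
% Universal barrier (Nesterov-Nemirovski): the log-volume of the polar set,
F(x) = \log \operatorname{vol}\bigl((K - x)^{\circ}\bigr).
% The paper's claim: this F admits \nu = n, and the bound is tight.
```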