An Online-Learning Approach to Inverse Optimization

In this paper, we demonstrate how to learn the objective function of a decision-maker while only observing the problem input data and the decision-maker’s corresponding decisions over multiple rounds. Our approach is based on online learning and works for linear objectives over arbitrary feasible sets for which we have a linear optimization oracle. As such, …
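A minimal sketch of the kind of online update this setting suggests, assuming a finite candidate set so the linear optimization oracle is a simple argmin, the suboptimality loss $\ell_t(c) = \langle c, x_t\rangle - \min_{x \in X_t}\langle c, x\rangle$, and an estimated cost vector kept on the simplex; the function names and parameters here are illustrative, not the paper's.

```python
import numpy as np

def linear_oracle(c, vertices):
    """Toy linear optimization oracle: minimize <c, x> over a finite set of
    candidate points (a stand-in for any feasible set with an LP oracle)."""
    return vertices[np.argmin(vertices @ c)]

def online_inverse_opt(observations, dim, eta=0.1):
    """Online subgradient descent on the estimated cost vector, using the
    suboptimality loss l_t(c) = <c, x_t> - min_{x in X_t} <c, x>,
    whose subgradient with respect to c is x_t - x_hat_t."""
    c = np.ones(dim) / dim                      # initial estimate on the simplex
    for vertices, x_obs in observations:
        x_hat = linear_oracle(c, vertices)      # oracle call under current estimate
        c = c - eta * (x_obs - x_hat)           # subgradient step
        c = np.maximum(c, 0.0)                  # crude projection back to the simplex
        c = c / c.sum() if c.sum() > 0 else np.ones(dim) / dim
    return c

# Example: feasible set {(1,0), (0,1)}, hidden true cost (0.8, 0.2),
# so the observed decision is (0, 1); the learner recovers a consistent cost.
verts = np.array([[1.0, 0.0], [0.0, 1.0]])
obs = [(verts, linear_oracle(np.array([0.8, 0.2]), verts))] * 20
print(online_inverse_opt(obs, dim=2))
```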

Projective Hedging for Stochastic Programming

We propose a decomposition algorithm for multistage stochastic programming that resembles the progressive hedging method of Rockafellar and Wets, but is provably capable of several forms of asynchronous operation. We derive the method from a class of projective operator splitting methods fairly recently proposed by Combettes and Eckstein, significantly expanding the known applications of those …
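For orientation, the classical (fully synchronous) progressive hedging iteration for a scenario-decomposed problem with scenario objectives $f_s$, probabilities $p_s$, and penalty $\rho > 0$ reads roughly as follows; the asynchronous projective variant proposed in the paper departs from this template.
\begin{align*}
x_s^{k+1} &\in \arg\min_{x}\; f_s(x) + \langle w_s^k, x\rangle + \tfrac{\rho}{2}\,\lVert x - \bar{x}^k\rVert^2, \qquad s = 1,\dots,S,\\
\bar{x}^{k+1} &= \textstyle\sum_{s=1}^{S} p_s\, x_s^{k+1},\\
w_s^{k+1} &= w_s^k + \rho\,\bigl(x_s^{k+1} - \bar{x}^{k+1}\bigr), \qquad s = 1,\dots,S.
\end{align*}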

Understanding the Acceleration Phenomenon via High-Resolution Differential Equations

Gradient-based optimization algorithms can be studied from the perspective of limiting ordinary differential equations (ODEs). Motivated by the fact that existing ODEs do not distinguish between two fundamentally different algorithms, namely Nesterov’s accelerated gradient method for strongly convex functions (NAG-SC) and Polyak’s heavy-ball method, we study an alternative limiting process that yields high-resolution ODEs. We …
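Concretely, with step size $s$ and strong-convexity parameter $\mu$, the low-resolution ODE shared by both methods and the high-resolution ODEs that separate them take roughly the following form in the high-resolution analysis (exact constants should be checked against the paper):
\begin{align*}
\text{low resolution (both methods):}\quad & \ddot{X} + 2\sqrt{\mu}\,\dot{X} + \nabla f(X) = 0,\\
\text{heavy-ball (high resolution):}\quad & \ddot{X} + 2\sqrt{\mu}\,\dot{X} + (1 + \sqrt{\mu s})\,\nabla f(X) = 0,\\
\text{NAG-SC (high resolution):}\quad & \ddot{X} + 2\sqrt{\mu}\,\dot{X} + \sqrt{s}\,\nabla^2 f(X)\,\dot{X} + (1 + \sqrt{\mu s})\,\nabla f(X) = 0.
\end{align*}
The gradient-correction term $\sqrt{s}\,\nabla^2 f(X)\,\dot{X}$ is what distinguishes NAG-SC from the heavy-ball method at this resolution.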

Condition Number Analysis of Logistic Regression, and its Implications for Standard First-Order Solution Methods

Logistic regression is one of the most popular methods in binary classification, wherein estimation of model parameters is carried out by solving the maximum likelihood (ML) optimization problem, and the ML estimator is defined to be the optimal solution of this problem. It is well known that the ML estimator exists when the data is …
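A minimal sketch of the ML problem and a standard first-order method applied to it, with labels in $\{-1,+1\}$ and the average negative log-likelihood as the objective; the function names and step size are illustrative.

```python
import numpy as np

def logistic_loss(theta, X, y):
    """Average negative log-likelihood for labels y in {-1, +1}."""
    margins = y * (X @ theta)
    return np.mean(np.log1p(np.exp(-margins)))

def logistic_grad(theta, X, y):
    """Gradient of the average negative log-likelihood."""
    margins = y * (X @ theta)
    weights = -y / (1.0 + np.exp(margins))   # d/dm log(1 + e^{-m}) = -1 / (1 + e^{m})
    return X.T @ weights / X.shape[0]

def gradient_descent(X, y, steps=1000, lr=0.1):
    """Plain gradient descent on the ML objective; when the data are linearly
    separable the iterates grow without bound, since no finite minimizer exists."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        theta -= lr * logistic_grad(theta, X, y)
    return theta
```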

Exploiting Low-Rank Structure in Semidefinite Programming by Approximate Operator Splitting

In contrast with many other classes of convex optimization problems, state-of-the-art semidefinite programming solvers are still unable to efficiently solve large-scale instances. This work aims to reduce this scalability gap by proposing a novel proximal algorithm for solving general semidefinite programming problems. The proposed methodology, based on the primal-dual hybrid gradient method, allows the presence of …
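As a generic illustration (not the paper's specific scheme), one place where low-rank structure can be exploited inside a splitting method is the projection onto the positive semidefinite cone, which can be approximated using only the leading eigenpairs. The function below is a hypothetical sketch; for simplicity it still computes a full eigendecomposition.

```python
import numpy as np

def approx_psd_projection(S, rank):
    """Approximate projection of a symmetric matrix onto the PSD cone,
    keeping only the `rank` largest nonnegative eigenpairs. A practical
    solver would use a partial (Lanczos-type) eigensolver instead of eigh."""
    vals, vecs = np.linalg.eigh((S + S.T) / 2)   # symmetrize, full spectrum
    idx = np.argsort(vals)[::-1][:rank]          # indices of largest eigenvalues
    vals_r = np.clip(vals[idx], 0.0, None)       # discard the negative part
    vecs_r = vecs[:, idx]
    return (vecs_r * vals_r) @ vecs_r.T          # low-rank PSD reconstruction
```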

Hamiltonian Descent Methods

We propose a family of optimization methods that achieve linear convergence using first-order gradient information and constant step sizes on a class of convex functions much larger than the smooth and strongly convex ones. This larger class includes functions whose second derivatives may be singular or unbounded at their minima. Our methods are discretizations of …
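Schematically, the underlying dynamics are a dissipative Hamiltonian flow for $H(x,p) = f(x) + k(p)$, where the kinetic map $k$ is chosen to match the growth of $f$ near its minimum; the precise kinetic energies and discretizations are the paper's contribution.
\[
\dot{x}_t = \nabla k(p_t), \qquad \dot{p}_t = -\nabla f(x_t) - \gamma\, p_t, \qquad \gamma > 0 .
\]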

Universal Barrier is n-Self-Concordant

This paper shows that the self-concordance parameter of the universal barrier on any n-dimensional proper convex domain is upper bounded by n. This bound is tight and improves the previous O(n) bound by Nesterov and Nemirovski. The key to our main result is a pair of new, sharp moment inequalities for s-concave distributions, which could …
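For reference (up to the normalization used in the paper), the universal barrier of a proper convex domain $K \subset \mathbb{R}^n$ and the $\nu$-self-concordance conditions it is measured against are
\[
u(x) = \log \operatorname{vol}\bigl(K^{\circ}(x)\bigr), \qquad
K^{\circ}(x) = \{\, y : \langle y,\, z - x\rangle \le 1 \ \text{for all } z \in K \,\},
\]
\[
\bigl|D^3 F(x)[h,h,h]\bigr| \le 2\bigl(D^2 F(x)[h,h]\bigr)^{3/2}, \qquad
\bigl(D F(x)[h]\bigr)^2 \le \nu\, D^2 F(x)[h,h],
\]
for all $x$ in the interior of $K$ and all directions $h$; the result is that $u$ satisfies these with $\nu = n$.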

Deep Neural Network Structures Solving Variational Inequalities

We propose a novel theoretical framework to investigate deep neural networks using the formalism of proximal fixed point methods for solving variational inequalities. We first show that almost all activation functions used in neural networks are actually proximity operators. This leads to an algorithmic model alternating firmly nonexpansive and linear operators. We derive new results …
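As a small illustration of the observation that common activation functions are proximity operators: ReLU coincides with the proximity operator of the indicator of the nonnegative orthant (a Euclidean projection), and soft-thresholding is the proximity operator of a scaled $\ell_1$ norm. The snippet below just checks the first identity numerically.

```python
import numpy as np

def relu(z):
    """ReLU activation."""
    return np.maximum(z, 0.0)

def prox_nonneg_indicator(z):
    """Proximity operator of the indicator of the nonnegative orthant,
    i.e. the Euclidean projection onto {z >= 0}, which equals ReLU."""
    return np.clip(z, 0.0, None)

def soft_threshold(z, lam):
    """Proximity operator of lam * ||.||_1 (soft-thresholding), another
    firmly nonexpansive map that doubles as an activation function."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z = np.array([-1.5, 0.2, 3.0])
assert np.allclose(relu(z), prox_nonneg_indicator(z))
```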

A proximal ADMM with the Broyden family for Convex Optimization Problems

Alternating direction methods of multipliers (ADMM) have been well studied and effectively used in various application fields. The classical ADMM must solve two subproblems exactly at each iteration. To overcome the difficulty of computing the exact solution of the subproblems, some proximal terms are added to the subproblems. Recently, Gu and Yamashita studied a special …
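For context, the generic proximal-ADMM template for $\min_{x,z}\, f(x) + g(z)$ subject to $Ax + Bz = b$ adds proximal terms with matrices $P_1, P_2 \succeq 0$ to the two subproblems (the Broyden-family choice of these matrices is the paper's specific contribution and is not reproduced here). With the augmented Lagrangian $L_\rho(x,z,y) = f(x) + g(z) + \langle y,\, Ax + Bz - b\rangle + \tfrac{\rho}{2}\lVert Ax + Bz - b\rVert^2$:
\begin{align*}
x^{k+1} &\in \arg\min_x\; L_\rho(x, z^k, y^k) + \tfrac{1}{2}\lVert x - x^k\rVert_{P_1}^2,\\
z^{k+1} &\in \arg\min_z\; L_\rho(x^{k+1}, z, y^k) + \tfrac{1}{2}\lVert z - z^k\rVert_{P_2}^2,\\
y^{k+1} &= y^k + \rho\,\bigl(Ax^{k+1} + Bz^{k+1} - b\bigr).
\end{align*}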

The primal-dual hybrid gradient method reduces to a primal method for linearly constrained optimization problems

In this work, we show that for linearly constrained optimization problems the primal-dual hybrid gradient algorithm, analyzed by Chambolle and Pock [3], can be written as an entirely primal algorithm. This allows us to prove convergence of the iterates even in the degenerate cases when the linear system is inconsistent or when the strong duality …
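As a starting point, applying the Chambolle–Pock iteration to $\min_x f(x)$ subject to $Ax = b$, viewed as $\min_x f(x) + g(Ax)$ with $g = \iota_{\{b\}}$ and hence $g^*(y) = \langle b, y\rangle$, gives (with step sizes $\tau, \sigma > 0$ and $\tau\sigma\lVert A\rVert^2 < 1$):
\begin{align*}
y^{k+1} &= y^k + \sigma\bigl(A(2x^k - x^{k-1}) - b\bigr),\\
x^{k+1} &= \operatorname{prox}_{\tau f}\bigl(x^k - \tau A^{\top} y^{k+1}\bigr).
\end{align*}
The paper's observation is that, for this problem class, the dual variable can be eliminated and the iteration rewritten purely in terms of primal quantities.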