Online Convex Optimization Perspective for Learning from Dynamically Revealed Preferences

We study the problem of online learning (OL) from revealed preferences: a learner wishes to learn an agent’s private utility function by observing the agent’s utility-maximizing actions in a changing environment. We adopt an online inverse optimization setup, where the learner observes a stream of the agent’s actions in an online fashion and the learning performance …
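
A minimal sketch of such an online inverse-optimization loop, under illustrative assumptions that are not taken from the paper (a linear utility $\theta^\top x$ and a budget-type feasible set that changes each round): the learner runs online projected gradient descent on the convex suboptimality loss $\ell_t(\theta) = \max_{x \in X_t} \theta^\top x - \theta^\top x_t$, whose subgradient is the gap between the model's best response and the observed action.

    import numpy as np

    rng = np.random.default_rng(0)
    d, T, eta = 5, 500, 0.1

    def project_simplex(v):
        # Euclidean projection onto the probability simplex (keeps the
        # utility estimate away from the trivial zero vector).
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
        tau = (css[rho] - 1) / (rho + 1.0)
        return np.maximum(v - tau, 0.0)

    def best_response(theta, prices):
        # Agent maximizes theta @ x over {x >= 0 : prices @ x <= 1}:
        # the whole budget goes to the best bang-per-buck coordinate.
        j = np.argmax(theta / prices)
        x = np.zeros_like(theta)
        x[j] = 1.0 / prices[j]
        return x

    theta_true = project_simplex(rng.random(d))      # agent's private utility
    theta_hat = np.ones(d) / d                       # learner's estimate
    losses = []
    for t in range(T):
        prices = rng.uniform(0.5, 2.0, size=d)       # changing environment
        x_agent = best_response(theta_true, prices)  # observed optimal action
        x_model = best_response(theta_hat, prices)
        losses.append(theta_hat @ x_model - theta_hat @ x_agent)  # suboptimality loss
        theta_hat = project_simplex(theta_hat - eta * (x_model - x_agent))  # OGD step

    print("avg loss, first 50 rounds:", np.mean(losses[:50]))
    print("avg loss, last 50 rounds:", np.mean(losses[-50:]))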

Learning Dynamical Systems with Side Information

We present a mathematical and computational framework for the problem of learning a dynamical system from noisy observations of a few of its trajectories, subject to side information. Side information is any knowledge we might have about the dynamical system we would like to learn besides trajectory data. It is typically inferred from domain-specific knowledge or …
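
As a toy illustration of fitting dynamics under side information (only a sketch under simplifying assumptions: linear dynamics, side information given as a known decoupling of one state, and cvxpy as the solver; the paper's framework is considerably more general):

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, T, dt = 3, 200, 0.01
    # Ground-truth linear system in which state 3 is decoupled from states 1-2.
    A_true = np.array([[-1.0,  2.0,  0.0],
                       [-2.0, -1.0,  0.0],
                       [ 0.0,  0.0, -0.5]])

    # One noisy trajectory of x' = A_true x (forward Euler integration).
    X = np.zeros((n, T))
    X[:, 0] = rng.standard_normal(n)
    for t in range(T - 1):
        X[:, t + 1] = X[:, t] + dt * (A_true @ X[:, t])
    X += 0.01 * rng.standard_normal(X.shape)

    Xdot = (X[:, 1:] - X[:, :-1]) / dt        # finite-difference derivatives
    Xmid = X[:, :-1]

    # Least-squares fit with the side information imposed as hard constraints:
    # "state 3 does not interact with states 1 and 2".
    A = cp.Variable((n, n))
    side_info = [A[2, 0] == 0, A[2, 1] == 0, A[0, 2] == 0, A[1, 2] == 0]
    cp.Problem(cp.Minimize(cp.sum_squares(Xdot - A @ Xmid)), side_info).solve()
    print(np.round(A.value, 2))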

A Modified Proximal Symmetric ADMM for Multi-Block Separable Convex Optimization with Linear Constraints

We consider the linearly constrained separable convex optimization problem whose objective function is separable w.r.t. $m$ blocks of variables. A number of methods have been proposed and well studied. Specifically, a modified strictly contractive Peaceman-Rachford splitting method (SC-PRCM) has been well studied in the literature for the special case of $m=3$. Based on the modified …
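
For orientation only, here is the plain Gauss-Seidel multi-block splitting that methods such as SC-PRCM modify. This sketch is not the paper's modified proximal symmetric ADMM, and the unmodified sweep is known to possibly diverge in general, although it behaves fine on this strongly convex toy problem with $m=3$ blocks.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, p, rho = 3, 4, 2, 1.0         # blocks, block dimension, constraint rows
    A = [rng.standard_normal((p, n)) for _ in range(m)]
    c = [rng.standard_normal(n) for _ in range(m)]
    b = rng.standard_normal(p)

    # minimize sum_i 0.5*||x_i - c_i||^2  subject to  sum_i A_i x_i = b
    x = [np.zeros(n) for _ in range(m)]
    u = np.zeros(p)                     # scaled dual variable

    for k in range(200):
        for i in range(m):              # Gauss-Seidel sweep over the blocks
            r = sum(A[j] @ x[j] for j in range(m) if j != i) - b
            lhs = np.eye(n) + rho * A[i].T @ A[i]
            rhs = c[i] - rho * A[i].T @ (r + u)
            x[i] = np.linalg.solve(lhs, rhs)
        residual = sum(A[i] @ x[i] for i in range(m)) - b
        u += residual                   # dual update

    print("constraint violation:", np.linalg.norm(residual))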

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters

Many key problems in machine learning and data science are routinely modeled as optimization problems and solved via optimization algorithms. With the increasing volume of data and the growing size and complexity of the statistical models used to formulate these often ill-conditioned optimization tasks, there is a need for new efficient algorithms able to …
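
A small sketch of the "randomize over data and over parameters" idea on ridge regression (illustrative only; the particular sampling scheme and step size here are assumptions, not algorithms from the thesis):

    import numpy as np

    rng = np.random.default_rng(0)
    N, d, lam = 1000, 50, 0.1
    X = rng.standard_normal((N, d))
    w_true = rng.standard_normal(d)
    y = X @ w_true + 0.1 * rng.standard_normal(N)

    def grad_block(w, rows, cols):
        # Stochastic gradient of the ridge loss, restricted to a random
        # coordinate block: randomness over both data and parameters.
        r = X[rows] @ w - y[rows]
        return X[np.ix_(rows, cols)].T @ r / len(rows) + lam * w[cols]

    w, step = np.zeros(d), 0.05
    for t in range(2000):
        rows = rng.choice(N, size=32, replace=False)   # subsample the data
        cols = rng.choice(d, size=10, replace=False)   # subsample the coordinates
        w[cols] -= step * grad_block(w, rows, cols)

    full_grad = X.T @ (X @ w - y) / N + lam * w
    print("norm of full gradient:", np.linalg.norm(full_grad))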

A geodesic interior-point method for linear optimization over symmetric cones

We develop a new interior-point method (IPM) for symmetric-cone optimization, a common generalization of linear, second-order-cone, and semidefinite programming. In contrast to classical IPMs, we update iterates along a geodesic of the cone rather than within the kernel of the linear constraints. This approach yields a primal-dual-symmetric, scale-invariant, and line-search-free algorithm that uses just half the …
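
To make the geodesic idea concrete in the semidefinite case, the snippet below only computes a geodesic of the PSD cone under the affine-invariant metric; it is not the paper's interior-point method.

    import numpy as np
    from scipy.linalg import sqrtm, fractional_matrix_power

    rng = np.random.default_rng(0)

    def random_spd(k):
        M = rng.standard_normal((k, k))
        return M @ M.T + k * np.eye(k)

    def geodesic(A, B, t):
        # gamma(t) = A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2},
        # the geodesic from A to B inside the PSD cone.
        Ah = sqrtm(A)
        Ahi = np.linalg.inv(Ah)
        G = Ah @ fractional_matrix_power(Ahi @ B @ Ahi, t) @ Ah
        return np.real(G)               # discard negligible imaginary round-off

    A, B = random_spd(4), random_spd(4)
    for t in (0.0, 0.5, 1.0):
        G = geodesic(A, B, t)
        print(t, "smallest eigenvalue:", np.linalg.eigvalsh((G + G.T) / 2).min())

Every point of this curve remains positive definite, in contrast to a straight-line step between two interior points of a general feasible region, which is what makes moving along geodesics of the cone attractive for keeping iterates interior.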

Decentralized Learning with Lazy and Approximate Dual Gradients

This paper develops algorithms for decentralized machine learning over a network, where data are distributed, computation is localized, and communication is allowed only between neighbors. A line of recent research in this area focuses on improving both computation and communication complexities. The methods SSDA and MSDA \cite{scaman2017optimal} have optimal communication complexity when the objective is smooth …
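
For context, a bare-bones decentralized sketch on a ring network: this is plain decentralized gradient descent with a gossip (mixing) matrix, shown only to fix the setting; it is neither SSDA/MSDA nor the lazy, approximate dual-gradient methods developed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, d = 5, 3

    # Ring network: symmetric, doubly stochastic mixing (gossip) matrix.
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        W[i, i] = 0.5
        W[i, (i - 1) % n_agents] = 0.25
        W[i, (i + 1) % n_agents] = 0.25

    # Each agent holds its own least-squares data; only neighbors communicate.
    A = [rng.standard_normal((20, d)) for _ in range(n_agents)]
    b = [A[i] @ np.ones(d) + 0.1 * rng.standard_normal(20) for i in range(n_agents)]

    X = np.zeros((n_agents, d))      # row i = agent i's local iterate
    step = 0.01
    for k in range(500):
        grads = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) / 20 for i in range(n_agents)])
        X = W @ X - step * grads     # gossip averaging + local gradient step

    print("disagreement across agents:", np.linalg.norm(X - X.mean(axis=0)))
    print("average iterate:", np.round(X.mean(axis=0), 3))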

Convergence analysis under consistent error bounds

We introduce the notion of consistent error bound functions, which provides a unifying framework for error bounds for multiple convex sets. This framework goes beyond the classical Lipschitzian and Hölderian error bounds and includes the logarithmic and entropic error bounds found in the exponential cone. It also includes the error bounds obtainable under the theory of …
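
For reference, the classical Hölderian error bound the abstract contrasts with can be stated as follows (a standard formulation, not quoted from the paper): for closed convex sets $C_1, C_2$ with $C_1 \cap C_2 \neq \emptyset$ and any bounded set $B$, there exist $\theta_B > 0$ and $\gamma \in (0, 1]$ such that
\[
\operatorname{dist}(x, C_1 \cap C_2) \;\le\; \theta_B \,\max\{\operatorname{dist}(x, C_1), \operatorname{dist}(x, C_2)\}^{\gamma}
\qquad \text{for all } x \in B,
\]
with $\gamma = 1$ giving the Lipschitzian case. Roughly speaking, a consistent error bound function replaces the power $(\cdot)^{\gamma}$ on the right-hand side by a more general residual function, which is what accommodates logarithmic and entropic behaviour such as that arising in the exponential cone.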

Tight bounds on Lyapunov rank

The Lyapunov rank of a cone is the number of independent equations obtainable from an analogue of the complementary slackness condition in cone programming problems, and more equations are generally thought to be better. Bounding the Lyapunov rank of a proper cone in $\mathbb{R}^n$ from above is an open problem. Gowda and Tao gave an …
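
For context, the standard definitions behind this quantity (stated here for orientation, not quoted from the paper): a linear map $L$ on $\mathbb{R}^n$ is Lyapunov-like on a proper cone $K$ with dual cone $K^{*}$ if
\[
\langle L x, s \rangle = 0 \quad \text{whenever } x \in K, \; s \in K^{*}, \; \langle x, s \rangle = 0,
\]
and the Lyapunov rank $\beta(K)$ is the dimension of the space of all Lyapunov-like maps on $K$; each such map contributes one linear equation that can be appended to the complementarity conditions. For example, on the nonnegative orthant $K = \mathbb{R}^n_{+}$ the Lyapunov-like maps are exactly the diagonal matrices, so $\beta(\mathbb{R}^n_{+}) = n$, recovering the $n$ equations $x_i s_i = 0$ of linear-programming complementary slackness.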

Proscribed normal decompositions of Euclidean Jordan algebras

Normal decomposition systems unify many results from convex matrix analysis regarding functions that are invariant with respect to a group of transformations, in particular the unitarily invariant matrix functions and the affiliated permutation-invariant “spectral functions” that depend only on eigenvalues. Spectral functions extend in a natural way to Euclidean Jordan algebras, and several authors have studied …
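
Concretely, the spectral functions mentioned above are those of the form (a standard definition, included for orientation)
\[
F(X) \;=\; f(\lambda(X)), \qquad X \in \mathcal{S}^{n},
\]
where $\lambda(X)$ is the vector of eigenvalues of the symmetric matrix $X$ and $f$ is permutation-invariant; such an $F$ satisfies $F(U^{\top} X U) = F(X)$ for every orthogonal $U$. In a Euclidean Jordan algebra the same construction uses the spectral decomposition $x = \sum_i \lambda_i(x)\, e_i$ with respect to a Jordan frame $\{e_i\}$.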

On the strong concavity of the dual function of an optimization problem

We provide three new proofs of the strong concavity of the dual function of some convex optimization problems. For problems with nonlinear constraints, we show that the assumption of strong convexity of the objective cannot be weakened to convexity and that the assumption that the gradients of all constraints at the optimal solution are …
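
As a point of reference (a standard computation for the linearly constrained quadratic case, not one of the paper's proofs), both assumptions are already visible in
\[
\min_{x}\; \tfrac{1}{2} x^{\top} Q x + c^{\top} x \quad \text{s.t. } A x = b, \qquad Q \succ 0,
\]
whose Lagrangian dual function is
\[
g(y) \;=\; \min_{x}\Big\{ \tfrac{1}{2} x^{\top} Q x + c^{\top} x + y^{\top}(A x - b) \Big\}
\;=\; -\tfrac{1}{2}\,(c + A^{\top} y)^{\top} Q^{-1} (c + A^{\top} y) \;-\; b^{\top} y,
\]
with Hessian $-A Q^{-1} A^{\top}$. Hence $g$ is strongly concave exactly when $Q \succ 0$ (strong convexity of the objective) and $A$ has full row rank (linearly independent constraint gradients); if $A$ is row-rank deficient, $g$ is concave but not strongly concave.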