A Generalized Voting Game for Categorical Network Choices

This paper presents a game-theoretic framework for data classification and network discovery, focusing on pairwise influences in multivariate choices. The framework consists of two complementary games in which individuals, connected through a signed weighted graph, exhibit network similarity. A voting rule captures the influence of an individual’s neighbors, categorized as attractive (friend-like) or repulsive (enemy-like), …
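
A minimal sketch of how such a voting rule might be evaluated, assuming a simple weighted-majority form in which attractive neighbors pull an individual toward their choice and repulsive ones push away from it (the rule and all names below are illustrative, not the paper's exact game):

import numpy as np

def vote(W, choices, i, n_categories):
    """Score each category for individual i on a signed weighted graph.

    W[i, j] > 0 marks an attractive (friend-like) neighbor, W[i, j] < 0
    a repulsive (enemy-like) one; choices[j] is neighbor j's category.
    """
    scores = np.zeros(n_categories)
    for j, w in enumerate(W[i]):
        if j != i and w != 0:
            scores[choices[j]] += w  # attraction adds, repulsion subtracts
    return int(np.argmax(scores))    # best response under this toy rule

# Example: 3 individuals, 2 categories.
W = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.8],
              [-0.5, 0.8, 0.0]])
print(vote(W, np.array([0, 1, 1]), i=0, n_categories=2))  # -> 1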

Forecasting Outside the Box: Application-Driven Optimal Pointwise Forecasts for Stochastic Optimization

The exponential growth in data availability in recent years has led to new formulations of data-driven optimization problems. One such formulation is that of stochastic optimization problems with contextual information, where the goal is to optimize the expected value of a certain function given contextual information (also called features) that accompanies the main data …
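
In standard notation (a plausible rendering of the formulation the abstract alludes to, not necessarily the paper's exact one), the contextual stochastic program is

\[
\min_{z \in Z} \; \mathbb{E}\bigl[\, c(z, Y) \mid X = x \,\bigr],
\]

where $x$ is the observed feature vector, $Y$ the random quantity of interest, and $c$ the cost function; an application-driven pointwise forecast $\hat{y}(x)$ then replaces this problem by the deterministic surrogate $\min_{z \in Z} c(z, \hat{y}(x))$.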

Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits

Follow-The-Regularized-Leader (FTRL) algorithms often enjoy optimal regret for adversarial as well as stochastic bandit problems and allow for a streamlined analysis. However, FTRL algorithms require the solution of an optimization problem in every iteration and are thus computationally challenging. In contrast, Follow-The-Perturbed-Leader (FTPL) algorithms achieve computational efficiency by perturbing the estimates of the rewards of …
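
A minimal FTPL sketch for a K-armed bandit, assuming exponentially distributed perturbations and importance-weighted loss estimates (a common textbook instantiation, not necessarily the scheme analyzed in the paper):

import numpy as np

rng = np.random.default_rng(0)
K, T, eta = 5, 2_000, 0.05
loss_est = np.zeros(K)                        # cumulative loss estimates
true_mean = rng.uniform(0.2, 0.8, K)          # stationary Bernoulli losses

for t in range(T):
    noise = rng.exponential(1.0 / eta, K)     # fresh perturbation each round
    arm = int(np.argmin(loss_est - noise))    # follow the perturbed leader
    loss = float(rng.random() < true_mean[arm])
    # FTPL needs P(arm was chosen) for importance weighting; estimate it
    # by re-drawing the perturbation (geometric resampling in spirit).
    m = 50
    hits = sum(int(np.argmin(loss_est - rng.exponential(1.0 / eta, K))) == arm
               for _ in range(m))
    loss_est[arm] += loss / max(hits / m, 1.0 / m)   # IW loss estimate

print("best arm by estimate:", int(np.argmin(loss_est)))

Note that no optimization problem is solved per round: each step costs one argmin over K perturbed scores, which is the computational advantage the abstract contrasts with FTRL.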

Forecasting Urban Traffic States with Sparse Data Using Hankel Temporal Matrix Factorization

Forecasting urban traffic states is crucial to transportation network monitoring and management, playing an important role in the decision-making process. Despite substantial progress in developing accurate, efficient, and reliable algorithms for traffic forecasting, most existing approaches fail to handle the sparsity, high dimensionality, and nonstationarity of traffic time series and seldom consider …
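
As a rough illustration of the Hankel device in the title, one can lift a traffic series into a Hankel matrix whose low-rank factorization captures temporal correlations (the embedding length and truncated-SVD factorization below are illustrative assumptions, not the paper's algorithm):

import numpy as np

def hankel(series, window):
    """Delay-embed a 1-D series into a (window x n_frames) Hankel matrix."""
    n = len(series) - window + 1
    return np.stack([series[i:i + window] for i in range(n)], axis=1)

rng = np.random.default_rng(1)
y = np.sin(0.3 * np.arange(200)) + 0.05 * rng.standard_normal(200)
H = hankel(y, window=24)                 # each column is a lagged snapshot
U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = 2                                    # a noisy sinusoid is nearly rank 2
H_r = (U[:, :r] * s[:r]) @ Vt[:r]        # truncated-SVD reconstruction
print("rank-2 relative error:", np.linalg.norm(H - H_r) / np.linalg.norm(H))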

Regularized Gradient Clipping Provably Trains Wide and Deep Neural Networks

In this work, we instantiate a regularized form of the gradient clipping algorithm and prove that it can converge to the global minima of deep neural network loss functions provided the network is sufficiently wide. We present empirical evidence that our theoretically grounded regularized gradient clipping algorithm is also competitive with the state-of-the-art …
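
A hedged sketch of what a regularized clipped-gradient step can look like (the exact regularization analyzed in the paper may differ; the floor parameter delta below is an assumption):

import numpy as np

def regularized_clip_step(w, grad, lr=0.1, gamma=1.0, delta=0.1):
    """Gradient step with clipping factor min(1, gamma/||g||), floored at delta.

    Plain clipping uses h = min(1, gamma/||g||); adding the floor delta
    keeps the step from vanishing when the gradient norm blows up.
    """
    g_norm = np.linalg.norm(grad)
    h = max(min(1.0, gamma / (g_norm + 1e-12)), delta)
    return w - lr * h * grad

w = np.array([3.0, -2.0])
grad = 10.0 * w                       # steep quadratic f(w) = 5 ||w||^2
print(regularized_clip_step(w, grad)) # -> [ 2.7 -1.8]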

Predictive Low Rank Matrix Learning under Partial Observations: Mixed-Projection ADMM

We study the problem of learning a partially observed matrix under the low-rank assumption in the presence of fully observed side information that depends linearly on the true underlying matrix. This problem is an important generalization of the Matrix Completion problem, a central problem in Statistics, Operations Research, and Machine Learning, that arises …
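
One plausible way to write the problem the abstract describes (the notation is assumed here, not taken from the paper): given observed entries $\Omega$ of a matrix $M$ and side information $B$ generated by a known linear map $A$,

\[
\min_{X \in \mathbb{R}^{n \times m}} \;\; \sum_{(i,j) \in \Omega} \bigl(X_{ij} - M_{ij}\bigr)^2 \;+\; \lambda\,\lVert B - A X \rVert_F^2
\quad \text{s.t.} \quad \operatorname{rank}(X) \le k.
\]

In the mixed-projection line of work, the rank constraint is typically modeled through an orthogonal projection matrix $P$ with $X = P X$ and $\operatorname{tr}(P) \le k$, which is what makes an ADMM splitting natural.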

A Stochastic Objective-Function-Free Adaptive Regularization Method with Optimal Complexity

A fully stochastic second-order adaptive-regularization method for unconstrained nonconvex optimization is presented which never computes the objective-function value, yet achieves the optimal $\mathcal{O}(\epsilon^{-3/2})$ complexity bound for finding first-order critical points. The method is noise-tolerant, and the inexactness conditions required for convergence depend on the history of past steps. Applications to cases where derivative evaluation …
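
For orientation, the deterministic second-order adaptive-regularization step minimizes a cubic model (standard in this literature; the stochastic, objective-function-free variant replaces the derivatives by inexact estimates):

\[
m_k(s) \;=\; g_k^{\top} s \;+\; \tfrac{1}{2}\, s^{\top} H_k s \;+\; \tfrac{\sigma_k}{3}\,\lVert s \rVert^3,
\]

where $g_k$ and $H_k$ approximate the gradient and Hessian at the current iterate and $\sigma_k > 0$ is the regularization weight adapted across iterations. Because only the model, never $f(x_k)$ itself, enters the step computation and acceptance, the method is objective-function-free.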

Complexity of Adagrad and other first-order methods for nonconvex optimization problems with bound constraints

A parametric class of trust-region algorithms for constrained nonconvex optimization is analyzed, where the objective function is never computed. By defining appropriate first-order stationarity criteria, we are able to extend the Adagrad method to the newly considered problem and retrieve the standard complexity rate of the projected gradient method that uses both the gradient and …
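
A minimal sketch of Adagrad with projection onto box constraints $l \le x \le u$ (diagonal variant; the step sizes and the stationarity measure used in the paper may differ):

import numpy as np

def projected_adagrad(grad_fn, x0, lo, hi, steps=500, eta=0.5, eps=1e-8):
    """Diagonal Adagrad with Euclidean projection onto the box [lo, hi]."""
    x, G = x0.astype(float), np.zeros_like(x0, dtype=float)
    for _ in range(steps):
        g = grad_fn(x)
        G += g * g                          # accumulate squared gradients
        x = x - eta * g / np.sqrt(G + eps)  # per-coordinate Adagrad step
        x = np.clip(x, lo, hi)              # projection onto the box
    return x

# Nonconvex toy objective f(x) = sum(x^4 - 2 x^2) on the box [-0.5, 2].
grad = lambda x: 4 * x**3 - 4 * x
print(projected_adagrad(grad, np.array([1.8, -0.4]), -0.5, 2.0))

Note the objective value is never evaluated: only gradients enter the iteration, matching the objective-function-free setting of the abstract.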

An Extended Validity Domain for Constraint Learning

We consider embedding a predictive machine-learning model within a prescriptive optimization problem. In this setting, called constraint learning, we study the concept of a validity domain, i.e., a constraint added to the feasible set that keeps the optimization close to the training data, thus helping to ensure that the computed optimal solution exhibits less prediction …
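
As a concrete instance of a validity domain, one common baseline choice (used here purely as an example; the paper's extended domain is more general) is the convex hull of the training inputs:

\[
\min_{x} \; f\bigl(x, \hat{g}(x)\bigr) \quad \text{s.t.} \quad x \in \mathcal{X} \cap \operatorname{conv}\bigl\{x^{(1)}, \dots, x^{(N)}\bigr\},
\]

where $\hat{g}$ is the embedded predictive model and $x^{(1)}, \dots, x^{(N)}$ are the points it was trained on; the added set prevents the optimizer from exploiting $\hat{g}$ far outside the data, where its predictions are unreliable.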

A mathematical introduction to SVMs with self-concordant kernel

A derivation of so-called “soft-margin support vector machines with kernel” is presented, along with elementary proofs that do not rely on concepts from functional analysis, such as Mercer’s theorem or reproducing kernel Hilbert spaces, which are frequently cited in this context. The analysis leads to new continuity properties of the kernel functions, in particular a …
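
For reference, the soft-margin kernel SVM the abstract refers to is commonly stated in representer form as (a standard formulation; the paper's point is to derive it without RKHS machinery)

\[
\min_{\alpha \in \mathbb{R}^n,\, b,\, \xi \ge 0} \;\; \tfrac{1}{2}\, \alpha^{\top} K \alpha \;+\; C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad y_i \Bigl( \sum_{j=1}^{n} \alpha_j K_{ij} + b \Bigr) \ge 1 - \xi_i, \;\; i = 1, \dots, n,
\]

where $K_{ij} = k(x_i, x_j)$ is the kernel matrix on the training points and $C > 0$ trades margin width against slack.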