Expected Value of Matrix Quadratic Forms with Wishart distributed Random Matrices

To explore the limits of a stochastic gradient method, it may be useful to consider an example consisting of an infinite number of quadratic functions. In this context, it is appropriate to determine the expected value and the covariance matrix of the stochastic noise, i.e., the difference between the true gradient and the approximated gradient … Read more
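
As a quick illustrative check of such quantities, the NumPy sketch below estimates the mean and covariance of the gradient noise for a toy stochastic objective f_A(x) = 0.5 x^T A x with A drawn from a standard Wishart distribution. The dimensions, the toy objective, and the closed-form comparison are assumptions for this sketch, not results taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, dof, n_samples = 5, 8, 20000       # dimension, Wishart degrees of freedom, Monte Carlo size
x = rng.standard_normal(d)            # fixed point at which the gradient noise is examined

def sample_gradient(x):
    # One stochastic objective: f_A(x) = 0.5 * x^T A x with A = Z Z^T Wishart-distributed
    Z = rng.standard_normal((d, dof))
    return (Z @ Z.T) @ x              # its gradient A x

true_grad = dof * x                   # gradient of the expected objective, since E[A] = dof * I

noise = np.array([sample_gradient(x) - true_grad for _ in range(n_samples)])
print("empirical mean of the noise (should be ~0):")
print(np.round(noise.mean(axis=0), 3))
print("empirical covariance of the noise:")
print(np.round(np.cov(noise.T), 2))
print("closed form for this toy model, dof * (||x||^2 I + x x^T):")
print(np.round(dof * (x @ x * np.eye(d) + np.outer(x, x)), 2))
```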

Finding Groups with Maximum Betweenness Centrality via Integer Programming with Random Path Sampling

One popular approach to assessing the importance/influence of a group of nodes in a network is based on the notion of centrality. For a given group, its group betweenness centrality is computed, first, by evaluating, for each node pair in the network, the fraction of shortest paths between them that are “covered” by at least one node … Read more
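
For intuition, the brute-force Python sketch below (using networkx) evaluates exactly this fraction-of-covered-shortest-paths quantity on a small example graph. The graph and the chosen group are arbitrary, and this is not the paper's integer-programming or path-sampling method.

```python
import itertools
import networkx as nx

def group_betweenness(G, group):
    """Brute-force group betweenness: for every pair of nodes outside the group, add
    the fraction of shortest paths between them whose interior hits the group."""
    group = set(group)
    outside = [v for v in G if v not in group]
    total = 0.0
    for s, t in itertools.combinations(outside, 2):
        paths = list(nx.all_shortest_paths(G, s, t))
        covered = sum(1 for p in paths if group & set(p[1:-1]))
        total += covered / len(paths)
    return total

G = nx.karate_club_graph()                 # small example network
print(group_betweenness(G, {0, 33}))       # centrality of an arbitrary two-node group
```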

Optimized convergence of stochastic gradient descent by weighted averaging

Under mild assumptions, stochastic gradient methods asymptotically achieve an optimal rate of convergence if the arithmetic mean of all iterates is returned as an approximate optimal solution. However, in the absence of stochastic noise, the arithmetic mean of all iterates converges considerably more slowly to the optimal solution than the iterates themselves. And also in the … Read more
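
The contrast between averaging schemes is easy to reproduce numerically; the NumPy sketch below compares the last iterate, the arithmetic mean, and a linearly weighted mean of SGD iterates on a noisy quadratic. The step-size schedule and the linear weighting are illustrative choices, not necessarily the weighting analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_iters, noise_std = 10, 5000, 1.0
A = np.diag(np.linspace(0.1, 1.0, d))      # f(x) = 0.5 x^T A x, minimizer x* = 0
x = np.ones(d)
iterates = []

for k in range(1, n_iters + 1):
    grad = A @ x + noise_std * rng.standard_normal(d)   # stochastic gradient
    x -= grad / (0.1 * k + 10.0)                        # diminishing step size (a choice)
    iterates.append(x.copy())

X = np.array(iterates)
w = np.arange(1, n_iters + 1, dtype=float)              # linearly increasing weights
arithmetic_mean = X.mean(axis=0)
weighted_mean = (w[:, None] * X).sum(axis=0) / w.sum()

for name, z in [("last iterate", x),
                ("arithmetic mean", arithmetic_mean),
                ("linearly weighted mean", weighted_mean)]:
    print(f"{name:22s} distance to optimum: {np.linalg.norm(z):.4f}")
```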

On the first order optimization methods in Deep Image Prior

Deep learning methods achieve state-of-the-art performance in many image restoration tasks. Their effectiveness largely depends on the size of the dataset used for training. Deep Image Prior (DIP) is an energy-function framework that eliminates the dependency on a training set by using the structure of a neural network as a handcrafted prior … Read more
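
A minimal sketch of the DIP idea, assuming PyTorch: a small network fed a fixed random input is fitted to a (stand-in) corrupted image with a first-order optimizer, so the network structure itself acts as the prior. The tiny architecture, learning rate, and iteration count are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in for the DIP network; actual DIP uses a much larger encoder-decoder.
net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

z = torch.randn(1, 8, 64, 64)           # fixed random code fed to the network
x_noisy = torch.rand(1, 1, 64, 64)      # stand-in for the corrupted image

opt = torch.optim.Adam(net.parameters(), lr=1e-2)       # a first-order method
for step in range(201):
    opt.zero_grad()
    loss = ((net(z) - x_noisy) ** 2).mean()             # DIP energy: fit the corrupted image
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss.item():.4f}")

# In practice early stopping matters: run too long and the network also fits the noise.
```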

An Algorithm for Stochastic Convex-Concave Fractional Programs with Applications to Production Efficiency and Equitable Resource Allocation

We propose an algorithm to solve convex and concave fractional programs and their stochastic counterparts in a common framework. Our approach is based on a novel reformulation that involves differences of square terms in the constraints, followed by piecewise-linear approximations of the concave terms. Within a branch-and-bound framework, our algorithm adaptively refines the … Read more
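
As a rough illustration of how a difference of squares can arise in such reformulations (a generic identity, not necessarily the paper's exact construction): introducing a variable t for a ratio f(x)/g(x) with g(x) > 0 produces a bilinear product t·g(x), which splits into two squared terms.

```latex
\[
  \frac{f(x)}{g(x)} \ge t ,\quad g(x) > 0
  \;\Longleftrightarrow\;
  f(x) \;\ge\; t\, g(x)
  \;=\; \tfrac{1}{4}\bigl(t + g(x)\bigr)^{2} \;-\; \tfrac{1}{4}\bigl(t - g(x)\bigr)^{2}.
\]
```

Written as a "≤ 0" constraint, the term -¼(t - g(x))² is the concave piece, and a concave function of the scalar t - g(x) of this kind is a natural target for piecewise-linear approximation.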

Constrained stochastic blackbox optimization using a progressive barrier and probabilistic estimates

This work introduces StoMADS-PB, an algorithm for constrained stochastic blackbox optimization that extends the mesh adaptive direct-search (MADS) method originally developed for deterministic blackbox optimization under general constraints. The values of the objective and constraint functions are provided by a noisy blackbox, i.e., they can only be computed with random noise whose … Read more
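
To give a feel for the ingredients (replication-averaged noisy estimates, an aggregated constraint-violation measure, and a barrier threshold that is progressively tightened), here is a deliberately simplified Python sketch of a constrained direct-search loop. It is not the StoMADS-PB algorithm; the blackbox, noise model, and acceptance rule are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_blackbox(x, n_rep=20):
    """Replication-averaged noisy estimates of the objective and of one constraint c(x) <= 0."""
    f = np.sum((x - 1.0) ** 2)                      # true objective (hidden from the solver)
    c = x[0] + x[1] - 1.0                           # true constraint value
    f_hat = f + rng.normal(0.0, 0.1, n_rep).mean()
    c_hat = c + rng.normal(0.0, 0.1, n_rep).mean()
    return f_hat, c_hat

def violation(c_hat):
    return max(c_hat, 0.0) ** 2                     # aggregated constraint violation h(x)

x, step = np.zeros(2), 0.5
f_best, c_best = noisy_blackbox(x)
h_best = violation(c_best)
h_max = max(h_best, 1.0)                            # progressive barrier threshold

for it in range(60):
    improved = False
    for d in (np.array([step, 0.0]), np.array([-step, 0.0]),
              np.array([0.0, step]), np.array([0.0, -step])):
        f_t, c_t = noisy_blackbox(x + d)
        h_t = violation(c_t)
        # accept only trial points whose estimated violation stays below the barrier
        # and which improve either the violation or the objective estimate
        if h_t <= h_max and (h_t < h_best or f_t < f_best):
            x, f_best, h_best, improved = x + d, f_t, h_t, True
            break
    h_max = max(h_best, 0.5 * h_max)                # progressively tighten the barrier
    if not improved:
        step *= 0.5                                 # refine the mesh after a failed poll

print("final point:", np.round(x, 3),
      " objective estimate:", round(f_best, 3),
      " violation estimate:", round(h_best, 4))
```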

An Analysis of Constant Step Size SGD in the Non-convex Regime: Asymptotic Normality and Bias

Structured non-convex learning problems, for which critical points have favorable statistical properties, arise frequently in statistical machine learning. Algorithmic convergence and statistical estimation rates are well understood for such problems. However, quantifying the uncertainty associated with the underlying training algorithm is not well studied in the non-convex setting. To address this shortcoming, in this work, … Read more
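
A toy numerical illustration of the phenomenon, with every modelling choice assumed rather than taken from the paper: running constant step-size SGD on a simple non-quadratic objective and looking at the long-run distribution of the iterates shows fluctuations around, and a bias away from, the critical point.

```python
import numpy as np

rng = np.random.default_rng(3)

# Non-quadratic objective f(x) = x^2 + 0.3*sin(3x); its gradient is used below.
grad = lambda x: 2 * x + 0.9 * np.cos(3 * x)

# Locate the critical point by bisection on the gradient (grad(-1) < 0 < grad(1)).
lo, hi = -1.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if grad(mid) < 0 else (lo, mid)
x_star = 0.5 * (lo + hi)

eta, burn_in, n_iters = 0.1, 2000, 100000
x, tail = 0.0, []
for k in range(n_iters):
    x -= eta * (grad(x) + rng.normal(0.0, 1.0))     # constant step size, additive noise
    if k >= burn_in:
        tail.append(x)

tail = np.array(tail)
print(f"critical point     : {x_star:+.4f}")
print(f"long-run mean of x : {tail.mean():+.4f}   (the gap illustrates the bias)")
print(f"long-run std of x  : {tail.std():.4f}")
```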

K-Adaptability in stochastic optimization

We consider stochastic problems in which both the objective function and the feasible set are affected by uncertainty. We address these problems using a K-adaptability approach, in which K solutions for the underlying problem are computed before the uncertainty is resolved, and afterwards the best of them can be chosen for the realised scenario. This paradigm … Read more
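
A small hedged sketch of the paradigm on sampled scenarios: pick K solutions from a finite candidate pool so that, in expectation over the scenarios, the best of the K performs well; scenario-dependent infeasibility is handled with a crude penalty. The candidate pool, cost model, and brute-force search are illustrative, not the paper's formulation.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)

n_candidates, n_scenarios, K = 8, 200, 2
candidates = rng.uniform(-1, 1, size=(n_candidates, 3))    # hypothetical first-stage solutions
scenarios = rng.normal(0, 1, size=(n_scenarios, 3))        # sampled uncertainty realisations

# Cost of solution x under scenario xi (a made-up cost), plus scenario-dependent feasibility.
cost = candidates @ scenarios.T + (candidates ** 2).sum(axis=1, keepdims=True)
feasible = candidates @ scenarios.T <= 1.5
cost = np.where(feasible, cost, 1e3)                        # large penalty when infeasible

best_set, best_val = None, np.inf
for S in itertools.combinations(range(n_candidates), K):
    # In each scenario, the best of the K pre-computed solutions is the one actually used.
    val = cost[list(S), :].min(axis=0).mean()
    if val < best_val:
        best_set, best_val = S, val

print(f"best K-tuple of solutions: {best_set}, expected cost: {best_val:.3f}")
```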

Expected complexity analysis of stochastic direct-search

This work presents the convergence rate analysis of stochastic variants of the broad class of direct-search methods of directional type. It introduces an algorithm designed to optimize differentiable objective functions $f$ whose values can only be computed through a stochastically noisy blackbox. The proposed stochastic directional direct-search (SDDS) algorithm accepts new iterates by imposing a … Read more
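
The sufficient-decrease idea can be sketched in a few lines of Python; the forcing term proportional to the squared step size, the replication-averaged estimates, and the expansion/contraction rules below are simplified stand-ins, not the actual SDDS algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

def f_hat(x, n_rep=30):
    """Estimate of a smooth objective obtained through a noisy blackbox (averaged replications)."""
    return np.sum((x - 2.0) ** 2) + rng.normal(0.0, 0.5, n_rep).mean()

d_dim, alpha, gamma = 3, 1.0, 1e-2          # dimension, step size, forcing constant
x = np.zeros(d_dim)

for it in range(100):
    success = False
    directions = np.vstack([np.eye(d_dim), -np.eye(d_dim)])   # positive spanning set
    fx = f_hat(x)
    for d in directions:
        trial = x + alpha * d
        # Sufficient-decrease test: accept only if the estimated decrease
        # exceeds a forcing term proportional to alpha^2.
        if f_hat(trial) < fx - gamma * alpha ** 2:
            x, success = trial, True
            break
    alpha = 1.5 * alpha if success else 0.5 * alpha            # expand / contract the step

print("final point:", np.round(x, 3), " final step size:", round(alpha, 4))
```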

A dimensionality reduction technique for unconstrained global optimization of functions with low effective dimensionality

We investigate the unconstrained global optimization of functions with low effective dimensionality, which are constant along certain (unknown) linear subspaces. Extending the technique of random subspace embeddings in [Wang et al., Bayesian optimization in a billion dimensions via random embeddings. JAIR, 55(1): 361–387, 2016], we study a generic Random Embeddings for Global Optimization (REGO) framework … Read more
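
A minimal sketch of the random-embedding idea, assuming NumPy/SciPy: a high-dimensional function that varies only along a low-dimensional subspace is optimized through a random Gaussian embedding x = A y, so the search happens in the low-dimensional variable y. The specific test function, dimensions, and local solver are placeholder choices, not the REGO framework itself.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
D, d_e, d = 100, 2, 5          # ambient dimension, effective dimension, embedding dimension

# Objective with low effective dimensionality: it depends on x only through Q^T x,
# where Q spans a 2-dimensional "effective" subspace (unknown to the solver).
Q, _ = np.linalg.qr(rng.standard_normal((D, d_e)))
f = lambda x: np.sum((Q.T @ x - 1.0) ** 2)          # constant along the orthogonal complement

# Random embedding: optimize g(y) = f(A y) over the low-dimensional variable y.
A = rng.standard_normal((D, d))
g = lambda y: f(A @ y)
res = minimize(g, np.zeros(d), method="Nelder-Mead")   # any low-dimensional solver could be used

print("reduced problem value:", round(res.fun, 6))
print("value at a random high-dimensional point (for scale):",
      round(f(rng.standard_normal(D)), 3))
```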