Novel stepsize for some accelerated and stochastic optimization methods

First-order methods are widely used to solve optimization problems, and they must keep improving to match the constant developments in machine learning and mathematics. Among them, the branch of algorithms based on gradient descent has developed rapidly and achieved good results. Following this trend, in this article, we research … Read more

On Common-Random-Numbers and the Complexity of Adaptive Sampling Trust-Region Methods

In the context of simulation optimization (SO), Common Random Numbers (CRN) is the practice of querying the simulation-based oracle with the same random number stream at each point visited by an SO algorithm. This practice is widely believed to facilitate SO algorithm efficiency by preserving structure inherent to the objective function and gradient sample-paths. … Read more
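
A minimal sketch of the CRN idea on a toy noisy oracle (the quadratic objective, noise scale, and seeding scheme below are assumptions for illustration, not from the paper): reusing the same random stream at both query points makes the noise cancel in differences, which is the kind of structure preservation CRN is believed to provide.

```python
import numpy as np

def noisy_oracle(x, rng):
    """Toy simulation oracle: quadratic objective plus additive noise."""
    return (x - 1.0) ** 2 + rng.normal(scale=0.5)

def difference_estimate(x, y, reps, crn):
    """Estimate f(x) - f(y), with or without common random numbers."""
    diffs = []
    for seed in range(reps):
        rng_x = np.random.default_rng(seed)
        # CRN: reuse the same random stream at both points;
        # otherwise draw an independent stream for y.
        rng_y = np.random.default_rng(seed if crn else reps + seed)
        diffs.append(noisy_oracle(x, rng_x) - noisy_oracle(y, rng_y))
    return np.mean(diffs), np.std(diffs)

# In this additive-noise toy, CRN makes the noise cancel exactly in the
# difference, so the spread collapses; without CRN it does not.
print(difference_estimate(0.2, 0.3, reps=1000, crn=True))
print(difference_estimate(0.2, 0.3, reps=1000, crn=False))
```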

An Inexact Proximal-indefinite Stochastic ADMM with applications in 3D CT reconstruction

In this paper, we develop an Inexact Proximal-indefinite Stochastic ADMM (abbreviated as IPS-ADMM) for solving a class of separable convex optimization problems whose objective functions consist of two parts: one is an average of many smooth convex functions and the other is a convex but possibly nonsmooth function. The involved smooth subproblem is tackled by an … Read more
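
A plausible problem template for this setting (the notation here is assumed, not taken from the paper): a separable objective with a finite-sum smooth part under a linear coupling constraint,
\[
\min_{x,\,z}\;\; \frac{1}{n}\sum_{i=1}^{n} f_i(x) \;+\; g(z)
\quad\text{s.t.}\quad Ax + Bz = c,
\]
where each \(f_i\) is smooth and convex and \(g\) is convex but possibly nonsmooth; the stochasticity enters by sampling only a minibatch of the \(f_i\) when the smooth subproblem is solved inexactly.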

Nonexpansive Markov Operators and Random Function Iterations for Stochastic Fixed Point Problems

We study the convergence of random function iterations for finding an invariant measure of the corresponding Markov operator. We call the problem of finding such an invariant measure the stochastic fixed point problem. This generalizes earlier work studying the stochastic feasibility problem, namely, to find points that are, with probability 1, fixed points of … Read more
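
A minimal sketch of a random function iteration, assuming a toy family of nonexpansive (here, contractive) affine maps; the maps and the histogram view of the invariant measure are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy family of maps T_i; each is a 0.5-contraction, hence nonexpansive.
maps = [lambda x: 0.5 * x + 0.5,    # contracts toward +1
        lambda x: 0.5 * x - 0.5]    # contracts toward -1

x, samples = 0.0, []
for k in range(100_000):
    T = maps[rng.integers(len(maps))]  # draw a map i.i.d. at random
    x = T(x)
    if k > 1_000:                      # discard burn-in
        samples.append(x)

# The empirical distribution of the iterates approximates the invariant
# measure of the Markov operator induced by the random iteration.
hist, _ = np.histogram(samples, bins=20, range=(-1, 1), density=True)
print(np.round(hist, 2))
```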

Expected Value of Matrix Quadratic Forms with Wishart distributed Random Matrices

To explore the limits of a stochastic gradient method, it may be useful to consider an example consisting of an infinite number of quadratic functions. In this context, it is appropriate to determine the expected value and the covariance matrix of the stochastic noise, i.e., the difference between the true gradient and the approximated gradient … Read more
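
A hedged numerical sketch of the quantity in question: the noise of a single-sample stochastic gradient on quadratics \(f_i(x) = \tfrac12 x^\top A_i x - b_i^\top x\). Building the \(A_i\) from Gaussian outer products (so that sums of them are Wishart-distributed) is an assumption chosen to match the title, and the finite sum stands in for the infinite family.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 5000

# Quadratics f_i(x) = 0.5 x^T A_i x - b_i^T x with A_i = g_i g_i^T,
# g_i Gaussian, so sums of the A_i are Wishart-distributed.
G = rng.normal(size=(n, d))
A = np.einsum("ij,ik->ijk", G, G)      # A_i = g_i g_i^T
b = rng.normal(size=(n, d))

x = rng.normal(size=d)
true_grad = A.mean(axis=0) @ x - b.mean(axis=0)

# Stochastic noise: single-sample gradient minus the true gradient;
# its mean is zero and its covariance can be estimated empirically.
noise = (A @ x - b) - true_grad        # shape (n, d)
print("mean ~ 0:", np.round(noise.mean(axis=0), 3))
print("covariance:\n", np.round(np.cov(noise.T), 2))
```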

Finding Groups with Maximum Betweenness Centrality via Integer Programming with Random Path Sampling

One popular approach to assessing the importance/influence of a group of nodes in a network is based on the notion of centrality. For a given group, its group betweenness centrality is computed, first, by evaluating the ratio of shortest paths between each node pair in the network that are “covered” by at least one node … Read more
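
A small exact-enumeration sketch of the stated definition on a toy graph (the paper's random path sampling would replace the full pair enumeration with sampled pairs); the graph and the group choice here are arbitrary.

```python
import itertools
import networkx as nx

def group_betweenness(G, group):
    """Average, over node pairs (s, t) outside the group, of the fraction
    of s-t shortest paths passing through at least one group node."""
    nodes = [v for v in G if v not in group]
    ratios = []
    for s, t in itertools.combinations(nodes, 2):
        paths = list(nx.all_shortest_paths(G, s, t))
        covered = sum(1 for p in paths if group & set(p[1:-1]))
        ratios.append(covered / len(paths))
    return sum(ratios) / len(ratios)

G = nx.karate_club_graph()
print(group_betweenness(G, group={0, 33}))
```

Recent networkx versions also ship a built-in group_betweenness_centrality that avoids this pairwise enumeration.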

Optimized convergence of stochastic gradient descent by weighted averaging

Under mild assumptions, stochastic gradient methods asymptotically achieve an optimal rate of convergence if the arithmetic mean of all iterates is returned as an approximate optimal solution. However, in the absence of stochastic noise, the arithmetic mean of all iterates converges to the optimal solution considerably more slowly than the iterates themselves. Also in the … Read more
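
A minimal sketch contrasting averaging schemes for SGD on a toy one-dimensional quadratic; the linear-in-k weighting below is one common choice, assumed here for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x, n_steps, xs = 5.0, 10_000, []

for k in range(1, n_steps + 1):
    grad = x + rng.normal(scale=1.0)   # noisy gradient of f(x) = x^2 / 2
    x -= grad / k                      # diminishing stepsize 1/k
    xs.append(x)

xs, k = np.array(xs), np.arange(1, n_steps + 1)

print("last iterate:    ", xs[-1])
# The plain arithmetic mean is dragged toward the early transient ...
print("arithmetic mean: ", xs.mean())
# ... while weights growing in k downweight that transient.
print("weighted average:", np.average(xs, weights=k))
```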

On the first order optimization methods in Deep Image Prior

Deep learning methods achieve state-of-the-art performance in many image restoration tasks. Their effectiveness is mostly related to the size of the dataset used for training. Deep Image Prior (DIP) is an energy-function framework that eliminates the dependency on the training set by considering the structure of a neural network as a handcrafted prior … Read more
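
A compact sketch of the DIP idea with a first-order optimizer: fit the network weights so the network maps a fixed random input to the corrupted observation, and stop early. The tiny convolutional net and the choice of Adam are stand-ins assumed for illustration.

```python
import torch

# Corrupted observation y (e.g., a noisy image) and a fixed random input z.
y = torch.rand(1, 1, 32, 32) + 0.1 * torch.randn(1, 1, 32, 32)
z = torch.randn(1, 8, 32, 32)

# A small conv net: its architecture acts as the handcrafted prior.
net = torch.nn.Sequential(
    torch.nn.Conv2d(8, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Minimize the energy ||f_theta(z) - y||^2 over the weights only;
# early stopping keeps the net from eventually fitting the noise too.
for step in range(500):
    opt.zero_grad()
    loss = ((net(z) - y) ** 2).mean()
    loss.backward()
    opt.step()

restored = net(z).detach()  # the restored image
```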

An Algorithm for Stochastic Convex-Concave Fractional Programs with Applications to Production Efficiency and Equitable Resource Allocation

We propose an algorithm to solve convex and concave fractional programs and their stochastic counterparts in a common framework. Our approach is based on a novel reformulation that involves differences of square terms in the constraints, and subsequent employment of piecewise-linear approximations of the concave terms. Using the branch-and-bound (B&B) framework, our algorithm adaptively refines … Read more
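
One standard identity consistent with the "differences of square terms" mentioned above (whether the paper uses exactly this form is an assumption): a fractional constraint \(p(x)/q(x) \le t\) with \(q(x) > 0\) is equivalent to \(p(x) \le t\,q(x)\), and the bilinear product can be split as
\[
t\,q(x) \;=\; \tfrac14\bigl(t + q(x)\bigr)^2 \;-\; \tfrac14\bigl(t - q(x)\bigr)^2,
\]
so the constraint becomes a convex part plus the concave term \(-\tfrac14(t+q(x))^2\), which is the kind of concave term a piecewise-linear outer approximation can handle inside branch-and-bound.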

Constrained stochastic blackbox optimization using a progressive barrier and probabilistic estimates

This work introduces the StoMADS-PB algorithm for constrained stochastic blackbox optimization, which is an extension of the mesh adaptive direct-search (MADS) method originally developed for deterministic blackbox optimization under general constraints. The values of the objective and constraint functions are provided by a noisy blackbox, i.e., they can only be computed with random noise whose … Read more
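
A heavily simplified sketch of the estimation idea in such methods: replace each blackbox value by a sample average over repeated noisy calls, then run a plain poll step. The replication count, acceptance test, and mesh update are assumptions here; constraints and the progressive barrier are omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_blackbox(x):
    """Noisy oracle: true objective plus zero-mean evaluation noise."""
    return np.sum((x - 1.0) ** 2) + rng.normal(scale=0.1)

def estimate(x, reps=30):
    """Probabilistic estimate of f(x): average of repeated evaluations."""
    return np.mean([noisy_blackbox(x) for _ in range(reps)])

x, mesh = np.zeros(2), 0.5
f_x = estimate(x)
for it in range(50):
    # Poll step: compare estimates at mesh points around the incumbent.
    directions = np.vstack([np.eye(2), -np.eye(2)])
    trials = [x + mesh * d for d in directions]
    f_trials = [estimate(t) for t in trials]
    best = int(np.argmin(f_trials))
    if f_trials[best] < f_x:   # success: move and coarsen the mesh
        x, f_x = trials[best], f_trials[best]
        mesh *= 2.0
    else:                      # failure: refine the mesh
        mesh *= 0.5

print("incumbent:", np.round(x, 3))
```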