Spectral Stochastic Gradient Method with Additional Sampling for Finite and Infinite Sums

In this paper, we propose a new stochastic gradient method for the numerical minimization of finite sums. We also propose a modified version of this method applicable to more general problems, referred to as infinite sum problems, where the objective function takes the form of a mathematical expectation. The method is based on a strategy to … Read more
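
The excerpt stops before the sampling strategy is spelled out, so the following is only a minimal sketch of the spectral (Barzilai-Borwein-type) stochastic gradient ingredient for a finite sum f(x) = (1/n) * sum_i f_i(x); the mini-batch size, the safeguard interval, and the fresh resampling at every step are hypothetical placeholders rather than the additional-sampling rule of the paper.

```python
import numpy as np

def spectral_sgd(grad, x0, n, batch=32, iters=200, lam_min=1e-4, lam_max=1e4, seed=0):
    """Minimal sketch of a spectral (BB-type) stochastic gradient method.

    grad(x, idx) returns the mini-batch gradient of the finite sum over the
    sample indices in `idx`.  The spectral coefficient is a Barzilai-Borwein
    quotient built from consecutive iterates and (stochastic) gradients,
    safeguarded to the interval [lam_min, lam_max]."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    idx = rng.choice(n, size=batch, replace=False)
    g = grad(x, idx)
    step = 0.01  # placeholder initial step size
    for _ in range(iters):
        x_new = x - step * g
        idx = rng.choice(n, size=batch, replace=False)  # fresh mini-batch (stand-in for additional sampling)
        g_new = grad(x_new, idx)
        s, y = x_new - x, g_new - g
        sy = float(s @ y)
        if sy > 0:  # BB1-type spectral coefficient, safeguarded
            lam = np.clip(sy / float(s @ s), lam_min, lam_max)
            step = 1.0 / lam
        x, g = x_new, g_new
    return x
```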

Global Optimization Algorithm through High-Resolution Sampling

We present an optimization algorithm that can identify a global minimum of a potentially nonconvex smooth function with high probability, assuming the Gibbs measure of the potential satisfies a logarithmic Sobolev inequality. Our contribution is twofold: on the one hand we propose a global optimization method, which is built on an oracle sampling algorithm producing … Read more
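
The oracle sampling algorithm is not described in this excerpt; as a rough illustration of the sampling viewpoint, the sketch below runs the unadjusted Langevin algorithm, whose stationary distribution approximates the Gibbs measure proportional to exp(-f/eps), and returns the best point visited. Step size, temperature eps, and iteration count are placeholder values.

```python
import numpy as np

def langevin_global_min(f, grad_f, x0, eps=0.1, step=1e-3, iters=5000, seed=0):
    """Illustrative sketch: approximately sample the Gibbs measure
    pi(x) proportional to exp(-f(x)/eps) with the unadjusted Langevin
    algorithm and keep the best point seen.  For small eps, pi concentrates
    near global minimizers of f; a logarithmic Sobolev inequality for pi
    controls how quickly such samplers mix."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(iters):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_f(x) / eps + np.sqrt(2.0 * step) * noise
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```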

A graphical framework for global optimization of mixed-integer nonlinear programs

While mixed-integer linear programming and convex programming solvers have advanced significantly over the past several decades, solution technologies for general mixed-integer nonlinear programs (MINLPs) have yet to reach the same level of maturity. Various problem structures across different application domains remain challenging to model and solve using modern global solvers, primarily due to the lack … Read more

An inertial projective splitting method for the sum of two maximal monotone operators

We propose a projective splitting type method to solve the problem of finding a zero of the sum of two maximal monotone operators. Our method considers inertial and relaxation steps, and also allows inexact solutions of the proximal subproblems within a relative-error criterion. We study the asymptotic convergence of the method, as well as its iteration-complexity. … Read more
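
A projective splitting iteration evaluates one resolvent of each operator, uses the two graph points to build an affine function that separates the current pair (z, w) from the extended solution set, and then projects onto the resulting half-space; the inertial step extrapolates along the previous displacement before this is done. Below is a simplified exact-resolvent sketch for A = ∂f and B = ∂g; the relaxation step and the relative-error criterion of the paper are omitted, and all parameters are placeholders.

```python
import numpy as np

def inertial_projective_splitting(prox_f, prox_g, z0, lam=1.0, mu=1.0,
                                  alpha=0.3, iters=500, tol=1e-10):
    """Sketch of a projective splitting iteration with an inertial step for
    0 in A(z) + B(z), where A = subgradient of f and B = subgradient of g.
    prox_f(v, t) must return argmin_x f(x) + (1/(2t)) * ||x - v||^2.
    Each iteration builds the affine separator
        phi(z, w) = <z - x, a + w> + <z - y, b - w>
    from one resolvent of A and one of B, then projects the current pair
    (z, w) onto the half-space {phi <= 0}."""
    z = np.asarray(z0, dtype=float)
    w = np.zeros_like(z)
    z_prev, w_prev = z.copy(), w.copy()
    for _ in range(iters):
        # inertial extrapolation along the previous displacement
        zb = z + alpha * (z - z_prev)
        wb = w + alpha * (w - w_prev)
        z_prev, w_prev = z.copy(), w.copy()
        # resolvent (prox) steps giving points in the graphs of A and B
        x = prox_f(zb - lam * wb, lam)
        a = (zb - lam * wb - x) / lam   # a in A(x)
        y = prox_g(zb + mu * wb, mu)
        b = (zb + mu * wb - y) / mu     # b in B(y)
        # projection onto the separating half-space {phi <= 0}
        grad_z = a + b
        grad_w = y - x
        denom = grad_z @ grad_z + grad_w @ grad_w
        if denom <= tol:                # x ~ y and a ~ -b: approximate zero found
            return x
        phi = (zb - x) @ (a + wb) + (zb - y) @ (b - wb)
        step = max(phi, 0.0) / denom
        z = zb - step * grad_z
        w = wb - step * grad_w
    return z
```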

A Markovian Model for Learning-to-Optimize

We present a probabilistic model for stochastic iterative algorithms with the use case of optimization algorithms in mind. Based on this model, we present PAC-Bayesian generalization bounds for functions that are defined on the trajectory of the learned algorithm, for example, the expected (non-asymptotic) convergence rate and the expected time to reach the stopping criterion. … Read more
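
The two trajectory functionals mentioned above can be estimated empirically once an iterative algorithm and a distribution over problem instances are fixed; the sketch below does exactly that with hypothetical interfaces step_fn(x, problem) and sample_problem(rng), and is only meant to make the quantities concrete, not to reproduce the PAC-Bayesian analysis.

```python
import numpy as np

def trajectory_statistics(step_fn, sample_problem, n_problems=100, max_iters=200,
                          tol=1e-6, seed=0):
    """Monte Carlo estimates of two trajectory functionals: an empirical
    per-iteration convergence rate and the time to reach a stopping
    criterion.  `step_fn(x, problem)` performs one step of a (possibly
    learned) iterative algorithm; `sample_problem(rng)` draws a problem
    instance with fields `x0` and `loss(x)`.  Both interfaces are
    hypothetical placeholders."""
    rng = np.random.default_rng(seed)
    rates, hitting_times = [], []
    for _ in range(n_problems):
        prob = sample_problem(rng)
        x = prob.x0
        losses = [prob.loss(x)]
        t_hit = max_iters
        for t in range(1, max_iters + 1):
            x = step_fn(x, prob)
            losses.append(prob.loss(x))
            if losses[-1] <= tol and t_hit == max_iters:
                t_hit = t
        # geometric-mean contraction factor of the loss over the trajectory
        rates.append((losses[-1] / max(losses[0], 1e-16)) ** (1.0 / max_iters))
        hitting_times.append(t_hit)
    return float(np.mean(rates)), float(np.mean(hitting_times))
```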

Double-proximal augmented Lagrangian methods with improved convergence condition

In this paper, we consider a family of linearly constrained convex minimization problems whose objective function is not necessarily smooth. A basic double-proximal augmented Lagrangian method (DP-ALM) is developed, which enjoys a flexible dual stepsize and a proximal subproblem with a relatively smaller proximal parameter. By a novel prediction-correction reformulation of the proposed DP-ALM and by … Read more
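
The prediction-correction reformulation itself is not shown in this excerpt; for orientation, the sketch below implements a generic proximal (linearized) augmented Lagrangian iteration for an l1-norm objective under linear equality constraints, with a separate dual stepsize. It is a standard baseline, not the DP-ALM of the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_alm_l1(A, b, beta=1.0, dual_step=1.0, iters=500):
    """Illustrative proximal augmented Lagrangian iteration for
        min ||x||_1  subject to  Ax = b.
    The primal subproblem is made explicit by a proximal term that
    linearizes the augmented penalty; the dual update uses a separate
    stepsize `dual_step`."""
    m, n = A.shape
    x = np.zeros(n)
    lam = np.zeros(m)
    # classical sufficient condition: tau * beta * ||A^T A|| < 1
    tau = 0.99 / (beta * np.linalg.norm(A, 2) ** 2)
    for _ in range(iters):
        grad_aug = A.T @ (lam + beta * (A @ x - b))   # gradient of the augmented terms at x
        x = soft_threshold(x - tau * grad_aug, tau)   # proximal (primal) step
        lam = lam + dual_step * beta * (A @ x - b)    # dual ascent step
    return x
```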

Global Optimization of Non-Linear Systems of Equations by Simulating the Flight of a Projectile in the Conformational Space

A new heuristic optimization algorithm is presented, based on an analogy with the physical phenomenon of a projectile launched in a conformational space under the influence of a gravitational force. Its simple implementation and the option to enhance it with local search methods make it well suited to the optimization of non-linear systems of equations. The … Read more

An Adaptive Proximal ADMM for Nonconvex Linearly-Constrained Composite Programs

This paper develops an adaptive Proximal Alternating Direction Method of Multipliers (P-ADMM) for solving linearly-constrained, weakly convex, composite optimization problems. This method is adaptive to all problem parameters, including smoothness and weak convexity constants. It is assumed that the smooth component of the objective is weakly convex and possibly nonseparable, while the non-smooth component is … Read more
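
As a point of reference, the sketch below gives a plain, non-adaptive proximal ADMM iteration on a toy composite problem with one smooth and one nonsmooth block; the paper's contribution, adapting the stepsize and penalty parameters to unknown smoothness and weak convexity constants, is not reproduced here.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_admm(C, d, A, b, beta=1.0, iters=500):
    """Minimal (non-adaptive) proximal ADMM sketch for
        min_{x,z}  0.5 * ||C x - d||^2 + ||z||_1   subject to  A x + z = b.
    The x-step linearizes both the smooth term and the augmented penalty,
    so it reduces to a gradient step; the z-step is an exact prox of the
    l1-norm; the multiplier update uses the penalty parameter beta."""
    n = C.shape[1]
    m = A.shape[0]
    x, z, lam = np.zeros(n), np.zeros(m), np.zeros(m)
    # conservative primal stepsize from the known Lipschitz constants
    L = np.linalg.norm(C, 2) ** 2 + beta * np.linalg.norm(A, 2) ** 2
    tau = 1.0 / L
    for _ in range(iters):
        r = A @ x + z - b                                    # constraint residual
        grad_x = C.T @ (C @ x - d) + A.T @ (lam + beta * r)  # gradient of smooth + augmented terms
        x = x - tau * grad_x                                 # linearized proximal x-step
        z = soft_threshold(b - A @ x - lam / beta, 1.0 / beta)  # exact z-step
        lam = lam + beta * (A @ x + z - b)                   # multiplier update
    return x, z
```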

Regularized Gradient Clipping Provably Trains Wide and Deep Neural Networks

In this work, we instantiate a regularized form of the gradient clipping algorithm and prove that it can converge to the global minima of deep neural network loss functions provided that the net is of sufficient width. We present empirical evidence that our theoretically founded regularized gradient clipping algorithm is also competitive with the state-of-the-art … Read more
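
The excerpt does not state the precise regularized clipping rule, so the sketch below shows standard norm-based gradient clipping with a hypothetical lower bound delta on the scaling factor, purely to illustrate the kind of update being analyzed.

```python
import numpy as np

def regularized_clipped_gd(grad, x0, lr=0.1, gamma=1.0, delta=1e-3, iters=1000):
    """Sketch of gradient descent with clipped step sizes.  Standard
    clipping scales the step by min(1, gamma / ||g||); the `delta` floor
    below is a hypothetical regularization that keeps the scaling bounded
    away from zero.  The exact regularized rule analyzed in the paper is
    not given in this excerpt."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        scale = min(1.0, gamma / max(gnorm, 1e-12))
        scale = max(scale, delta)        # hypothetical regularization floor
        x = x - lr * scale * g
    return x
```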

Globally Convergent Derivative-Free Methods in Nonconvex Optimization with and without Noise

This paper studies nonconvex derivative-free optimization problems in which only information about either smooth objective functions or their noisy approximations is available. General derivative-free methods are proposed for minimizing differentiable (not necessarily convex) functions with globally Lipschitz continuous gradients, where the accuracy of the approximate gradients interacts with the stepsizes and exact gradient values. … Read more
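
As an illustration of how approximate gradients enter such methods, the sketch below runs a gradient method in which the gradient is replaced by a central finite-difference approximation whose sampling radius is shrunk over the iterations; the particular interplay between accuracy, stepsizes, and gradient values analyzed in the paper is not reproduced.

```python
import numpy as np

def fd_gradient(f, x, h):
    """Central finite-difference approximation of the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def derivative_free_descent(f, x0, lr=1e-2, h0=1e-1, iters=300):
    """Illustrative derivative-free gradient method: the exact gradient is
    replaced by a finite-difference approximation whose sampling radius h
    is tightened along the iterations, so the gradient accuracy improves
    as the iterates settle."""
    x = np.asarray(x0, dtype=float)
    h = h0
    for _ in range(iters):
        g = fd_gradient(f, x, h)
        x = x - lr * g
        h = max(h * 0.99, 1e-8)   # gradually tighten the finite-difference radius
    return x
```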