Sequential test sampling for stochastic derivative-free optimization

In many derivative-free optimization algorithms, a sufficient decrease condition decides whether to accept a trial step in each iteration. This condition typically requires that the potential decrease in the objective function value offered by the trial step, i.e., the true reduction in the objective function value that would be achieved by moving from the current point to the …
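
A minimal sketch, assuming a generic trust-region-style acceptance rule with averaged noisy evaluations, of the kind of sufficient decrease test the abstract refers to; the estimator, the forcing term c*delta**2, and the sample size are illustrative choices, not the paper's sequential test.

    import numpy as np

    def estimate(f_noisy, x, n=30):
        """Average n independent noisy evaluations of the objective at x."""
        return np.mean([f_noisy(x) for _ in range(n)])

    def accept_trial_step(f_noisy, x, x_trial, delta, c=1e-2, n=30):
        """Accept the trial step if the estimated decrease exceeds the
        forcing term c * delta**2 (a common sufficient decrease condition)."""
        return estimate(f_noisy, x, n) - estimate(f_noisy, x_trial, n) >= c * delta**2

    # Example: noisy quadratic, trial step that moves toward the minimizer
    rng = np.random.default_rng(0)
    f_noisy = lambda x: float(np.sum(x**2) + 0.01 * rng.normal())
    x = np.array([1.0, 1.0])
    print(accept_trial_step(f_noisy, x, 0.9 * x, delta=0.1))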

On the convergence rate of the Douglas-Rachford splitting algorithm

This work is concerned with the convergence rate analysis of the Douglas–Rachford splitting (DRS) method for finding a zero of the sum of two maximally monotone operators. We obtain an exact rate of convergence for the DRS algorithm and demonstrate its sharpness in the setting of convex feasibility problems. Furthermore, we investigate the …
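
For context, a minimal sketch of the classical Douglas–Rachford iteration for minimizing a sum of two convex functions via their proximal (resolvent) maps; the specific functions, step size, and iteration count below are illustrative assumptions, and the sketch is not the paper's rate analysis.

    import numpy as np

    def prox_l1(v, t):
        """Proximal map of t*||.||_1 (soft-thresholding)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_quad(v, t, b):
        """Proximal map of t * 0.5*||. - b||^2."""
        return (v + t * b) / (1.0 + t)

    # Douglas-Rachford splitting for min ||x||_1 + 0.5*||x - b||^2
    b = np.array([3.0, -0.5, 0.2])
    t, z = 1.0, np.zeros_like(b)
    for _ in range(100):
        x = prox_l1(z, t)                # resolvent of the first operator
        y = prox_quad(2 * x - z, t, b)   # resolvent of the second, reflected step
        z = z + y - x                    # update of the governing sequence
    print(x)                             # approaches the soft-thresholded b: [2, 0, 0]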

A Markovian Model for Learning-to-Optimize

We present a probabilistic model for stochastic iterative algorithms with the use case of optimization algorithms in mind. Based on this model, we present PAC-Bayesian generalization bounds for functions that are defined on the trajectory of the learned algorithm, for example, the expected (non-asymptotic) convergence rate and the expected time to reach the stopping criterion. …
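
As background only (not the paper's trajectory-based result), one standard PAC-Bayesian generalization bound of McAllester/Maurer type states that, for a prior $P$ fixed in advance, a loss bounded in $[0,1]$, and any posterior $Q$, with probability at least $1-\delta$ over $n$ samples,

$\mathbb{E}_{h \sim Q}[L(h)] \le \mathbb{E}_{h \sim Q}[\hat{L}_n(h)] + \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln(2\sqrt{n}/\delta)}{2n}},$

where $L$ is the population risk and $\hat{L}_n$ the empirical risk.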

Fast convergence of the primal-dual dynamical system and algorithms for a nonsmooth bilinearly coupled saddle point problem

This paper is devoted to studying the convergence rates of a second-order dynamical system and its corresponding discretizations associated with a nonsmooth bilinearly coupled convex-concave saddle point problem. We derive the convergence rate of the primal-dual gap for the second-order dynamical system with an asymptotically vanishing damping term. Based on the implicit discretization, we propose a …
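
For orientation, the problem class typically takes the form $\min_x \max_y \ \mathcal{L}(x,y) = f(x) + \langle Kx, y\rangle - g(y)$ with $f, g$ proper closed convex and $K$ a linear map; in the smooth case, a second-order primal-dual dynamical system with asymptotically vanishing damping $\alpha/t$ reads (a generic illustration, not necessarily the exact system analyzed in the paper)

$\ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t) = -\nabla_x \mathcal{L}(x(t), y(t)), \qquad \ddot{y}(t) + \frac{\alpha}{t}\,\dot{y}(t) = \nabla_y \mathcal{L}(x(t), y(t)).$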

The convergence rate of the Sandwiching algorithm for convex bounded multiobjective optimization

Sandwiching algorithms, also known as Benson-type algorithms, approximate the nondominated set of convex bounded multiobjective optimization problems by constructing and iteratively improving polyhedral inner and outer approximations. Using a set-valued metric, an estimate of the approximation quality is determined as the distance between the inner and outer approximation. The convergence of the algorithm is evaluated …
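
The set-valued metric in question is commonly the Hausdorff distance between the inner approximation $I$ and the outer approximation $O$ (a standard definition rather than anything specific to this paper):

$d_H(I, O) = \max\Big\{\, \sup_{a \in I} \inf_{b \in O} \|a - b\|, \ \sup_{b \in O} \inf_{a \in I} \|a - b\| \,\Big\}.$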

Goldstein Stationarity in Lipschitz Constrained Optimization

We prove the first convergence guarantees for a subgradient method minimizing a generic Lipschitz function over generic Lipschitz inequality constraints. No smoothness or convexity (or weak convexity) assumptions are made. Instead, we utilize a sequence of recent advances in Lipschitz unconstrained minimization, which showed convergence rates of $O(1/\delta\epsilon^3)$ towards reaching a “Goldstein” stationary point, that …
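
For reference, the notion in the title: a point $x$ is a $(\delta,\epsilon)$-Goldstein stationary point of a Lipschitz function $f$ if the Goldstein $\delta$-subdifferential $\partial_\delta f(x) := \mathrm{conv}\big(\bigcup_{y \in B_\delta(x)} \partial f(y)\big)$, built from the Clarke subdifferential $\partial f$, contains an element of norm at most $\epsilon$:

$\min\{\|g\| : g \in \partial_\delta f(x)\} \le \epsilon.$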

Exact convergence rate of the last iterate in subgradient methods

We study the convergence of the last iterate in subgradient methods applied to the minimization of a nonsmooth convex function with bounded subgradients. We first introduce a proof technique that generalizes the standard analysis of subgradient methods. It is based on tracking the distance between the current iterate and a different reference point at each …
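
A minimal sketch of the plain subgradient method the analysis concerns, returning both the last iterate and the best iterate for comparison; the objective, the $1/\sqrt{k}$ step sizes, and the iteration budget are illustrative choices, not the paper's exact setting.

    import numpy as np

    def subgradient_method(subgrad, f, x0, steps=200):
        """x_{k+1} = x_k - t_k * g_k with t_k = 1/sqrt(k+1); track last and best iterates."""
        x = x0.copy()
        x_best, f_best = x.copy(), f(x)
        for k in range(steps):
            x = x - subgrad(x) / np.sqrt(k + 1)
            if f(x) < f_best:
                x_best, f_best = x.copy(), f(x)
        return x, x_best

    # Example: nonsmooth convex f(x) = ||x||_1 with subgradient sign(x)
    f = lambda x: np.sum(np.abs(x))
    subgrad = lambda x: np.sign(x)
    last, best = subgradient_method(subgrad, f, np.array([2.0, -1.5]))
    print(f(last), f(best))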

On a Frank-Wolfe Approach for Abs-smooth Functions

We propose an algorithm which appears to be the first bridge between the fields of conditional gradient methods and abs-smooth optimization. Our problem setting is motivated by various applications that lead to nonsmoothness, such as $\ell_1$ regularization, phase retrieval problems, or ReLU activation in machine learning. To handle the nonsmoothness in our problem, we propose …
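
For context, a minimal sketch of the classical (smooth) Frank–Wolfe / conditional gradient iteration over the unit $\ell_1$ ball; this is not the abs-smooth variant proposed in the paper, and the objective, feasible set, and step sizes are illustrative assumptions.

    import numpy as np

    def lmo_l1_ball(grad, radius=1.0):
        """Linear minimization oracle over the l1 ball: argmin_{||s||_1 <= r} <grad, s>."""
        i = np.argmax(np.abs(grad))
        s = np.zeros_like(grad)
        s[i] = -radius * np.sign(grad[i])
        return s

    def frank_wolfe(grad_f, x0, iters=100):
        """Classical Frank-Wolfe with step size 2/(k+2)."""
        x = x0.copy()
        for k in range(iters):
            s = lmo_l1_ball(grad_f(x))
            gamma = 2.0 / (k + 2.0)
            x = (1 - gamma) * x + gamma * s   # convex combination keeps x feasible
        return x

    # Example: smooth quadratic 0.5*||x - b||^2 over the unit l1 ball
    b = np.array([0.8, -0.3])
    print(frank_wolfe(lambda x: x - b, np.zeros(2)))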

Faster Lagrangian-based methods: a unified prediction-correction framework

Motivated by the prediction-correction framework constructed by He and Yuan [SIAM J. Numer. Anal. 50: 700-709, 2012], we propose a unified prediction-correction framework to accelerate Lagrangian-based methods. More precisely, for strongly convex optimization, the general linearized Lagrangian method with an indefinite proximal term, the alternating direction method of multipliers (ADMM) with the step size of the Lagrangian multiplier not …
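
As background, a minimal sketch of textbook scaled-form ADMM for $\min_{x,z} 0.5\|x-b\|^2 + \lambda\|z\|_1$ subject to $x - z = 0$; the penalty parameter, problem data, and iteration count are illustrative, and this is the plain method rather than the accelerated prediction-correction variants developed in the paper.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm(b, lam=1.0, rho=1.0, iters=200):
        """Scaled-form ADMM for min 0.5*||x - b||^2 + lam*||z||_1  s.t.  x - z = 0."""
        x = z = u = np.zeros_like(b)
        for _ in range(iters):
            x = (b + rho * (z - u)) / (1.0 + rho)   # x-update: smooth quadratic subproblem
            z = soft_threshold(x + u, lam / rho)    # z-update: prox of the l1 term
            u = u + x - z                           # scaled multiplier (dual) update
        return x, z

    print(admm(np.array([3.0, -0.4, 0.9])))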