Relaxed Proximal Point Algorithm: Tight Complexity Bounds and Acceleration without Momentum

In this paper, we focus on the relaxed proximal point algorithm (RPPA) for solving convex (possibly nonsmooth) optimization problems. We conduct a comprehensive study on three types of relaxation schedules: (i) the constant schedule with relaxation parameter \(\alpha_k\equiv \alpha \in (0, \sqrt{2}]\), (ii) the dynamic schedule put forward by Teboulle and Vaisbourd [TV23], and (iii) the …
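As a point of reference, the relaxed proximal point step under a constant schedule is \(x_{k+1} = (1-\alpha)x_k + \alpha\,\mathrm{prox}_{\lambda f}(x_k)\). Below is a minimal sketch using the absolute value as an illustrative objective with a closed-form proximal map; the objective, the step length \(\lambda\), and the choice \(\alpha = 1.4\) are our assumptions, not choices from the paper.

```python
import numpy as np

def prox_abs(v, lam):
    # Proximal operator of f(x) = |x| (soft-thresholding), used here
    # only as a simple closed-form example of a proximal map.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def rppa(x0, lam=1.0, alpha=1.4, iters=50):
    # Relaxed proximal point iteration with a constant schedule
    # alpha_k = alpha in (0, sqrt(2)]:
    #   x_{k+1} = (1 - alpha) * x_k + alpha * prox_{lam f}(x_k)
    x = x0
    for _ in range(iters):
        x = (1 - alpha) * x + alpha * prox_abs(x, lam)
    return x

print(rppa(5.0))  # converges to the minimizer x* = 0 of f(x) = |x|
```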

TRFD: A Derivative-Free Trust-Region Method Based on Finite Differences for Composite Nonsmooth Optimization

In this work we present TRFD, a derivative-free trust-region method based on finite differences for minimizing composite functions of the form \(f(x)=h(F(x))\), where \(F\) is a black-box function assumed to have a Lipschitz continuous Jacobian, and \(h\) is a known convex Lipschitz function, possibly nonsmooth. The method approximates the Jacobian of \(F\) via forward finite …
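To make the Jacobian-approximation step concrete, here is a minimal sketch of a forward finite-difference Jacobian for a black-box map \(F\); the test map, the step size `h`, and the helper name `fd_jacobian` are illustrative assumptions, not TRFD's actual implementation.

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    # Forward finite-difference approximation of the Jacobian of a
    # black-box map F: R^n -> R^m, costing n extra evaluations of F.
    x = np.asarray(x, dtype=float)
    Fx = np.asarray(F(x))
    J = np.empty((Fx.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        J[:, j] = (np.asarray(F(x + e)) - Fx) / h
    return J

# Illustrative smooth residual map standing in for the black box F.
F = lambda x: np.array([x[0] ** 2 - 1.0, x[0] + x[1]])
print(fd_jacobian(F, [1.0, 2.0]))  # approximately [[2, 0], [1, 1]]
```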

Tuning-Free Bilevel Optimization: New Algorithms and Convergence Analysis

Bilevel optimization has recently attracted considerable attention due to its many applications in machine learning. However, existing methods rely on prior knowledge of problem parameters to determine stepsizes, resulting in significant tuning effort when these parameters are unknown. In this paper, we propose two novel tuning-free algorithms, D-TFBO and S-TFBO. D-TFBO employs …
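The description of the update rules is truncated above; purely as an illustration of the tuning-free idea, here is an AdaGrad-style stepsize built from accumulated gradient norms, which needs no prior knowledge of problem constants. This is our own sketch and should not be read as the D-TFBO or S-TFBO update.

```python
import numpy as np

def tuning_free_gd(grad, x0, iters=200, eps=1.0):
    # Illustrative "inverse of cumulative gradient norms" stepsize,
    # an AdaGrad-style rule requiring no problem constants. A sketch
    # of the general idea only, not the D-TFBO/S-TFBO update.
    x = np.asarray(x0, dtype=float)
    acc = eps  # norm accumulator; eps avoids division by zero
    for _ in range(iters):
        g = grad(x)
        acc += np.linalg.norm(g) ** 2
        x = x - g / np.sqrt(acc)
    return x

# Minimize (x - 3)^2, whose gradient is 2 * (x - 3).
print(tuning_free_gd(lambda x: 2 * (x - 3.0), np.array([0.0])))  # ~[3.]
```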

A Generalization Result for Convergence in Learning-to-Optimize

Convergence in learning-to-optimize has received little study, because conventional convergence guarantees in optimization rest on geometric arguments that do not transfer easily to learned algorithms. Thus, we develop a probabilistic framework that resembles deterministic optimization and allows for transferring geometric arguments into learning-to-optimize. Our main theorem is a generalization result for parametric classes of potentially …

Lipschitz-free Projected Subgradient Method with Time-varying Step-size

We introduce a novel time-varying step-size for the classical projected subgradient method, offering optimal ergodic convergence. Importantly, this approach does not rely on a Lipschitz assumption on the objective function, thereby extending the convergence theory of the projected subgradient method to the non-Lipschitz case.
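For orientation, the classical projected subgradient iteration is \(x_{k+1} = \Pi_C(x_k - t_k g_k)\) with \(g_k \in \partial f(x_k)\). The sketch below uses the common \(t_k = 1/\sqrt{k+1}\) schedule and returns the ergodic (running) average; the schedule, feasible set, and objective are illustrative assumptions, not the step-size proposed in the paper.

```python
import numpy as np

def projected_subgradient(subgrad, proj, x0, iters=500):
    # Classical projected subgradient method with a time-varying
    # stepsize t_k; 1/sqrt(k+1) is a common illustrative schedule.
    x = proj(np.asarray(x0, dtype=float))
    x_avg = np.zeros_like(x)  # ergodic average of the iterates
    for k in range(iters):
        t = 1.0 / np.sqrt(k + 1)
        x = proj(x - t * subgrad(x))
        x_avg += (x - x_avg) / (k + 1)  # running mean update
    return x_avg

# Minimize f(x) = ||x - c||_1 over the unit box [-1, 1]^2.
c = np.array([2.0, -0.5])
sol = projected_subgradient(lambda x: np.sign(x - c),
                            lambda x: np.clip(x, -1.0, 1.0),
                            np.zeros(2))
print(sol)  # approximately [1.0, -0.5]
```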

Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits

Follow-The-Regularized-Leader (FTRL) algorithms often enjoy optimal regret for adversarial as well as stochastic bandit problems and allow for a streamlined analysis. However, FTRL algorithms require the solution of an optimization problem in every iteration and are thus computationally challenging. In contrast, Follow-The-Perturbed-Leader (FTPL) algorithms achieve computational efficiency by perturbing the estimates of the rewards of …
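A minimal sketch of the FTPL template for a finite-armed bandit follows: perturb the cumulative reward estimates with fresh noise and play the argmax arm, which costs a single maximum rather than an optimization solve. The exponential perturbations, the scale `eta`, and the naive reward update are illustrative assumptions (practical bandit FTPL uses importance-weighted estimates).

```python
import numpy as np

rng = np.random.default_rng(0)

def ftpl_bandit(pull, n_arms, horizon, eta=0.1):
    # Follow-The-Perturbed-Leader: rather than solving an optimization
    # problem each round (as FTRL does), perturb the cumulative reward
    # estimates with fresh random noise and play the argmax arm.
    est = np.zeros(n_arms)  # cumulative reward estimates
    for _ in range(horizon):
        noise = rng.exponential(1.0, n_arms)  # illustrative choice
        arm = int(np.argmax(est + noise / eta))
        est[arm] += pull(arm)  # naive update; bandit feedback in
                               # practice uses importance weighting
    return est

# Two-armed example with Bernoulli rewards; arm 1 is better.
means = [0.3, 0.7]
est = ftpl_bandit(lambda a: rng.binomial(1, means[a]), 2, 2000)
print(np.argmax(est))  # likely 1
```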

Efficient parameter-free restarted accelerated gradient methods for convex and strongly convex optimization

This paper develops a new parameter-free restarted method, namely RPF-SFISTA, and a new parameter-free aggressive regularization method, namely A-REG, for solving strongly convex and convex composite optimization problems, respectively. RPF-SFISTA has the major advantage that it requires knowledge of neither the strong convexity parameter of the entire composite objective nor the Lipschitz constant of …
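As background, restarting is typically triggered by a simple test during an accelerated run; the sketch below restarts FISTA-style momentum whenever the objective increases. For brevity it assumes a known Lipschitz constant \(L\) and a quadratic test problem, whereas RPF-SFISTA's whole point is to adapt to unknown parameters, so read this as generic context rather than the paper's scheme.

```python
import numpy as np

def restarted_agd(grad, f, x0, L, iters=300):
    # Accelerated gradient with a function-value restart test: when
    # the objective increases, discard the momentum and start over.
    x = y = np.asarray(x0, dtype=float)
    t, f_prev = 1.0, f(x0)
    for _ in range(iters):
        x_new = y - grad(y) / L
        if f(x_new) > f_prev:        # restart: reset momentum
            y, t = x, 1.0
            x_new = y - grad(y) / L
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t, f_prev = x_new, t_new, f(x_new)
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ A @ x
print(restarted_agd(lambda x: A @ x, f,
                    np.array([5.0, -3.0]), L=4.0))  # approx [0, 0]
```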

Accessible Complexity Bounds for Restarted PDHG on Linear Programs with a Unique Optimizer

The restarted primal-dual hybrid gradient method (rPDHG) has recently emerged as an important tool for solving large-scale linear programs (LPs). For LPs with unique optima, we present an iteration bound of \(O\left(\kappa\Phi\cdot\ln\left(\frac{\kappa\Phi\|w^*\|}{\varepsilon}\right)\right)\), where \(\varepsilon\) is the target tolerance, \(\kappa\) is the standard matrix condition number, \(\|w^*\|\) is the norm of the optimal solution, and \(\Phi\) …
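For context, one (unrestarted) PDHG iteration on the saddle-point form \(\min_{x\ge 0}\max_y\; c^\top x + y^\top(b - Ax)\) of an LP looks as follows; the stepsizes and the tiny test LP are illustrative assumptions, and the restart logic that defines rPDHG is omitted.

```python
import numpy as np

def pdhg_lp(c, A, b, iters=5000):
    # Primal-dual hybrid gradient on the LP saddle point
    #   min_{x >= 0} max_y  c @ x + y @ (b - A @ x).
    # Restarting (the "r" in rPDHG) is omitted in this sketch.
    nrm = np.linalg.norm(A, 2)
    tau = sigma = 0.9 / nrm  # stepsizes with tau*sigma*||A||^2 < 1
    x, y = np.zeros(A.shape[1]), np.zeros(A.shape[0])
    for _ in range(iters):
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
        y = y + sigma * (b - A @ (2 * x_new - x))  # extrapolated step
        x = x_new
    return x

# Tiny LP: min x1 + 2*x2  s.t.  x1 + x2 = 1,  x >= 0.
print(pdhg_lp(np.array([1.0, 2.0]),
              np.array([[1.0, 1.0]]),
              np.array([1.0])))  # approaches the optimum x = (1, 0)
```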

Dual Spectral Projected Gradient Method for Generalized Log-det Semidefinite Programming

Log-det semidefinite programming (SDP) problems are optimization problems that often arise from Gaussian graphical models. The log-det SDP problem with an \(\ell_1\)-norm term has been addressed by many methods, and the dual spectral projected gradient (DSPG) method by Nakagaki et al. in 2020 was designed to efficiently solve the dual problem of the log-det SDP by …
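For readers unfamiliar with the problem class, the primal objective has the graphical-lasso form \(f(X) = \langle S, X\rangle - \log\det X + \rho\|X\|_1\) over positive definite \(X\), with smooth-part gradient \(S - X^{-1}\). The sketch below only evaluates these quantities; the sample matrix is an illustrative assumption, and the DSPG method itself works on the dual problem, which this sketch does not implement.

```python
import numpy as np

def logdet_obj(X, S, rho):
    # Primal log-det SDP objective with an l1 term (graphical lasso):
    #   f(X) = <S, X> - log det X + rho * ||X||_1,  X positive definite.
    sign, logdet = np.linalg.slogdet(X)
    assert sign > 0, "X must be positive definite"
    return np.sum(S * X) - logdet + rho * np.abs(X).sum()

def smooth_grad(X, S):
    # Gradient of the smooth part <S, X> - log det X is S - X^{-1}.
    return S - np.linalg.inv(X)

S = np.array([[2.0, 0.3], [0.3, 1.0]])  # illustrative sample covariance
X = np.linalg.inv(S)                    # stationary point when rho = 0
print(logdet_obj(X, S, rho=0.1))
print(smooth_grad(X, S))                # approximately the zero matrix
```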

An inertial projective splitting method for the sum of two maximal monotone operators

We propose a projective splitting-type method for finding a zero of the sum of two maximal monotone operators. Our method incorporates inertial and relaxation steps, and also allows inexact solutions of the proximal subproblems under a relative-error criterion. We study the asymptotic convergence of the method, as well as its iteration-complexity. …
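To fix ideas, the two ingredients named in the abstract have the generic shape of an inertial extrapolation \(w_k = z_k + \theta(z_k - z_{k-1})\) followed by a relaxed update toward an operator evaluation. The sketch below applies them to a simple firmly nonexpansive map; the map, \(\theta\), and \(\alpha\) are illustrative assumptions, and the projective (hyperplane) construction itself is not implemented.

```python
import numpy as np

def inertial_relaxed_fixed_point(T, z0, theta=0.2, alpha=0.8, iters=200):
    # Generic inertial + relaxed iteration for a nonexpansive map T:
    #   inertial extrapolation: w_k = z_k + theta * (z_k - z_{k-1})
    #   relaxation:  z_{k+1} = (1 - alpha) * w_k + alpha * T(w_k)
    # This shows the step structure only, not projective splitting.
    z_prev = z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        w = z + theta * (z - z_prev)
        z_prev, z = z, (1 - alpha) * w + alpha * T(w)
    return z

# Example: T is the proximal map of f(x) = x**2 / 2, i.e. T(w) = w / 2,
# a firmly nonexpansive map whose unique fixed point is 0.
print(inertial_relaxed_fixed_point(lambda w: w / 2.0, np.array([4.0])))
```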