Tuning-Free Bilevel Optimization: New Algorithms and Convergence Analysis

Bilevel optimization has recently attracted considerable attention due to its abundant applications in machine learning problems. However, existing methods rely on prior knowledge of problem parameters to determine stepsizes, resulting in significant effort in tuning stepsizes when these parameters are unknown. In this paper, we propose two novel tuning-free algorithms, D-TFBO and S-TFBO. D-TFBO employs …
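
For context, the standard bilevel problem that such stepsize schemes target can be written as follows (the symbols \(f\), \(g\), \(x\), \(y\) are generic and not taken from the paper):
\[
\min_{x}\; f\bigl(x, y^*(x)\bigr) \quad \text{s.t.} \quad y^*(x) \in \arg\min_{y}\; g(x, y),
\]
where \(f\) is the upper-level objective and \(g\) the lower-level objective; gradient-based bilevel methods typically run coupled updates on \(x\) and \(y\) with separate stepsizes, which is where the tuning burden arises.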

A Generalization Result for Convergence in Learning-to-Optimize

Convergence in learning-to-optimize is hardly studied, because conventional convergence guarantees in optimization are based on geometric arguments, which cannot be applied easily to learned algorithms. Thus, we develop a probabilistic framework that resembles deterministic optimization and allows for transferring geometric arguments into learning-to-optimize. Our main theorem is a generalization result for parametric classes of potentially …

Lipschitz-free Projected Subgradient Method with Time-varying Step-size

We introduce a novel time-varying step-size for the classical projected subgradient method, offering optimal ergodic convergence. Importantly, this approach does not rely on a Lipschitz assumption on the objective function, thereby extending the convergence result of the projected subgradient method to the non-Lipschitz case.
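
As a point of reference, the classical projected subgradient iteration is (the specific time-varying step-size \(t_k\) proposed in the paper is not reproduced here):
\[
x_{k+1} = P_C\bigl(x_k - t_k\, g_k\bigr), \qquad g_k \in \partial f(x_k),
\]
where \(P_C\) is the Euclidean projection onto the feasible set \(C\). The classical ergodic rate assumes \(\|g_k\| \le L\) for all \(k\) (Lipschitz continuity of \(f\)); dispensing with this bound is precisely what the proposed step-size achieves.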

Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits

Follow-The-Regularized-Leader (FTRL) algorithms often enjoy optimal regret for adversarial as well as stochastic bandit problems and allow for a streamlined analysis. However, FTRL algorithms require the solution of an optimization problem in every iteration and are thus computationally challenging. In contrast, Follow-The-Perturbed-Leader (FTPL) algorithms achieve computational efficiency by perturbing the estimates of the rewards of …
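
To illustrate why FTPL is cheap per round, a generic FTPL rule for a \(K\)-armed bandit selects
\[
A_t \in \arg\max_{i \in \{1, \dots, K\}} \bigl(\widehat{R}_{t,i} + Z_{t,i}\bigr),
\]
where \(\widehat{R}_{t,i}\) is the estimated cumulative reward of arm \(i\) and the \(Z_{t,i}\) are random perturbations (the particular perturbation distribution used in the paper is not given in this excerpt); each round requires only an argmax rather than the optimization problem solved by FTRL.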

Efficient parameter-free restarted accelerated gradient methods for convex and strongly convex optimization

This paper develops a new parameter-free restarted method, namely RPF-SFISTA, and a new parameter-free aggressive regularization method, namely A-REG, for solving strongly convex and convex composite optimization problems, respectively. RPF-SFISTA has the major advantage that it requires knowledge of neither the strong convexity parameter of the entire composite objective nor the Lipschitz constant of …
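
The composite setting referred to here has the standard form (notation generic, not from the paper):
\[
\min_{x}\; \phi(x) := f(x) + h(x),
\]
where \(f\) is smooth with Lipschitz-continuous gradient (constant \(L\)) and \(\phi\) may be \(\mu\)-strongly convex; "parameter-free" means the method runs without prior knowledge of either \(L\) or \(\mu\).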

Accessible Theoretical Complexity of the Restarted Primal-Dual Hybrid Gradient Method for Linear Programs with Unique Optima

The restarted primal-dual hybrid gradient method (rPDHG) has recently emerged as an important tool for solving large-scale linear programs (LPs). For LPs with unique optima, we present an iteration bound of \(\widetilde{O}\left(\kappa\Phi\cdot\ln\left(\frac{\|w^*\|}{\varepsilon}\right)\right)\), where \(\varepsilon\) is the target tolerance, \(\kappa\) is the standard matrix condition number, \(\|w^*\|\) is the norm of the optimal solution, and \(\Phi\) …
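
For orientation, PDHG-type methods are applied to the saddle-point reformulation of the LP (a standard setup, not specific to this paper); for \(\min\{c^\top x : Ax = b,\ x \ge 0\}\) this reads
\[
\min_{x \ge 0}\ \max_{y}\ c^\top x + y^\top (b - Ax),
\]
and in the rPDHG literature \(w^*\) usually denotes the optimal primal-dual pair \((x^*, y^*)\), although this excerpt does not say so explicitly.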

Dual Spectral Projected Gradient Method for Generalized Log-det Semidefinite Programming

Log-det semidefinite programming (SDP) problems are optimization problems that often arise from Gaussian graphical models. A log-det SDP problem with an \(\ell_1\)-norm term has been studied by many methods, and the dual spectral projected gradient (DSPG) method by Nakagaki et al. in 2020 is designed to efficiently solve the dual problem of the log-det SDP by …
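
The \(\ell_1\)-regularized log-det SDP in its most common form (as in sparse Gaussian graphical model estimation; the generalization treated in the paper is cut off in this excerpt) is
\[
\min_{X \succ 0}\; \langle C, X \rangle - \log\det X + \rho \|X\|_1,
\]
where \(C\) is a sample covariance matrix and \(\rho > 0\) controls sparsity; the DSPG method operates on the dual of such problems.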

An inertial projective splitting method for the sum of two maximal monotone operators

We propose a projective splitting type method to solve the problem of finding a zero of the sum of two maximal monotone operators. Our method considers inertial and relaxation steps, and also allows inexact solutions of the proximal subproblems within a relative-error criterion. We study the asymptotic convergence of the method, as well as its iteration-complexity. …
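
Written generically, the problem and the inertial device mentioned here are (concrete parameter choices are not given in this excerpt)
\[
\text{find } z \ \text{ such that } \ 0 \in A(z) + B(z),
\qquad
y_k = z_k + \alpha_k (z_k - z_{k-1}),
\]
where \(A\) and \(B\) are maximal monotone operators and the second relation is a generic inertial extrapolation of the current iterate with parameter \(\alpha_k \ge 0\).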

Second-Order Contingent Derivatives: Computation and Application

It is known that second-order (Studniarski) contingent derivatives can be used to compute tangents to the solution set of a generalized equation when standard (first-order) regularity conditions are absent, but relaxed (second-order) regularity conditions are fulfilled. This fact, roughly speaking, is only relevant in practice as long as the computation of second-order contingent derivatives itself …

Double-proximal augmented Lagrangian methods with improved convergence condition

In this paper, we propose a novel double-proximal augmented Lagrangian method (DP-ALM) for solving a family of linearly constrained convex minimization problems whose objective function is not necessarily smooth. DP-ALM not only enjoys a flexible dual stepsize, but also contains a proximal subproblem with a relatively small proximal parameter. By a new prediction-correction reformulation for this …
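
For reference, the classical augmented Lagrangian of \(\min\{f(x) : Ax = b\}\) is (the extra proximal terms that make the method "double-proximal" are not reproduced here)
\[
\mathcal{L}_\beta(x, \lambda) = f(x) + \lambda^\top (Ax - b) + \tfrac{\beta}{2} \|Ax - b\|^2,
\]
with multiplier \(\lambda\) and penalty \(\beta > 0\); the flexible dual stepsize mentioned above presumably enters the multiplier update \(\lambda_{k+1} = \lambda_k + s\beta\,(A x_{k+1} - b)\), with relaxation factor \(s\).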