Adaptive Algorithms for Robust Phase Retrieval

This paper considers robust phase retrieval, which can be cast as a nonsmooth and nonconvex optimization problem. We propose two first-order algorithms with adaptive step sizes: the subgradient algorithm (AdaSubGrad) and the inexact proximal linear algorithm (AdaIPL). Our contribution lies in the novel design of adaptive step sizes based on quantiles of the absolute …
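
The abstract is truncated before the exact step-size rule, but the flavor can be sketched. Below is a minimal subgradient iteration for the robust phase retrieval objective \(\frac{1}{m}\sum_i |(a_i^\top x)^2 - b_i|\), with a stylized Polyak-type step scaled by a quantile of the absolute residuals; the function name, the quantile level q, and the scaling lam are illustrative assumptions, not the paper's AdaSubGrad rule.

```python
import numpy as np

def robust_pr_subgrad(A, b, x0, iters=500, q=0.5, lam=1.0):
    """Subgradient method for min_x (1/m) * sum_i |(a_i^T x)^2 - b_i|.

    The step below is a stylized Polyak-type surrogate scaled by a
    quantile of the absolute residuals; it is NOT the exact AdaSubGrad
    rule from the paper (whose abstract is truncated above).
    """
    x = x0.copy()
    m = len(b)
    for _ in range(iters):
        Ax = A @ x
        r = Ax ** 2 - b                              # residuals
        g = (2.0 / m) * (A.T @ (np.sign(r) * Ax))    # one valid subgradient
        gn = np.linalg.norm(g)
        if gn == 0.0:
            break
        step = lam * np.quantile(np.abs(r), q) / gn**2
        x -= step * g
    return x

# toy usage: recover x_true (up to sign) from phaseless measurements
rng = np.random.default_rng(0)
m, n = 200, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = (A @ x_true) ** 2
x_hat = robust_pr_subgrad(A, b, rng.standard_normal(n))
```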

Relaxed Proximal Point Algorithm: Tight Complexity Bounds and Acceleration without Momentum

In this paper, we focus on the relaxed proximal point algorithm (RPPA) for solving convex (possibly nonsmooth) optimization problems. We conduct a comprehensive study on three types of relaxation schedules: (i) constant schedule with relaxation parameter \(\alpha_k\equiv \alpha \in (0, \sqrt{2}]\), (ii) the dynamic schedule put forward by Teboulle and Vaisbourd [TV23], and (iii) …
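
For concreteness, here is a minimal sketch of the RPPA iteration \(x_{k+1} = x_k + \alpha\,(\mathrm{prox}_{\lambda f}(x_k) - x_k)\) under the constant schedule (i), instantiated with \(f = \|\cdot\|_1\) purely because its proximal map (soft-thresholding) is closed form; the paper treats general convex \(f\) and two further schedules.

```python
import numpy as np

def soft_threshold(z, t):
    """Closed-form prox of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def rppa_l1(x0, lam=1.0, alpha=1.4, iters=50):
    """Relaxed proximal point iteration for min_x ||x||_1:
        x_{k+1} = x_k + alpha * (prox_{lam*f}(x_k) - x_k),
    with a constant relaxation alpha in (0, sqrt(2)] as in schedule (i).
    The l1-norm is chosen only for its explicit prox; the paper covers
    general convex (possibly nonsmooth) f."""
    x = x0.copy()
    for _ in range(iters):
        p = soft_threshold(x, lam)     # prox step
        x = x + alpha * (p - x)        # relaxation step
    return x

x = rppa_l1(np.array([3.0, -0.5, 10.0]))  # converges to the origin
```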

TRFD: A derivative-free trust-region method based on finite differences for composite nonsmooth optimization

In this work we present TRFD, a derivative-free trust-region method based on finite differences for minimizing composite functions of the form \(f(x)=h(F(x))\), where \(F\) is a black-box function assumed to have a Lipschitz continuous Jacobian, and \(h\) is a known convex Lipschitz function, possibly nonsmooth. The method approximates the Jacobian of \(F\) via forward …
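
The forward-difference Jacobian approximation at the core of TRFD is easy to illustrate. The sketch below is the generic textbook forward-difference scheme (the function name and the increment h are assumptions, not taken from the paper); it costs \(n\) extra evaluations of \(F\) per point.

```python
import numpy as np

def forward_difference_jacobian(F, x, h=1e-7):
    """Approximate the Jacobian of a black-box F: R^n -> R^m by forward
    differences, one coordinate perturbation at a time (n extra
    evaluations of F). TRFD builds its trust-region models from such
    approximations; this helper is an illustration only."""
    Fx = np.asarray(F(x))
    n, m = x.size, Fx.size
    J = np.empty((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (np.asarray(F(x + e)) - Fx) / h
    return J

# example: F feeding a composite objective f(x) = h(F(x)) with h = ||.||_1
F = lambda x: np.array([x[0]**2 - 1.0, x[0] * x[1]])
J = forward_difference_jacobian(F, np.array([1.0, 2.0]))  # ~[[2,0],[2,1]]
```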

Tuning-Free Bilevel Optimization: New Algorithms and Convergence Analysis

Bilevel optimization has recently attracted considerable attention due to its abundant applications in machine learning problems. However, existing methods rely on prior knowledge of problem parameters to determine stepsizes, resulting in significant effort in tuning stepsizes when these parameters are unknown. In this paper, we propose two novel tuning-free algorithms, D-TFBO and S-TFBO. D-TFBO …
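
The abstract is cut off before the details of D-TFBO and S-TFBO, so the sketch below shows only a generic tuning-free stepsize in the AdaGrad-Norm spirit, where the stepsize is the inverse of accumulated gradient norms and no problem parameters are supplied. How the paper couples such updates across the two levels of the bilevel problem is an assumption left to the full text.

```python
import numpy as np

def adagrad_norm(grad_fn, x0, iters=1000, eta0=1.0, eps=1e-12):
    """Generic AdaGrad-Norm update: the stepsize shrinks as the inverse
    square root of accumulated squared gradient norms, so no Lipschitz
    or strong-convexity constants are needed. This is only a tuning-free
    building block, NOT the D-TFBO/S-TFBO algorithms themselves."""
    x = x0.copy()
    acc = 0.0
    for _ in range(iters):
        g = grad_fn(x)
        acc += float(np.dot(g, g))
        x = x - (eta0 / (np.sqrt(acc) + eps)) * g
    return x

# usage on a simple quadratic: converges to x = 3 without any tuning
x_min = adagrad_norm(lambda x: 2.0 * (x - 3.0), np.zeros(1))
```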

A Generalization Result for Convergence in Learning-to-Optimize

Convergence in learning-to-optimize has hardly been studied, because conventional convergence guarantees in optimization rest on geometric arguments that do not transfer easily to learned algorithms. Thus, we develop a probabilistic framework that resembles deterministic optimization and allows for transferring geometric arguments into learning-to-optimize. Our main theorem is a generalization result for parametric classes of potentially …

Spanning and Splitting: Integer Semidefinite Programming for the Quadratic Minimum Spanning Tree Problem

In the quadratic minimum spanning tree problem (QMSTP) one wants to find the minimizer of a quadratic function over all possible spanning trees of a graph. We give two formulations of the QMSTP as mixed-integer semidefinite programs exploiting the algebraic connectivity of a graph. Based on these formulations, we derive a doubly nonnegative relaxation for …
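
The algebraic connectivity that both formulations exploit is the second-smallest eigenvalue \(\lambda_2\) of the graph Laplacian, which is positive exactly when the graph is connected. A minimal computation (an illustration, not taken from the paper):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue lambda_2 of the Laplacian L = D - A.
    A graph is connected iff lambda_2 > 0, which is the spectral
    property MISDP formulations of the QMSTP can use to enforce that
    the selected edges form a spanning tree."""
    L = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(L))[1]

# path on 3 nodes (a spanning tree of the triangle): lambda_2 = 1
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
print(algebraic_connectivity(adj))  # ~1.0
```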

Lipschitz-free Projected Subgradient Method with Time-varying Step-size

We introduce a novel time-varying step-size for the classical projected subgradient method, offering optimal ergodic convergence. Importantly, this approach does not rely on a Lipschitz assumption on the objective function, thereby extending the convergence results for the projected subgradient method to the non-Lipschitz case.
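
As a sketch, one standard Lipschitz-free choice normalizes the step by the subgradient norm, e.g. \(t_k = R/(\sqrt{k+1}\,\|g_k\|)\); this is an illustrative rule in the same spirit, not necessarily the paper's exact step-size, and the ergodic (running) average is returned because that is where such guarantees hold.

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, iters=1000, R=1.0):
    """Projected subgradient method with a normalized, time-varying step
    t_k = R / (sqrt(k+1) * ||g_k||). Dividing by ||g_k|| removes any
    Lipschitz assumption; the paper's own time-varying rule (behind the
    truncated abstract) may differ. Returns the ergodic average."""
    x = x0.copy()
    avg = np.zeros_like(x)
    for k in range(iters):
        g = subgrad(x)
        gn = np.linalg.norm(g)
        if gn > 0.0:
            x = project(x - (R / (np.sqrt(k + 1.0) * gn)) * g)
        avg += (x - avg) / (k + 1.0)   # running (ergodic) average
    return avg

# usage: minimize ||x||_1 over the Euclidean unit ball
proj_ball = lambda z: z / max(1.0, np.linalg.norm(z))
x_bar = projected_subgradient(np.sign, proj_ball, np.ones(5))
```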

Optimism in the Face of Ambiguity Principle for Multi-Armed Bandits

Follow-The-Regularized-Leader (FTRL) algorithms often enjoy optimal regret for adversarial as well as stochastic bandit problems and allow for a streamlined analysis. However, FTRL algorithms require the solution of an optimization problem in every iteration and are thus computationally challenging. In contrast, Follow-The-Perturbed-Leader (FTPL) algorithms achieve computational efficiency by perturbing the estimates of the rewards …
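
A minimal FTPL loop for a stochastic bandit makes the computational point concrete: each round draws fresh perturbations, adds them to the cumulative reward estimates, and pulls the argmax arm, so no per-round optimization problem is solved. The exponential noise and the plain (non-importance-weighted) reward estimates below are simplifying assumptions, not the paper's construction.

```python
import numpy as np

def ftpl_bandit(pulls, n_arms, true_means, eta=10.0, rng=None):
    """Follow-The-Perturbed-Leader on a stochastic multi-armed bandit:
    perturb cumulative reward estimates with fresh random noise each
    round and pull the argmax arm. Replaces FTRL's per-round
    optimization with a single sampling-plus-argmax step."""
    rng = rng or np.random.default_rng(0)
    est = np.zeros(n_arms)       # cumulative reward estimates
    counts = np.zeros(n_arms)    # how often each arm was pulled
    for _ in range(pulls):
        noise = rng.exponential(eta, size=n_arms)   # perturbation
        arm = int(np.argmax(est + noise))
        reward = rng.binomial(1, true_means[arm])   # Bernoulli reward
        counts[arm] += 1
        est[arm] += reward
    return counts

# the best arm (mean 0.7) should accumulate most of the pulls
counts = ftpl_bandit(2000, 3, np.array([0.2, 0.5, 0.7]))
```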

Connectivity via convexity: Bounds on the edge expansion in graphs

Convexification techniques have gained increasing interest over the past decades. In this work, we apply a recently developed convexification technique for fractional programs by He, Liu and Tawarmalani (2024) to the problem of determining the edge expansion of a graph. Computing the edge expansion of a graph is a well-known, difficult combinatorial problem that seeks …
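
For reference, the edge expansion is \(h(G) = \min_{0 < |S| \le n/2} |\partial S|/|S|\), where \(\partial S\) is the set of edges leaving \(S\). The brute-force sketch below (an illustration, not from the paper) enumerates all subsets and is exponential in \(n\), which is precisely what motivates convex relaxations:

```python
import numpy as np
from itertools import combinations

def edge_expansion(adj):
    """Brute-force h(G) = min over nonempty S with |S| <= n/2 of
    |boundary(S)| / |S|. Exponential in n; viable only for tiny graphs,
    which is why tractable convex (doubly nonnegative) bounds matter."""
    n = adj.shape[0]
    best = np.inf
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            S = list(S)
            T = [v for v in range(n) if v not in S]
            cut = adj[np.ix_(S, T)].sum()   # edges leaving S
            best = min(best, cut / k)
    return best

# 4-cycle: the best balanced cut has 2 boundary edges, so h = 2/2 = 1
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
print(edge_expansion(adj))  # 1.0
```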

Efficient parameter-free restarted accelerated gradient methods for convex and strongly convex optimization

This paper develops a new parameter-free restarted method, namely RPF-SFISTA, and a new parameter-free aggressive regularization method, namely A-REG, for solving strongly convex and convex composite optimization problems, respectively. RPF-SFISTA has the major advantage that it requires knowledge of neither the strong convexity parameter of the entire composite objective nor the Lipschitz constant of …
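
A simplified sketch of the two adaptive ingredients such a method rests on, backtracking on the smoothness constant and a function-value-based restart of the momentum, is given below. The loop structure, the restart test, and all names are assumptions for illustration; the paper's actual scheme, including its handling of the unknown strong convexity parameter, is more refined.

```python
import numpy as np

def restarted_fista(grad, f, x0, iters=500, L0=1.0):
    """FISTA with (i) backtracking on the smoothness constant L, so no
    Lipschitz constant is supplied, and (ii) a function-value restart of
    the momentum. A generic sketch of the ingredients behind restarted
    parameter-free accelerated methods, NOT RPF-SFISTA itself."""
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    f_prev = f(x)
    for _ in range(iters):
        g = grad(y)
        while True:                        # backtracking line search on L
            x_new = y - g / L
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= 2.0
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        if f(x_new) > f_prev:              # restart momentum on increase
            y, t_new = x_new.copy(), 1.0
        x, t, f_prev = x_new, t_new, f(x_new)
    return x

# usage on an ill-conditioned strongly convex quadratic
Q = np.diag([1.0, 100.0])
x_star = restarted_fista(lambda z: Q @ z, lambda z: 0.5 * z @ Q @ z,
                         np.array([10.0, 10.0]))
```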