Optimized methods for composite optimization: a reduction perspective

Recent advances in convex optimization have leveraged computer-assisted proofs to develop optimized first-order methods that improve over classical algorithms. However, each optimized method is specially tailored for a particular problem setting, and it is a well-documented challenge to extend optimized methods to other settings due to their highly bespoke design and analysis. We provide a general … Read more
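
For orientation (a standard formulation, not necessarily the exact setting of the paper), the composite problems referred to here are typically of the form
\[
\min_{x \in \mathbb{R}^n} \; F(x) = f(x) + g(x),
\]
with \(f\) smooth and convex and \(g\) convex but possibly nonsmooth; the basic building block of first-order methods in this setting is the proximal gradient step \(x_{k+1} = \operatorname{prox}_{\alpha g}\bigl(x_k - \alpha \nabla f(x_k)\bigr)\) with stepsize \(\alpha > 0\).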

First-order methods for stochastic and finite-sum convex optimization with deterministic constraints

In this paper, we study a class of stochastic and finite-sum convex optimization problems with deterministic constraints. Existing methods typically aim to find an \(\epsilon\)-expectedly feasible stochastic optimal solution, in which the expected constraint violation and expected optimality gap are both within a prescribed tolerance \(\epsilon\). However, in many practical applications, constraints must be nearly … Read more
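
To make the notion concrete (generic notation introduced here for illustration: objective \(f\), constraint map \(g\), optimal solution \(x^*\)), a random output \(\bar{x}\) is \(\epsilon\)-expectedly feasible and optimal when
\[
\mathbb{E}\bigl[\|[g(\bar{x})]_+\|\bigr] \le \epsilon
\qquad\text{and}\qquad
\mathbb{E}\bigl[f(\bar{x}) - f(x^*)\bigr] \le \epsilon ,
\]
i.e., both the expected constraint violation and the expected optimality gap are bounded by the tolerance \(\epsilon\).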

A Variational Analysis Approach for Bilevel Hyperparameter Optimization with Sparse Regularization

We study a bilevel optimization framework for hyperparameter learning in variational models, with a focus on sparse regression and classification tasks. In particular, we consider a weighted elastic-net regularizer, where feature-wise regularization parameters are learned through a bilevel formulation. A key novelty of our approach is the use of a Forward-Backward (FB) reformulation of the … Read more
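
As an illustration of the regularizer and bilevel structure involved (the symbols \(\lambda_j\), \(\mu\), and the validation/training losses below are introduced here for exposition and may not match the paper's notation), a weighted elastic-net penalty with feature-wise weights is
\[
R_{\lambda,\mu}(w) = \sum_{j=1}^{d} \lambda_j |w_j| + \frac{\mu}{2}\|w\|_2^2,
\]
and the hyperparameters are learned through a bilevel problem of the form
\[
\min_{\lambda \ge 0} \; \mathcal{L}_{\mathrm{val}}\bigl(w(\lambda)\bigr)
\quad\text{s.t.}\quad
w(\lambda) \in \arg\min_{w} \; \mathcal{L}_{\mathrm{train}}(w) + R_{\lambda,\mu}(w).
\]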

Lipschitz Stability for a Class of Parametric Optimization Problems with Polyhedral Feasible Set Mapping

This paper is devoted to the Lipschitz analysis of the solution sets and optimal values for a class of parametric optimization problems involving a polyhedral feasible set mapping and a quadratic objective function with a parametric linear part. Recall that a multifunction is said to be polyhedral if its graph is the union of finitely many polyhedral … Read more
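
Schematically, and in notation chosen here for illustration rather than taken from the paper, the problem class under study can be written as
\[
\min_{x} \; \tfrac{1}{2}\,x^{\top} Q x + c(p)^{\top} x
\quad\text{s.t.}\quad x \in \Gamma(p),
\]
where \(\Gamma\) is a polyhedral feasible set mapping of the parameter \(p\) (its graph is a finite union of polyhedral sets) and only the linear part \(c(p)\) of the quadratic objective depends on the parameter.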

Efficient QUIC-Based Damped Inexact Iterative Reweighting for Sparse Inverse Covariance Estimation with Nonconvex Partly Smooth Regularization

In this paper, we study sparse inverse covariance matrix estimation incorporating partly smooth nonconvex regularizers. To solve the resulting regularized log-determinant problem, we develop DIIR-QUIC, a novel Damped Inexact Iteratively Reweighted algorithm based on the QUadratic approximate Inverse Covariance (QUIC) method. Our approach generalizes the classic iteratively reweighted \(\ell_1\) scheme through damped fixed-point updates. A key novelty … Read more
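
For context, the regularized log-determinant problem has the standard form (the paper's specific partly smooth nonconvex penalty is indicated only generically by \(\phi\) here)
\[
\min_{X \succ 0} \; -\log\det X + \operatorname{tr}(S X) + \sum_{i \ne j} \phi\bigl(|X_{ij}|\bigr),
\]
where \(S\) is the sample covariance matrix; an iteratively reweighted \(\ell_1\) scheme replaces each \(\phi(|X_{ij}|)\) by a weighted \(\ell_1\) surrogate \(w_{ij}|X_{ij}|\) at every iteration, with the weights recomputed from the current iterate.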

Dual certificates of primal cone membership

We discuss easily verifiable cone membership certificates, that is, certificates consisting of vectors in the dual cone \(K^*\) that prove relations of the form \( b\in K \) for convex cones \(K\). Vectors in the dual cone are usually associated with separating hyperplanes, and so they are interpreted as certificates of non-membership in the standard … Read more
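
To fix notation (standard definitions, not specific to this paper), the dual cone is
\[
K^* = \{\, y : \langle y, x \rangle \ge 0 \ \text{for all } x \in K \,\},
\]
and a vector \(y \in K^*\) with \(\langle y, b \rangle < 0\) yields a separating hyperplane certifying \(b \notin K\); the question addressed here is how vectors in \(K^*\) can instead certify membership \(b \in K\).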

Asymptotically Fair and Truthful Allocation of Public Goods

We study the fair and truthful allocation of \(m\) divisible public items among \(n\) agents, each with distinct preferences for the items. To aggregate agents’ preferences fairly, we focus on finding a core solution. For divisible items, a core solution always exists and can be calculated by maximizing the Nash welfare objective. However, such a … Read more
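
For reference, the Nash welfare objective mentioned above can be written, in generic notation with \(u_i(x)\) denoting agent \(i\)'s utility for an allocation \(x\) over the feasible set \(\mathcal{X}\), as
\[
\max_{x \in \mathcal{X}} \; \sum_{i=1}^{n} \log u_i(x),
\]
and, for divisible items, a maximizer of this objective is a core solution.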

Lipschitz-Free Mirror Descent Methods for Non-Smooth Optimization Problems

We correct the part of the convergence-rate analysis of the mirror descent method that concerns the adaptive time-varying step-size rules of Alkousa et al. (MOTOR 2024, pp. 3-18). Moreover, we present a Lipschitz-free mirror descent method that achieves weak ergodic convergence, generalizing the convergence results of the mirror … Read more
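
As background (standard form, not the specific variant analyzed in the paper), the mirror descent update with time-varying step sizes \(h_k\), subgradients \(g_k \in \partial f(x_k)\), and Bregman divergence \(V(\cdot,\cdot)\) reads
\[
x_{k+1} = \arg\min_{x \in Q} \;\bigl\{\, h_k \langle g_k, x \rangle + V(x, x_k) \,\bigr\},
\]
and a Lipschitz-free method chooses the steps \(h_k\) adaptively, without requiring a known Lipschitz constant of \(f\).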

Gradient Methods with Online Scaling Part I. Theoretical Foundations

This paper establishes the theoretical foundations of online scaled gradient methods (OSGM), a framework that utilizes online learning to adapt stepsizes and provably accelerate first-order methods. OSGM quantifies the effectiveness of a stepsize by a feedback function motivated by a convergence measure and uses the feedback to adjust the stepsize through an online learning … Read more
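
As a schematic illustration (one possible instantiation; the scaling \(P_k\), learning rate \(\eta\), and feedback \(\ell_k\) are generic symbols introduced here, not the paper's notation), a scaled gradient step with an online update of the scaling takes the form
\[
x_{k+1} = x_k - P_k \nabla f(x_k),
\qquad
P_{k+1} = P_k - \eta\, \nabla_P \ell_k(P_k),
\]
where \(\ell_k\) is a feedback function built from a convergence measure (for instance, a surrogate for the per-iteration contraction of the optimality gap) and the scaling \(P_k\) plays the role of the stepsize or preconditioner.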

A symmetric extrapolated proximal alternating predictor-corrector method for saddle-point problems

The proximal alternating predictor-corrector (PAPC) method is a widely used first-order algorithm for solving convex-concave saddle-point problems involving both smooth and nonsmooth components. Unlike the primal-dual hybrid gradient (PDHG) method, which incorporates an extrapolation step with parameter $\theta \in (0,1]$ to improve convergence, the existing convergence analysis of PAPC has been limited to the case … Read more
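
For context, a generic saddle-point template of the kind referred to above (notation chosen here for illustration) is
\[
\min_{x} \max_{y} \; f(x) + \langle A x, y \rangle - g^*(y),
\]
and PDHG with extrapolation parameter \(\theta \in (0,1]\) iterates
\[
\begin{aligned}
y_{k+1} &= \operatorname{prox}_{\sigma g^*}\bigl(y_k + \sigma A \bar{x}_k\bigr),\\
x_{k+1} &= \operatorname{prox}_{\tau f}\bigl(x_k - \tau A^{\top} y_{k+1}\bigr),\\
\bar{x}_{k+1} &= x_{k+1} + \theta\,(x_{k+1} - x_k),
\end{aligned}
\]
whereas PAPC handles the smooth component through its gradient rather than a proximal step.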