Lipschitz Stability for a Class of Parametric Optimization Problems with Polyhedral Feasible Set Mapping

This paper is devoted to the Lipschitz analysis of the solution sets and optimal values for a class of parametric optimization problems involving a polyhedral feasible set mapping and a quadratic objective function with a parametric linear part. Recall that a multifunction is said to be polyhedral if its graph is the union of finitely many polyhedral … Read more
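
For reference, the standard definition behind this notion (classically associated with Robinson; the symbols \(A_i, B_i, c_i\) below are illustrative notation, not taken from the abstract): a multifunction \(\Phi:\mathbb{R}^n \rightrightarrows \mathbb{R}^m\) is polyhedral when

\[
\operatorname{gph}\Phi = \{(x,y)\in\mathbb{R}^n\times\mathbb{R}^m : y\in\Phi(x)\} = \bigcup_{i=1}^{k} P_i, \qquad P_i = \{(x,y) : A_i x + B_i y \le c_i\},
\]

i.e., its graph is a finite union of convex polyhedral sets \(P_i\).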

Efficient QUIC-Based Damped Inexact Iterative Reweighting for Sparse Inverse Covariance Estimation with Nonconvex Partly Smooth Regularization

In this paper, we study sparse inverse covariance matrix estimation incorporating partly smooth nonconvex regularizers. To solve the resulting regularized log-determinant problem, we develop DIIR-QUIC, a novel Damped Inexact Iteratively Reweighted algorithm based on the QUadratic approximate Inverse Covariance (QUIC) method. Our approach generalizes the classic iteratively reweighted \(\ell_1\) scheme through damped fixed-point updates. A key novelty … Read more
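
A minimal sketch of the damped reweighting idea, under stated assumptions: the inner weighted problem is delegated to a placeholder (`solve_weighted_glasso` is hypothetical; the paper solves that step inexactly with a QUIC-type method), and an MCP-style penalty derivative stands in for the paper's partly smooth nonconvex regularizer.

```python
import numpy as np

def mcp_weight(t, lam=0.5, gamma=3.0):
    """Derivative of the MCP penalty, used as an elementwise reweighting rule
    (a stand-in for the paper's partly smooth nonconvex regularizer)."""
    return np.maximum(lam - np.abs(t) / gamma, 0.0)

def damped_reweighted_l1(S, solve_weighted_glasso, iters=20, eta=0.5):
    """Outer loop: damped fixed-point update of the l1 weights W.

    `solve_weighted_glasso(S, W)` is a hypothetical solver of
        min_{Theta > 0}  -logdet(Theta) + <S, Theta> + sum_ij W_ij |Theta_ij|.
    """
    p = S.shape[0]
    W = np.full((p, p), 0.5)
    np.fill_diagonal(W, 0.0)                  # diagonal typically unpenalized
    Theta = np.linalg.inv(S + 0.1 * np.eye(p))
    for _ in range(iters):
        Theta = solve_weighted_glasso(S, W)
        W = (1.0 - eta) * W + eta * mcp_weight(Theta)   # damped update
        np.fill_diagonal(W, 0.0)
    return Theta
```

The damping parameter \(\eta \in (0,1]\) interpolates between the previous weights and the classic reweighted-\(\ell_1\) update (\(\eta = 1\)); the precise damping and inexactness conditions are the paper's contribution and are not reproduced here.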

Lipschitz-Free Mirror Descent Methods for Non-Smooth Optimization Problems

The part of the convergence-rate analysis of the mirror descent method that concerns the adaptive time-varying step-size rules of Alkousa et al. (MOTOR 2024, pp. 3-18) is corrected. Moreover, a Lipschitz-free mirror descent method that achieves weak ergodic convergence is presented, generalizing the convergence results of the mirror … Read more
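
As a concrete illustration of the setting, here is a minimal entropic mirror descent on the probability simplex with an adaptive, Lipschitz-free step size and ergodic averaging. The AdaGrad-style rule below is a generic choice for illustration only; the corrected time-varying rules in the paper differ in their exact form.

```python
import numpy as np

def mirror_descent_simplex(subgrad, x0, iters=2000):
    """Entropic mirror descent with an adaptive step size that needs no
    Lipschitz constant, returning the ergodic (running) average."""
    x, x_avg, sq_sum = x0.copy(), np.zeros_like(x0), 0.0
    for k in range(1, iters + 1):
        g = subgrad(x)
        sq_sum += np.max(np.abs(g)) ** 2            # accumulated dual norms
        x = x * np.exp(-g / np.sqrt(sq_sum + 1e-12))  # entropic mirror step
        x /= x.sum()                                # back onto the simplex
        x_avg += (x - x_avg) / k                    # ergodic average
    return x_avg

# Toy usage: minimize the nonsmooth f(x) = ||A x - b||_1 over the simplex.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
print(mirror_descent_simplex(lambda x: A.T @ np.sign(A @ x - b), np.full(5, 0.2)))
```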

Inverse Optimization via Learning Feasible Regions

We study inverse optimization (IO), where the goal is to use a parametric optimization program as the hypothesis class to infer relationships between input-decision pairs. Most of the literature focuses on learning only the objective function, as learning the constraint function (i.e., feasible regions) leads to nonconvex training programs. Motivated by this, we focus on … Read more
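
To make the setup concrete, here is a hypothetical toy instance (all names and the grid-search training step below are illustrative, not the paper's method): decisions are optimal for a linear program whose feasible region depends on an unknown parameter \(\theta\), and \(\theta\) is recovered from observed input-decision pairs.

```python
import numpy as np
from scipy.optimize import linprog

def forward(c, theta):
    """Hypothesis class: x solves min c.x s.t. x >= 0, x1 + x2 <= theta."""
    return linprog(c, A_ub=[[1.0, 1.0]], b_ub=[theta],
                   bounds=[(0, None)] * 2).x

rng = np.random.default_rng(1)
theta_true = 2.5
data = [(c, forward(c, theta_true))                 # (input, decision) pairs
        for c in (rng.standard_normal(2) - 1.0 for _ in range(5))]

# Inverse step: fit theta by decision-prediction error (grid search here;
# the paper's point is to obtain convex training programs instead).
grid = np.linspace(0.5, 5.0, 91)
loss = [sum(np.linalg.norm(forward(c, th) - x) ** 2 for c, x in data)
        for th in grid]
print("recovered theta:", grid[int(np.argmin(loss))])   # approx. 2.5
```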

Subgradient Regularization: A Descent-Oriented Subgradient Method for Nonsmooth Optimization

In nonsmooth optimization, a negative subgradient is not necessarily a descent direction, making the design of convergent descent methods based on zeroth-order and first-order information a challenging task. The well-studied bundle methods and gradient sampling algorithms construct descent directions by aggregating subgradients at nearby points in seemingly different ways, and are often complicated or lack … Read more
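
The common core of those constructions is an aggregated direction: the negative minimum-norm element of the convex hull of collected subgradients. A minimal sketch, assuming the subgradients are stacked as rows of `G` (the quadratic subproblem is solved here with a generic SLSQP call, not with either paper-specific machinery):

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_descent_direction(G):
    """Return -v, where v is the minimum-norm point of conv{rows of G};
    bundle and gradient-sampling methods both build descent directions
    from such aggregates of nearby subgradients."""
    k = G.shape[0]
    obj = lambda lam: float(lam @ (G @ G.T) @ lam)      # ||G^T lam||^2
    cons = ({"type": "eq", "fun": lambda lam: lam.sum() - 1.0},)
    res = minimize(obj, np.full(k, 1.0 / k), bounds=[(0.0, 1.0)] * k,
                   constraints=cons, method="SLSQP")
    return -(res.x @ G)

# Toy usage: subgradients of f(x) = |x_1| + |x_2| sampled near the kink at 0.
G = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0]])
print(min_norm_descent_direction(G))   # ~ [0, 0]: 0 lies in the hull
```

Here the aggregate is numerically zero because \(0\) lies in the convex hull of the sampled subgradients, which is exactly the approximate-stationarity certificate such methods use to stop.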

On the Acceleration of Proximal Bundle Methods

The proximal bundle method (PBM) is a fundamental and computationally effective algorithm for solving nonsmooth optimization problems. In this paper, we present the first variant of the PBM for smooth objectives, achieving an accelerated iteration complexity of \(\mathcal{O}\big(\frac{1}{\sqrt{\epsilon}}\log\frac{1}{\epsilon}\big)\), where \(\epsilon\) is the desired accuracy. Our approach addresses an open question regarding the convergence guarantee of … Read more
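
For orientation, one PBM iteration minimizes a cutting-plane model plus a proximal term. A minimal sketch of that subproblem under stated assumptions (epigraph reformulation solved by a generic SLSQP call; the paper's accelerated variant wraps such steps in a different outer scheme):

```python
import numpy as np
from scipy.optimize import minimize

def prox_bundle_step(center, bundle, t=1.0):
    """Solve min_x max_i [f_i + g_i.(x - x_i)] + ||x - center||^2 / (2 t),
    where `bundle` holds (x_i, f_i, g_i) triples collected so far."""
    n = center.size
    obj = lambda z: z[n] + np.dot(z[:n] - center, z[:n] - center) / (2 * t)
    cons = [{"type": "ineq",                      # epigraph: r >= each cut
             "fun": lambda z, xi=xi, fi=fi, gi=gi: z[n] - fi - gi @ (z[:n] - xi)}
            for (xi, fi, gi) in bundle]
    z0 = np.append(center, max(fi for _, fi, _ in bundle))
    return minimize(obj, z0, constraints=cons, method="SLSQP").x[:n]

# Toy usage: two cuts of f(x) = |x| around the current center x = 1.
cuts = [(np.array([1.0]), 1.0, np.array([1.0])),
        (np.array([-0.5]), 0.5, np.array([-1.0]))]
print(prox_bundle_step(np.array([1.0]), cuts))   # ~ [0.], the prox of |.|
```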

A relaxed version of Ryu’s three-operator splitting method for structured nonconvex optimization

In this work, we propose a modification of Ryu’s splitting algorithm for minimizing the sum of three functions, two of which are convex with Lipschitz continuous gradients, while the third is an arbitrary proper closed function that is not necessarily convex. The modification is essential to facilitate the convergence analysis, particularly in establishing a … Read more
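
Ryu's scheme and the paper's relaxed variant are not reproduced here; as a point of reference for the problem template \(\min_x f(x) + g(x) + h(x)\) with \(h\) smooth, below is a minimal sketch of the closely related Davis-Yin three-operator splitting (a different, standard method, named plainly as such):

```python
import numpy as np

def davis_yin(prox_f, prox_g, grad_h, z, gamma=0.5, lam=1.0, iters=500):
    """Davis-Yin splitting for min f + g + h with h smooth; shown only as
    the standard point of comparison for three-operator schemes."""
    for _ in range(iters):
        x = prox_f(z, gamma)
        y = prox_g(2.0 * x - z - gamma * grad_h(x), gamma)
        z = z + lam * (y - x)
    return x

# Toy usage: min over the box [0,1]^3 of ||x||_1 + 0.5 ||x - c||^2.
c = np.array([0.8, -0.3, 2.0])
proj_box = lambda z, g: np.clip(z, 0.0, 1.0)                     # prox of f
soft = lambda z, g: np.sign(z) * np.maximum(np.abs(z) - g, 0.0)  # prox of g
print(davis_yin(proj_box, soft, lambda x: x - c, np.zeros(3)))   # ~ [0, 0, 1]
```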

A note on asynchronous Projective Splitting in Julia

While it has been mathematically proven that Projective Splitting (PS) algorithms can converge in parallel and distributed computing settings, to date there appear to have been no open-source implementations of the full algorithm with asynchronous computing capabilities. This note fills this gap by providing a Julia implementation of the asynchronous PS algorithm of Eckstein and Combettes for … Read more

NonOpt: Nonconvex, Nonsmooth Optimizer

NonOpt, a C++ software package for minimizing locally Lipschitz objective functions, is presented. The software is intended primarily for minimizing objective functions that are nonconvex and/or nonsmooth. The package implements two main algorithmic strategies: a gradient-sampling method and a proximal-bundle method. Each strategy can employ quasi-Newton techniques for accelerating convergence in practice. The … Read more

The Least Singular Value Function in Variational Analysis

Metric regularity is among the central concepts of nonlinear and variational analysis, constrained optimization, and their numerous applications. However, metric regularity can be elusive for some important ill-posed classes of problems, including polynomial equations, parametric variational systems, smooth reformulations of complementarity systems with degenerate solutions, etc. The study of stability issues for such … Read more
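
For context, the standard definition referred to above: a set-valued mapping \(F\) is metrically regular at \(\bar{x}\) for \(\bar{y}\in F(\bar{x})\) if there exists \(\kappa \ge 0\) such that, for all \((x,y)\) near \((\bar{x},\bar{y})\),

\[
d\bigl(x, F^{-1}(y)\bigr) \;\le\; \kappa\, d\bigl(y, F(x)\bigr),
\]

so that the distance to solutions of \(y \in F(x)\) is controlled linearly by the residual; the ill-posed classes mentioned above are ones where such a \(\kappa\) may fail to exist.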