Complexity of normalized stochastic first-order methods with momentum under heavy-tailed noise

In this paper, we propose practical normalized stochastic first-order methods with Polyak momentum, multi-extrapolated momentum, and recursive momentum for solving unconstrained optimization problems. These methods employ dynamically updated algorithmic parameters and do not require explicit knowledge of problem-dependent quantities such as the Lipschitz constant or noise bound. We establish first-order oracle complexity results for finding … Read more
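The normalized-momentum idea can be illustrated with a minimal sketch (the function name, noise model, and parameter schedules below are illustrative assumptions, not the paper's dynamically updated rules): the momentum buffer averages stochastic gradients, and the step is taken along the *normalized* momentum direction, so the update size is governed by the stepsize alone even when the gradient noise is heavy-tailed.

```python
import numpy as np

def normalized_sgd_momentum(grad, x0, steps=500, eta0=0.5, beta=0.9, seed=0):
    """Sketch of normalized SGD with Polyak (heavy-ball) momentum.

    Illustrative schedules only: the paper's methods update their
    parameters dynamically without knowing problem constants.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float).copy()
    m = np.zeros_like(x)
    for k in range(steps):
        g = grad(x) + rng.standard_t(df=3, size=x.shape)  # heavy-tailed noise
        m = beta * m + (1.0 - beta) * g                   # Polyak momentum
        eta = eta0 / np.sqrt(k + 1)                       # decaying stepsize
        x = x - eta * m / (np.linalg.norm(m) + 1e-12)     # normalized step
    return x
```

On a simple quadratic \(f(x) = \tfrac12\|x\|^2\) (so `grad = lambda z: z`), the normalized iterates make steady progress despite the heavy-tailed (Student-t, 3 degrees of freedom) gradient noise.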

First-order methods for stochastic and finite-sum convex optimization with deterministic constraints

In this paper, we study a class of stochastic and finite-sum convex optimization problems with deterministic constraints. Existing methods typically aim to find an \(\epsilon\)-expectedly feasible stochastic optimal solution, in which the expected constraint violation and expected optimality gap are both within a prescribed tolerance \(\epsilon\). However, in many practical applications, constraints must be nearly … Read more
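In symbols, for a problem \(\min\{f(x) : g(x) \le 0\}\), an \(\epsilon\)-expectedly feasible stochastic optimal solution \(x\) as described above satisfies (a sketch of the notion; the paper's exact definition may differ):

```latex
\mathbb{E}\big[\|[g(x)]_+\|\big] \le \epsilon,
\qquad
\mathbb{E}\big[f(x) - f^*\big] \le \epsilon,
```

whereas the stronger goal alluded to is to enforce the deterministic constraints nearly exactly, not merely in expectation.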

Lipschitz Stability for a Class of Parametric Optimization Problems with Polyhedral Feasible Set Mapping

This paper is devoted to the Lipschitz analysis of the solution sets and optimal values for a class of parametric optimization problems involving a polyhedral feasible set mapping and a quadratic objective function with parametric linear part. Recall that a multifunction is said to be polyhedral if its graph is the union of finitely many polyhedral … Read more

Novel closed-loop controllers for fractional linear quadratic tracking systems

A new method for finding closed-loop optimal controllers of fractional tracking quadratic optimal control problems is introduced. The optimality conditions for the fractional optimal control problem are obtained. Illustrative examples are presented to show the applicability and capabilities of the method.

Efficient QUIC-Based Damped Inexact Iterative Reweighting for Sparse Inverse Covariance Estimation with Nonconvex Partly Smooth Regularization

In this paper, we study sparse inverse covariance matrix estimation incorporating partly smooth nonconvex regularizers. To solve the resulting regularized log-determinant problem, we develop DIIR-QUIC—a novel Damped Inexact Iteratively Reweighted algorithm based on the QUadratic approximate Inverse Covariance (QUIC) method. Our approach generalizes the classic iteratively reweighted \(\ell_1\) scheme through damped fixed-point updates. A key novelty … Read more
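The damped reweighting idea can be seen on a toy separable problem (an illustrative sketch; DIIR-QUIC applies the same principle to the log-determinant problem via QUIC subproblems, and the penalty, parameter names, and damping rule below are assumptions): minimize \(\tfrac12\|x-y\|^2 + \sum_i p(|x_i|)\) with the nonconvex log penalty \(p(t) = \lambda \log(1 + t/\varepsilon)\), alternating a weighted-\(\ell_1\) subproblem with a damped weight update \(w \leftarrow (1-\theta)w + \theta\, p'(|x|)\).

```python
import numpy as np

def soft_threshold(y, w):
    """Closed-form solution of min_x 0.5*(x-y)^2 + w*|x| (elementwise)."""
    return np.sign(y) * np.maximum(np.abs(y) - w, 0.0)

def damped_reweighted_l1(y, lam=0.2, eps=0.5, theta=0.7, iters=30):
    """Sketch of a damped iteratively reweighted l1 scheme on a toy
    denoising problem with the log penalty p(t) = lam*log(1 + t/eps)."""
    x = y.copy()
    w = np.full_like(y, lam / eps)           # initial weights p'(0)
    for _ in range(iters):
        x = soft_threshold(y, w)             # weighted-l1 subproblem
        w_new = lam / (eps + np.abs(x))      # reweighting with p'(|x_i|)
        w = (1 - theta) * w + theta * w_new  # damped fixed-point update
    return x
```

Large entries of `y` receive ever smaller weights (so they are barely shrunk), while small entries keep large weights and are driven exactly to zero, which is the qualitative behavior the nonconvex penalty is designed to produce.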

Full Convergence of Regularized Methods for Unconstrained Optimization

In general, the sequence of points generated by an optimization algorithm may have multiple limit points. Under convexity assumptions, however, (sub)gradient methods are known to generate a convergent sequence of points. In this paper, we extend the latter property to a broader class of algorithms. Specifically, we study unconstrained optimization methods that use local quadratic models … Read more

Worst-Case Complexity of High-Order Algorithms for Pareto-Front Reconstruction

In this paper, we are concerned with a worst-case complexity analysis of a posteriori algorithms for unconstrained multiobjective optimization. Specifically, we propose an algorithmic framework that generates sets of points by means of $p$th-order models regularized with a power $p+1$ of the norm of the step. Through a tailored search procedure, several trial points are generated … Read more
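For reference, a $p$th-order model regularized with the power $p+1$ of the step norm takes, in its standard single-objective form, the shape below (a sketch; the multiobjective framework applies such models objective-wise, and $\sigma_k > 0$ denotes the regularization weight):

```latex
m_k(s) \;=\; \sum_{j=0}^{p} \frac{1}{j!}\,\nabla^j f(x_k)[s]^j
\;+\; \frac{\sigma_k}{p+1}\,\|s\|^{p+1}.
```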

Relaxations of KKT Conditions do not Strengthen Finite RLT and SDP-RLT Bounds for Nonconvex Quadratic Programs

We consider linear and semidefinite programming relaxations of nonconvex quadratic programs given by the reformulation-linearization technique (RLT relaxation), and the Shor relaxation combined with the RLT relaxation (SDP-RLT relaxation). By incorporating the first-order optimality conditions, a quadratic program can be formulated as an optimization problem with complementarity constraints. We investigate the effect of incorporating optimality … Read more
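Concretely, for a quadratic program \(\min\{\tfrac12 x^\top Q x + c^\top x : Ax \ge b,\ x \ge 0\}\), appending the first-order (KKT) conditions yields a formulation with complementarity constraints of the following standard shape (a sketch, not necessarily the paper's exact notation):

```latex
Qx + c - A^\top y - s = 0,\qquad
y \ge 0,\ s \ge 0,\qquad
y^\top (Ax - b) = 0,\qquad
s^\top x = 0,
```

together with the original constraints \(Ax \ge b,\ x \ge 0\).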

Asymptotically Fair and Truthful Allocation of Public Goods

We study the fair and truthful allocation of m divisible public items among n agents, each with distinct preferences for the items. To aggregate agents’ preferences fairly, we focus on finding a core solution. For divisible items, a core solution always exists and can be calculated by maximizing the Nash welfare objective. However, such a … Read more
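Maximizing the Nash welfare objective over divisible public items can be sketched with a simple multiplicative fixed-point iteration (illustrative only; this is not the mechanism studied in the paper, and the update rule is an assumption): with `U[i, j]` agent \(i\)'s value for item \(j\), maximize \(\sum_i \log(U_i^\top x)\) over the simplex.

```python
import numpy as np

def nash_welfare_allocation(U, iters=200):
    """Illustrative fixed-point iteration for max_x sum_i log(U_i @ x)
    over the simplex (the Nash welfare objective whose maximizer is a
    core allocation of divisible public items)."""
    n, m = U.shape
    x = np.full(m, 1.0 / m)        # start uniform on the simplex
    for _ in range(iters):
        utils = U @ x              # each agent's utility U_i @ x
        grad = U.T @ (1.0 / utils) # gradient of sum_i log(U_i @ x)
        x = x * grad / n           # multiplicative update; preserves sum(x) = 1
    return x
```

The update preserves the simplex exactly, since \(\sum_j x_j \cdot \tfrac1n \sum_i u_{ij}/u_i = \tfrac1n \sum_i (U_i^\top x)/u_i = 1\), and its fixed points satisfy the KKT conditions of the Nash welfare problem.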

Gradient Methods with Online Scaling Part I. Theoretical Foundations

This paper establishes the theoretical foundations of the online scaled gradient methods (OSGM), a framework that utilizes online learning to adapt stepsizes and provably accelerate first-order methods. OSGM quantifies the effectiveness of a stepsize by a feedback function motivated by a convergence measure and uses the feedback to adjust the stepsize through an online learning … Read more