Second-Order Optimality Conditions for Bilevel Optimization Problems Using Parabolic Directional Derivatives

This paper studies second-order properties of a class of inequality-constrained bilevel programming problems. In the seminal paper of Dempe (1992), first-order optimality conditions for bilevel optimization problems were derived using the first-order directional derivative of the optimal solution function of the lower-level problem. In this work, we prove that …
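
For reference, the parabolic second-order directional derivative underlying this approach is commonly defined (in the Ben-Tal–Zowe sense; the paper's exact notation may differ) as

\[
f''(x; d, w) \;=\; \lim_{t \downarrow 0} \frac{f\!\left(x + t d + \tfrac{1}{2} t^2 w\right) - f(x) - t\, f'(x; d)}{\tfrac{1}{2} t^2},
\]

where \(d\) is the primary direction and \(w\) a parabolic perturbation of it.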

Convergence of the Frank-Wolfe Algorithm for Monotone Variational Inequalities

We consider the Frank-Wolfe algorithm for solving variational inequalities over compact convex sets, assuming a monotone \(C^1\) operator and vanishing, nonsummable step sizes. We introduce a continuous-time interpolation of the discrete iteration and use tools from dynamical systems theory to analyze its asymptotic behavior. This allows us to derive convergence results for the original discrete …
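
A minimal sketch of the discrete iteration being analyzed, with no convergence claim attached: the feasible set (a box), the affine operator F, and the step-size rule \(\gamma_k = 1/(k+2)\) are illustrative assumptions, not the paper's exact setup.

    import numpy as np

    # Frank-Wolfe for the variational inequality: find x* in X with
    #   <F(x*), y - x*> >= 0 for all y in X,
    # over a compact convex set X, here the box [lo, hi]^n (illustrative).

    def linear_minimization_oracle(g, lo, hi):
        # argmin_{y in [lo, hi]^n} <g, y>: each coordinate sits at a box corner
        return np.where(g > 0, lo, hi)

    def frank_wolfe_vi(F, x0, lo, hi, iters=10_000):
        x = x0.copy()
        for k in range(iters):
            v = linear_minimization_oracle(F(x), lo, hi)   # FW vertex
            gamma = 1.0 / (k + 2)                          # vanishing, nonsummable
            x = (1 - gamma) * x + gamma * v
        return x

    # Monotone affine operator F(x) = A x + b: skew-symmetric part plus a
    # small positive definite part (A + A^T = 0.2 I), illustrative data.
    rng = np.random.default_rng(0)
    S = rng.standard_normal((5, 5))
    A = S - S.T + 0.1 * np.eye(5)
    b = rng.standard_normal(5)
    print(frank_wolfe_vi(lambda x: A @ x + b, np.zeros(5), -1.0, 1.0))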

Negative Momentum for Convex-Concave Optimization

This paper revisits momentum in the context of min-max optimization. Momentum is a celebrated mechanism for accelerating gradient dynamics in settings such as convex minimization, but applying it directly in min-max optimization can make the gradient dynamics diverge. Surprisingly, Gidel et al. (2019) showed that negative momentum can restore convergence. However, despite these promising initial results and …
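
A toy illustration of the phenomenon on the bilinear problem \(\min_x \max_y \; x y\): alternating gradient descent-ascent is used, since that is the setting where negative momentum is known to help on bilinear games (Gidel et al., 2019); the step size and momentum value below are illustrative, not the analyzed optimal choices.

    import numpy as np

    # min_x max_y  x * y, saddle point at (0, 0). Alternating gradient
    # descent-ascent without momentum only cycles around the saddle; a
    # negative heavy-ball coefficient beta makes the iterates contract.

    def alt_gda(beta, eta=0.5, iters=500):
        x, y = 1.0, 1.0
        x_prev, y_prev = x, y
        for _ in range(iters):
            x_new = x - eta * y + beta * (x - x_prev)      # descent step in x
            y_new = y + eta * x_new + beta * (y - y_prev)  # ascent step in y
            x_prev, y_prev, x, y = x, y, x_new, y_new
        return np.hypot(x, y)   # distance to the saddle point

    print("beta =  0.0 :", alt_gda(0.0))    # bounded cycling, no convergence
    print("beta = -0.5 :", alt_gda(-0.5))   # negative momentum: converges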

A Nesterov-Accelerated Primal-Dual Splitting Algorithm for Convex Nonsmooth Optimization

We investigate the integration of Nesterov-type acceleration into primal-dual methods for structured convex optimization. While proximal splitting algorithms efficiently handle composite problems of the form \(\min_x f(x) + g(x) + h(Kx)\), accelerating their convergence with respect to the smooth term \(f\) is notoriously challenging due to the rotational dynamics in the primal-dual space. In this …
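
For context, a minimal non-accelerated Condat-Vũ-style primal-dual iteration for \(\min_x f(x) + g(x) + h(Kx)\); the concrete instance below (quadratic \(f\), \(\ell_1\) terms for \(g\) and \(h\), first-difference \(K\)) and the step sizes are illustrative, not the paper's accelerated scheme.

    import numpy as np

    # Condat-Vu splitting for  min_x f(x) + g(x) + h(Kx)  with
    # f(x) = 0.5*||Ax - b||^2, g = lam1*||.||_1, h = lam2*||.||_1,
    # and K the first-difference operator (all illustrative choices).

    rng = np.random.default_rng(1)
    n = 50
    A = rng.standard_normal((30, n)); b = rng.standard_normal(30)
    K = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]     # (Kx)_i = x_{i+1} - x_i
    lam1, lam2 = 0.1, 0.1

    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    L = np.linalg.norm(A.T @ A, 2)               # Lipschitz constant of grad f
    sigma = 0.5
    tau = 1.0 / (L / 2 + sigma * np.linalg.norm(K, 2) ** 2)  # 1/tau - sigma*||K||^2 >= L/2

    x = np.zeros(n); y = np.zeros(K.shape[0])
    for _ in range(2000):
        grad = A.T @ (A @ x - b)
        x_new = soft(x - tau * (grad + K.T @ y), tau * lam1)         # primal prox
        y = np.clip(y + sigma * (K @ (2 * x_new - x)), -lam2, lam2)  # dual prox
        x = x_new

    print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2
          + lam1 * np.abs(x).sum() + lam2 * np.abs(K @ x).sum())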

A unified framework for inexact adaptive stepsizes in gradient, conjugate gradient, and quasi-Newton methods for strictly convex quadratic optimization

Inexact adaptive stepsizes for the conjugate gradient method and the quasi-Newton method have rarely been studied. The exact stepsizes used in the gradient, conjugate gradient, and quasi-Newton methods for strictly convex quadratic optimization admit a unified framework, whereas a unified framework for inexact adaptive stepsizes in the gradient, conjugate gradient …
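
A sketch of the shared exact stepsize on a strictly convex quadratic \(f(x) = \tfrac{1}{2} x^T A x - b^T x\): for any search direction \(d_k\), exact line search gives \(\alpha_k = -g_k^T d_k / (d_k^T A d_k)\), with \(d_k = -g_k\) (gradient method), a conjugate direction (CG, which needs the previous direction and is omitted below for brevity), or \(-H_k g_k\) (quasi-Newton). The code is illustrative.

    import numpy as np

    # Unified exact stepsize alpha_k = -(g_k . d_k)/(d_k . A d_k) on
    # f(x) = 0.5 x^T A x - b^T x, once a search direction rule is fixed.

    def exact_linesearch_descent(A, b, direction_rule, iters=100):
        x = np.zeros(b.size)
        for _ in range(iters):
            g = A @ x - b                      # gradient of the quadratic
            d = direction_rule(g)              # method-specific direction
            alpha = -(g @ d) / (d @ A @ d)     # unified exact stepsize
            x = x + alpha * d
        return x

    rng = np.random.default_rng(2)
    M = rng.standard_normal((8, 8))
    A = M @ M.T + 8 * np.eye(8)                # symmetric positive definite
    b = rng.standard_normal(8)

    x_gd = exact_linesearch_descent(A, b, lambda g: -g)                      # gradient
    x_qn = exact_linesearch_descent(A, b, lambda g: -np.linalg.solve(A, g))  # ideal quasi-Newton
    print(np.linalg.norm(A @ x_gd - b), np.linalg.norm(A @ x_qn - b))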

Accuracy Certificates for Convex Optimization at Accelerated Rates via Primal-Dual Averaging

Many works in convex optimization provide rates for achieving a small primal gap. However, this quantity is typically not computable in practice, since it requires knowledge of the optimal value. In this work, we show that solving a regularized surrogate with algorithms based on simple primal-dual averaging provides non-asymptotic convergence guarantees for a computable optimality certificate. We first analyze primal and dual methods based …
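
As background, a classical computable certificate from averaging (not the paper's accelerated method): for any sequence \(x_t\) with subgradients \(g_t\) over a compact set \(X\), convexity gives \(f(\bar{x}_T) - \min_X f \le \max_{u \in X} \frac{1}{T}\sum_t \langle g_t, x_t - u\rangle\), and the maximum is a linear optimization over \(X\). A sketch with a subgradient method on the simplex (all problem data illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 20
    A = rng.standard_normal((40, n)); b = rng.standard_normal(40)
    f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    grad = lambda x: A.T @ (A @ x - b)

    def project_simplex(v):
        # Euclidean projection onto {x >= 0, sum x = 1}
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u - css / np.arange(1, v.size + 1) > 0)[0][-1]
        return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

    x = np.full(n, 1.0 / n)
    xsum = np.zeros(n); lin_acc = np.zeros(n); inner_acc = 0.0
    T = 5000
    for t in range(1, T + 1):
        g = grad(x)
        inner_acc += g @ x; lin_acc += g; xsum += x
        x = project_simplex(x - (0.1 / np.sqrt(t)) * g)

    # max_u (1/T) sum_t <g_t, x_t - u>; over the simplex the linear
    # minimization in u is attained at a vertex, i.e. the smallest coordinate.
    certificate = (inner_acc - lin_acc.min()) / T   # >= f(xbar) - min_X f >= 0
    print("f(xbar):", f(xsum / T), " certificate:", certificate)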

Decomposition-Based Reformulation of Nonseparable Quadratic Expressions in Convex MINLP

In this paper, we present a reformulation technique for convex mixed-integer nonlinear programming (MINLP) problems with nonseparable quadratic terms. For each nondiagonal positive semidefinite matrix defining a quadratic expression in the problem, we show that an eigenvalue or \(LDL^T\) decomposition can be performed to transform the quadratic expression into convex, additively separable constraints. The reformulated problem …
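
A small numeric illustration of the idea for the eigendecomposition variant (illustrative data; the \(LDL^T\) variant is analogous): with \(Q = V \Lambda V^T\) and auxiliary variables \(y = V^T x\), the nonseparable term \(x^T Q x\) becomes the separable sum \(\sum_i \lambda_i y_i^2\).

    import numpy as np

    # Nonseparable convex quadratic x^T Q x  ->  separable sum_i lam_i*y_i^2
    # via Q = V diag(lam) V^T and the linear change of variables y = V^T x.

    rng = np.random.default_rng(4)
    M = rng.standard_normal((5, 5))
    Q = M @ M.T                      # positive semidefinite, nondiagonal
    lam, V = np.linalg.eigh(Q)       # Q = V @ diag(lam) @ V.T

    x = rng.standard_normal(5)
    y = V.T @ x                      # auxiliary variables added to the MINLP
    print(x @ Q @ x, np.sum(lam * y ** 2))   # equal up to rounding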

An Inexact Trust-Region Method for Structured Nonsmooth Optimization with Application to Risk-Averse Stochastic Programming

We develop a trust-region method for efficiently minimizing the sum of a smooth function, a nonsmooth convex function, and the composition of a finite-valued support function with a smooth function. Optimization problems with this structure arise in numerous applications including risk-averse stochastic programming and subproblems for nonsmooth penalty nonlinear programming methods. Our method permits the …
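
In symbols, the structured problems treated here take the form (notation ours)

\[
\min_{x \in \mathbb{R}^n} \; f(x) + \varphi(x) + h(c(x)),
\qquad h(y) = \sup_{v \in \mathcal{V}} \langle v, y \rangle,
\]

with \(f\) and \(c\) smooth, \(\varphi\) nonsmooth and convex, and \(\mathcal{V}\) a nonempty compact convex set, so that the support function \(h\) is finite-valued.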

A Gradient Sampling Algorithm for Noisy Nonsmooth Optimization

An algorithm is proposed, analyzed, and tested for minimizing locally Lipschitz objective functions that may be nonconvex and/or nonsmooth. The algorithm, built upon the gradient-sampling methodology, is designed specifically for cases in which objective-function and generalized-gradient values may be subject to bounded, uncontrollable errors. Similarly to state-of-the-art guarantees for noisy smooth optimization …
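
A minimal sketch of one plain gradient-sampling step, without the paper's noise-handling modifications: gradients are sampled near the iterate, the minimum-norm element of their convex hull is taken as a search direction via a small QP (solved here with scipy's SLSQP), and a backtracking line search is applied. The test function, sampling radius, and line-search constants are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: np.abs(x[0]) + 10 * np.abs(x[1])           # nonsmooth test function
    grad = lambda x: np.array([np.sign(x[0]), 10 * np.sign(x[1])])

    def gs_step(x, eps=0.1, m=10, rng=np.random.default_rng(5)):
        pts = x + eps * rng.uniform(-1, 1, size=(m, x.size))  # sample near x
        G = np.vstack([grad(p) for p in pts] + [grad(x)])
        # min-norm element of conv{g_i}: min ||G^T w||^2, w >= 0, sum w = 1
        res = minimize(lambda w: np.sum((G.T @ w) ** 2),
                       np.full(len(G), 1.0 / len(G)),
                       jac=lambda w: 2 * G @ (G.T @ w),
                       bounds=[(0, None)] * len(G),
                       constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
        d = -(G.T @ res.x)                                    # search direction
        t = 1.0
        while f(x + t * d) > f(x) - 0.5 * t * (d @ d) and t > 1e-12:
            t *= 0.5                                          # backtracking
        return x + t * d

    x = np.array([3.0, -2.0])
    for _ in range(30):
        x = gs_step(x)
    print(x, f(x))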

Bregman Regularized Proximal Point Methods for Computing Projected Solutions of Quasi-equilibrium Problems

In this paper, we propose two Bregman regularized proximal point methods that provide flexibility in computing projected solutions of quasi-equilibrium problems. Each iteration of either method requires one Bregman projection onto the feasible set and the solution of a regularized equilibrium problem. Under standard assumptions, we prove that the methods are well-defined and that the sequences they generate converge to a …
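
As background, a minimal Bregman proximal point sketch for the special case of convex minimization over the simplex, i.e. the equilibrium problem with bifunction \(f(x, y) = F(y) - F(x)\): each regularized subproblem \(\operatorname{argmin}_y F(y) + \tfrac{1}{\lambda} D(y, x_k)\), with the entropy (KL) Bregman distance \(D\), is solved approximately by a few entropic mirror steps. All names and parameters below are illustrative; this is not the paper's method for quasi-equilibrium problems.

    import numpy as np

    # Bregman proximal point for min_{x in simplex} F(x) with the KL
    # Bregman distance D(y, x) = sum_i y_i log(y_i / x_i); each subproblem
    # is solved inexactly by entropic mirror-descent steps (illustrative).

    rng = np.random.default_rng(6)
    A = rng.standard_normal((30, 10)); b = rng.standard_normal(30)
    F = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
    gradF = lambda x: A.T @ (A @ x - b)

    def bregman_prox_step(xk, lam=1.0, inner=50, eta=0.05):
        y = xk.copy()
        for _ in range(inner):
            g = gradF(y) + (np.log(y) - np.log(xk)) / lam  # subproblem gradient
            y = y * np.exp(-eta * g)                        # entropic mirror step
            y = np.maximum(y, 1e-12); y /= y.sum()          # stay in the simplex
        return y

    x = np.full(10, 0.1)
    for _ in range(40):
        x = bregman_prox_step(x)
    print("F(x) =", F(x), " sum(x) =", x.sum())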