A speed-up strategy for gradient methods

In this paper, we propose a new acceleration strategy for gradient-based methods applied to strictly convex Quadratic Programming (QP) problems. The strategy consists of performing, at selected iterations, minimization steps along alternative descent directions or even within low-dimensional affine subspaces. In particular, considering the contributions of the linear and quadratic parts of the objective function … Read more
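
As a rough illustration of the kind of step involved (a sketch only; the paper's actual direction selection and schedule are not reproduced here), the following Python snippet alternates exact steepest-descent steps on \(f(x)=\tfrac12 x^\top A x + b^\top x\) with an exact minimization over a two-dimensional affine subspace. The schedule `subspace_every` and the choice of spanning directions are illustrative assumptions.

```python
import numpy as np

def subspace_step(A, b, x, D):
    """Exactly minimize f(x + D c) over the subspace spanned by the
    columns of D. For a strictly convex quadratic this reduces to a
    small linear system (D^T A D) c = -D^T (A x + b)."""
    g = A @ x + b                                  # gradient at x
    c = np.linalg.solve(D.T @ A @ D, -D.T @ g)
    return x + D @ c

def gradient_qp(A, b, x0, iters=100, subspace_every=5):
    """Hypothetical driver: Cauchy (exact line-search) gradient steps,
    with an occasional 2D subspace step spanned by the current gradient
    and the previous displacement."""
    x, x_prev = x0.copy(), x0.copy()
    for k in range(iters):
        g = A @ x + b
        if k > 0 and k % subspace_every == 0:
            D = np.column_stack([g, x - x_prev])
            x, x_prev = subspace_step(A, b, x, D), x
        else:
            alpha = (g @ g) / (g @ (A @ g))        # exact step for a quadratic
            x, x_prev = x - alpha * g, x
    return x
```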

An Inexact Modified Quasi-Newton Method for Nonsmooth Regularized Optimization

We introduce iR2N, a modified proximal quasi-Newton method for minimizing the sum of a \(C^1\) function \(f\) and a lower semicontinuous, prox-bounded function \(h\); the method permits inexact evaluations of \(f\), \(\nabla f\), and the relevant proximal operators. Both \(f\) and \(h\) may be nonconvex. In applications where the proximal operator of \(h\) is not … Read more
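
For context, here is a minimal sketch of the proximal-gradient skeleton on which such methods build; the quasi-Newton model and the inexactness control that define iR2N are the paper's contribution and are not shown. The step size `nu` and the choice of the \(\ell_1\) prox are illustrative assumptions.

```python
import numpy as np

def prox_l1(z, t):
    """Closed-form proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient_step(grad_f, x, nu, prox_h=prox_l1):
    """One step of the generic model x+ in prox_{nu h}(x - nu grad f(x)).
    iR2N replaces the simple gradient model with a quasi-Newton model and
    tolerates inexact f, grad f, and prox evaluations."""
    return prox_h(x - nu * grad_f(x), nu)
```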

The Convexity Zoo: A Taxonomy of Function Classes in Optimization

The tractability of optimization problems depends critically on structural properties of the objective function. Convexity guarantees global optimality of local solutions and enables polynomial-time algorithms under mild assumptions, but many problems arising in modern applications—particularly in machine learning—are inherently nonconvex. Remarkably, a large class of such problems remains amenable to efficient optimization due to additional … Read more
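
One frequently cited instance of such additional structure (whether it appears in this particular taxonomy is an assumption) is the Polyak–Łojasiewicz (PL) inequality: a differentiable \(f\) with \(L\)-Lipschitz gradient satisfying
\[
\tfrac12\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr) \quad \text{for all } x
\]
need not be convex, yet gradient descent with step \(1/L\) converges linearly, with \(f(x_k) - f^* \le (1 - \mu/L)^k\,\bigl(f(x_0) - f^*\bigr)\).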

New Location Science Models with Applications to UAV-Based Disaster Relief

Natural and human-made disasters can cause severe devastation and claim thousands of lives worldwide. Therefore, developing efficient methods for disaster response and management is a critical task for relief teams. One of the most essential components of effective response is the rapid collection of information about affected areas, damages, and victims. More data translates into … Read more

A Proximal-Gradient Method for Solving Regularized Optimization Problems with General Constraints

We propose, analyze, and test a proximal-gradient method for solving regularized optimization problems with general constraints. The method employs a decomposition strategy to compute trial steps and uses a merit function to determine step acceptance or rejection. Under various assumptions, we establish a worst-case iteration complexity result, prove that limit points are first-order KKT points, … Read more
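
As a generic illustration of step acceptance via a merit function (a sketch only; the paper's specific merit function and decomposition strategy are not reproduced), a standard sufficient-decrease test compares the actual merit reduction against a model-predicted reduction:

```python
def accept_step(merit, x, x_trial, pred_decrease, eta=1e-4):
    """Accept the trial point if the actual merit decrease is at least
    a fraction eta of the decrease predicted by the local model.
    `merit` is a hypothetical callable returning the merit value."""
    actual = merit(x) - merit(x_trial)
    return actual >= eta * pred_decrease
```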

Primal-dual resampling for solution validation in convex stochastic programming

Suppose we wish to determine the quality of a candidate solution to a convex stochastic program in which the objective function is a statistical functional parameterized by the decision variable and known deterministic constraints may be present. Inspired by stopping criteria in primal-dual and interior-point methods, we develop cancellation theorems that characterize the convergence of … Read more
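
For orientation, a classical resampling-based optimality-gap estimate for a candidate solution is sketched below; it is not the paper's primal-dual criterion, and `solve_saa`, `eval_obj`, and `sampler` are hypothetical callables.

```python
import numpy as np

def gap_estimate(solve_saa, eval_obj, x_hat, sampler, batches=30, n=200):
    """Multiple-replication gap estimate for a candidate x_hat.
    solve_saa(xi): optimal value of the sampled approximation on batch xi.
    eval_obj(x, xi): sample-average objective of x on batch xi.
    Each replication yields UB - LB, which is nonnegative in expectation."""
    gaps = []
    for _ in range(batches):
        xi = sampler(n)                     # draw a fresh batch of scenarios
        gaps.append(eval_obj(x_hat, xi) - solve_saa(xi))
    gaps = np.array(gaps)
    return gaps.mean(), 1.96 * gaps.std(ddof=1) / np.sqrt(batches)
```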

Facial reduction for nice (and non-nice) convex programs

Consider the primal problem of minimizing the sum of two closed proper convex functions \(f\) and \(g\). If the relative interiors of the domains of \(f\) and \(g\) intersect, then the primal problem and its corresponding Fenchel dual satisfy strong duality. When these relative interiors fail to intersect, pathologies and numerical difficulties may occur. In … Read more
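
For reference, the duality statement in question, in Rockafellar's classical form, is
\[
\inf_x\,\{f(x) + g(x)\} \;=\; \sup_y\,\{-f^*(y) - g^*(-y)\},
\]
with the supremum attained, whenever \(\operatorname{ri}(\operatorname{dom} f) \cap \operatorname{ri}(\operatorname{dom} g) \neq \emptyset\).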

Stability analysis of parameterized models relative to nonconvex constraints

For solution mappings of parameterized models (such as optimization problems, variational inequalities, and generalized equations), standard stability inevitably fails as the parameter approaches the boundary of the feasible domain. One remedy is relative stability restricted to a constraint set (e.g., the feasible domain), which is our focus in this paper. We establish generalized differentiation criteria … Read more
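
One standard way to formalize stability restricted to a set (an assumption about the paper's exact notion) is calmness of the solution mapping \(S\) relative to a parameter set \(P\) at \((\bar p, \bar x)\): there exist \(L > 0\) and neighborhoods \(U \ni \bar x\), \(V \ni \bar p\) such that
\[
S(p) \cap U \;\subseteq\; S(\bar p) + L\,\|p - \bar p\|\,\mathbb{B} \quad \text{for all } p \in P \cap V .
\]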

An Elementary Proof of the Near Optimality of LogSumExp Smoothing

We consider the design of smoothings of the (coordinate-wise) max function on $\mathbb{R}^d$, measured in the infinity norm. The LogSumExp function $f(x)=\ln\bigl(\sum_{i=1}^d \exp(x_i)\bigr)$ provides a classical smoothing, differing from the max function in value by at most $\ln(d)$. We provide an elementary construction of a lower bound, establishing that every overestimating smoothing of the max function must … Read more
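
A quick numerical sanity check of the classical two-sided bound $\max_i x_i \le f(x) \le \max_i x_i + \ln(d)$, a minimal sketch using the standard max-shift trick for numerical stability:

```python
import numpy as np

def logsumexp(x):
    """Numerically stable ln(sum_i exp(x_i)) via the max-shift trick."""
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
# LSE overestimates the max by at most ln(d).
assert x.max() <= logsumexp(x) <= x.max() + np.log(x.size)
```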

Robust optimality for nonsmooth mathematical programs with equilibrium constraints under data uncertainty

We develop a unified framework for robust nonsmooth optimization problems with equilibrium constraints (UNMPEC). As a foundation, we study a robust nonsmooth nonlinear program with uncertainty in both the objective function and the inequality constraints (UNP). Using Clarke subdifferentials, we establish Karush–Kuhn–Tucker (KKT)-type necessary optimality conditions under an extended no-nonzero-abnormal-multiplier constraint qualification (ENNAMCQ). When the … Read more
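
Schematically (the precise form with equilibrium constraints and uncertainty sets is the paper's result and is not reproduced here), a Clarke-subdifferential KKT-type condition for \(\min f(x)\) subject to \(g_i(x) \le 0\) reads
\[
0 \in \partial_C f(\bar x) + \sum_i \lambda_i\,\partial_C g_i(\bar x), \qquad \lambda_i \ge 0, \quad \lambda_i\,g_i(\bar x) = 0,
\]
where the Clarke subdifferential generalizes the gradient to nonsmooth functions, e.g. \(\partial_C |\cdot|(0) = [-1, 1]\).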