A Fast Newton Method Under Local Lipschitz Smoothness

A new, fast second-order method is proposed that achieves the optimal $\mathcal{O}\left(|\log(\epsilon)|\,\epsilon^{-3/2}\right)$ complexity to obtain first-order $\epsilon$-stationary points. Crucially, this is deduced without assuming the standard global Lipschitz Hessian continuity condition, but only using an appropriate local smoothness requirement. The algorithm exploits Hessian information to compute a Newton step and a negative curvature step when … Read more
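To make the Newton/negative-curvature dichotomy concrete, here is a minimal sketch of one plausible step selection; the eigendecomposition-based test, the tolerance `curvature_tol`, and the diagonal shift are illustrative assumptions, not the paper's actual safeguards or line search.

```python
import numpy as np

def second_order_step(grad, hess, curvature_tol=1e-8):
    """Choose a Newton step or a negative-curvature step from Hessian data."""
    eigvals, eigvecs = np.linalg.eigh(hess)  # eigenvalues in ascending order
    if eigvals[0] < -curvature_tol:
        # Significant negative curvature: follow the eigenvector of the most
        # negative eigenvalue, oriented so the step is a descent direction.
        d = eigvecs[:, 0]
        return -d if grad @ d > 0 else d
    # Numerically positive semidefinite Hessian: take a Newton step on a
    # slightly shifted matrix so the linear solve is well posed.
    shift = curvature_tol - min(eigvals[0], 0.0)
    return -np.linalg.solve(hess + shift * np.eye(len(grad)), grad)
```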

An Adaptive Stochastic Dual Progressive Hedging Algorithm for Stochastic Programming

The Progressive Hedging (PH) algorithm is one of the cornerstones of large-scale stochastic programming. However, its traditional development requires that all scenario subproblems be solved at every iteration and that the probability distribution have finitely many outcomes. This paper introduces a stochastic dual PH algorithm (SDPH) to overcome these challenges. We introduce an adaptive sampling process and … Read more
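For reference, below is a minimal sketch of one classical PH iteration on a toy problem $\min_x \mathbb{E}_s[\frac{1}{2}\|x - a_s\|^2]$; the scenario data, penalty parameter, and closed-form subproblem solve are illustrative assumptions, and the paper's adaptive sampling, which avoids sweeping all scenarios, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(5, 3))   # 5 scenarios, 3 first-stage variables
p = np.full(5, 0.2)           # scenario probabilities
rho = 1.0                     # PH penalty parameter
w = np.zeros_like(a)          # dual multipliers for nonanticipativity
x_bar = np.zeros(3)

for _ in range(50):
    # Solve every scenario subproblem (closed form for this quadratic):
    #   x_s = argmin_x 0.5*||x - a_s||^2 + <w_s, x> + (rho/2)*||x - x_bar||^2
    x = (a - w + rho * x_bar) / (1.0 + rho)
    x_bar = p @ x             # implementable (probability-weighted) point
    w += rho * (x - x_bar)    # dual update enforcing consensus

print(x_bar)                  # approaches the scenario mean of a
```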

Quadratic Convex Reformulations for Multiobjective Binary Quadratic Programming

Multiobjective binary quadratic programming refers to optimization problems involving multiple quadratic – potentially non-convex – objective functions and a feasible set that includes binary constraints on the variables. In this paper, we extend the well-established Quadratic Convex Reformulation technique, originally developed for single-objective binary quadratic programs, to the multiobjective setting. We propose a branch-and-bound algorithm … Read more
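For context, the single-objective mechanism being extended rests on a standard identity (the classical QCR idea, not the paper's multiobjective construction): since $x_i^2 = x_i$ for binary variables, a diagonal perturbation leaves each objective unchanged on feasible points,

\[
x^\top Q x + c^\top x \;=\; x^\top \big(Q + \mathrm{Diag}(u)\big)\, x + (c - u)^\top x \qquad \text{for all } x \in \{0,1\}^n,
\]

and any $u$ with $u_i \ge -\lambda_{\min}(Q)$ makes $Q + \mathrm{Diag}(u)$ positive semidefinite, so the reformulated problem has a convex continuous relaxation; QCR selects the perturbation via semidefinite programming to tighten the resulting bound.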

A faster proximal-indefinite augmented Lagrangian method with O(1/k^2) convergence rate

The Augmented Lagrangian Method (ALM), first proposed in 1969, remains a vital framework in large-scale constrained optimization. This paper addresses a linearly constrained composite convex minimization problem and presents a general proximal ALM that incorporates both Nesterov acceleration and relaxed acceleration while admitting a proximal-indefinite term. Under mild assumptions (potentially without requiring prior knowledge of … Read more
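As a point of reference, a generic proximal ALM for $\min_x\, f(x) + g(x)$ subject to $Ax = b$ iterates

\[
x^{k+1} \in \arg\min_x\; f(x) + g(x) + \langle \lambda^k,\, Ax - b\rangle + \tfrac{\beta}{2}\,\|Ax - b\|^2 + \tfrac{1}{2}\,\|x - x^k\|_P^2,
\qquad
\lambda^{k+1} = \lambda^k + \beta\,(Ax^{k+1} - b),
\]

where $P$ defines the proximal metric. The "proximal-indefinite" term in the title corresponds to allowing $P$ to be indefinite; the paper's specific acceleration steps are not reproduced from this teaser.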

Negative Stepsizes Make Gradient-Descent-Ascent Converge

Efficient computation of min-max problems is a central question in optimization, learning, games, and control. Arguably the most natural algorithm is gradient-descent-ascent (GDA). However, since the 1970s, conventional wisdom has argued that GDA fails to converge even on simple problems. This failure spurred an extensive literature on modifying GDA with additional building blocks such as … Read more
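The classical failure is easy to reproduce: on the bilinear saddle problem $\min_x \max_y xy$, whose unique equilibrium is $(0,0)$, simultaneous GDA with any fixed positive stepsize spirals outward. The sketch below illustrates this; the stepsize is an arbitrary choice, and the paper's negative-stepsize schedule is not reproduced from this teaser.

```python
# Simultaneous GDA on f(x, y) = x * y: descend in x, ascend in y.
eta = 0.1
x, y = 1.0, 1.0
for _ in range(100):
    gx, gy = y, x                      # df/dx = y, df/dy = x
    x, y = x - eta * gx, y + eta * gy
    # Each step multiplies x^2 + y^2 by exactly (1 + eta^2), so the
    # iterates spiral away from the equilibrium at the origin.
print(x * x + y * y)                   # ~5.4 here, versus 0 at the solution
```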

Paving the Way for More Accessible Cancer Care in Low-Income Countries with Optimization

Cancers are a growing cause of morbidity and mortality in low-income countries. Geographic access plays a key role in both timely diagnosis and successful treatment. In areas lacking well-developed road networks, seasonal weather events can lengthen already long travel times to access care. Expanding facilities to offer cancer care is expensive and requires staffing by … Read more

Optimization over Trained (and Sparse) Neural Networks: A Surrogate within a Surrogate

In constraint learning, we use a neural network as a surrogate for part of the constraints or of the objective function of an optimization model. However, the tractability of the resulting model is heavily influenced by the size of the neural network used as a surrogate. One way to obtain a more tractable surrogate is … Read more
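For context, the standard way a trained ReLU unit $y = \max(0,\, w^\top x + b)$ enters a mixed-integer surrogate is the big-M encoding below (a classical formulation, not the paper's specific construction), assuming known bounds $L \le w^\top x + b \le U$ with $L < 0 < U$:

\[
y \;\ge\; w^\top x + b, \qquad
y \;\le\; w^\top x + b - L\,(1 - z), \qquad
y \;\le\; U z, \qquad
y \;\ge\; 0, \qquad
z \in \{0,1\}.
\]

Here $z$ selects the active ($z = 1$) or inactive ($z = 0$) branch of the ReLU. A sparser network means fewer nonzeros in each $w$ and fewer units overall, hence a smaller mixed-integer model, which is one intuition behind the tractability gains the paper pursues.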

The 1-persistency of the clique relaxation of the stable set polytope: a focus on some forbidden structures

A polytope $P\subseteq [0,1]^n$ is said to have the \emph{persistency} property if for every vector $c\in \mathbb{R}^{n}$ and every $c$-optimal point $x\in P$, there exists a $c$-optimal integer point $y\in P\cap \{0,1\}^n$ such that $x_i = y_i$ for each $i \in \{1,\dots,n\}$ with $x_i \in \{0,1\}$. In this paper, we consider a relaxation of the … Read more
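A classical point of reference (Nemhauser and Trotter's persistency theorem, not this paper's result): the edge relaxation $\{x \in [0,1]^V : x_u + x_v \le 1 \text{ for every edge } uv\}$ of the stable set polytope has the persistency property, so every variable at 0 or 1 in an optimal fractional solution keeps its value in some optimal stable set. The clique relaxation considered here replaces the edge inequalities with the stronger constraints $\sum_{v \in K} x_v \le 1$ over cliques $K$, for which persistency is the question under study.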

On image space transformations in multiobjective optimization

This paper considers monotone transformations of the objective space of multiobjective optimization problems which leave the set of efficient points invariant. Under mild assumptions, for the standard ordering cone we show that such transformations must be component-wise transformations. The same class of transformations also leaves the sets of weakly and of Geoffrion properly efficient points … Read more
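One direction of the claim is elementary: if $T(y) = (t_1(y_1), \ldots, t_m(y_m))$ with each $t_i$ strictly increasing, then

\[
y \le y' \ \text{componentwise} \iff T(y) \le T(y') \ \text{componentwise},
\]

so dominance between outcome vectors, and with it the set of efficient points, is preserved. The substance of the result is the converse: under mild assumptions, only component-wise transformations leave the efficient set invariant for the standard ordering cone.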

An adaptive single-loop stochastic penalty method for nonconvex constrained stochastic optimization

Adaptive update schemes for penalty parameters are crucial for enhancing the robustness and practical applicability of penalty methods for constrained optimization. However, in the context of general constrained stochastic optimization, additional challenges arise due to the randomness introduced by adaptive penalty parameters. To address these challenges, we propose an Adaptive Single-loop Stochastic Penalty method (AdaSSP) in … Read more
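For orientation, the classical double-loop, deterministic quadratic-penalty scheme that such methods refine is sketched below; the geometric penalty growth, fixed inner gradient loop, and toy problem are illustrative assumptions, and AdaSSP's single-loop stochastic updates and adaptive rule are not reproduced from this teaser.

```python
import numpy as np

def penalty_method(f_grad, c, c_jac, x0, rho0=1.0, growth=10.0, outer=6):
    """Minimize f subject to c(x) = 0 via min f(x) + (rho/2)*||c(x)||^2."""
    x, rho = np.asarray(x0, dtype=float), rho0
    for _ in range(outer):
        for _ in range(500):                 # crude inner gradient loop
            g = f_grad(x) + rho * c_jac(x).T @ c(x)
            x -= (0.01 / rho) * g            # stepsize shrinks as rho grows
        rho *= growth                        # tighten feasibility pressure
    return x

# Toy problem: min x0 + x1  s.t.  x0^2 + x1^2 = 2, whose solution is (-1, -1).
sol = penalty_method(
    f_grad=lambda x: np.array([1.0, 1.0]),
    c=lambda x: np.array([x[0]**2 + x[1]**2 - 2.0]),
    c_jac=lambda x: np.array([[2 * x[0], 2 * x[1]]]),
    x0=[0.5, -0.5],
)
print(sol)  # approaches (-1, -1)
```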