A Graphical Global Optimization Framework for Parameter Estimation of Statistical Models with Nonconvex Regularization Functions

Optimization problems with norm-bounding constraints appear in various applications, from portfolio optimization to machine learning, feature selection, and beyond. A widely used variant of these problems relaxes the norm-bounding constraint through Lagrangian relaxation and moves it to the objective function as a form of penalty or regularization term. A challenging class of these models uses …
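
To make the relaxation concrete (our notation, not necessarily the paper's): a norm-bounded estimation problem and its penalized counterpart read

\[
\min_{x}\; f(x)\;\;\text{s.t.}\;\;\|x\|\le t
\qquad\leadsto\qquad
\min_{x}\; f(x)+\lambda\,\|x\|,
\]

where $\lambda\ge 0$ plays the role of the Lagrange multiplier; the nonconvex class arises when the penalty is, for example, the $\ell_q$ quasi-norm $\|x\|_q^q$ with $0<q<1$ instead of a convex norm.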

A Fast Newton Method Under Local Lipschitz Smoothness

A new, fast second-order method is proposed that achieves the optimal $\mathcal{O}\left(|\log(\epsilon)|\,\epsilon^{-3/2}\right)$ complexity to obtain first-order $\epsilon$-stationary points. Crucially, this is established without assuming the standard global Lipschitz continuity condition on the Hessian, using only an appropriate local smoothness requirement. The algorithm exploits Hessian information to compute a Newton step and a negative curvature step when …
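
The Newton/negative-curvature dichotomy can be sketched as follows (a generic second-order step, not the paper's exact step-selection rule; the tolerance is our illustrative choice):

```python
import numpy as np

def second_order_step(g, H, curvature_tol=1e-8):
    """Generic sketch: Newton step if the Hessian is safely positive
    definite, otherwise a negative-curvature direction (illustrative
    only; not the paper's exact algorithm)."""
    eigvals, eigvecs = np.linalg.eigh(H)  # eigenvalues in ascending order
    if eigvals[0] > curvature_tol:
        # Positive definite Hessian: take the standard Newton step.
        return -np.linalg.solve(H, g)
    # Otherwise follow the eigenvector of the most negative eigenvalue,
    # signed so that the step is a descent direction for the gradient.
    d = eigvecs[:, 0]
    return -d if g @ d > 0 else d
```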

An Adaptive Stochastic Dual Progressive Hedging Algorithm for Stochastic Programming

The Progressive Hedging (PH) algorithm is one of the cornerstones of large-scale stochastic programming. However, its traditional development requires that all scenario subproblems be solved at every iteration and that the probability distribution have finitely many outcomes. This paper introduces a stochastic dual PH algorithm (SDPH) to overcome these challenges. We introduce an adaptive sampling process and …
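
For reference, the classical PH iteration of Rockafellar and Wets, which SDPH builds on, updates each scenario $s$ (with probability $p_s$) and its dual weights $w_s$ via

\[
x_s^{k+1} \in \arg\min_x\; f_s(x) + \langle w_s^k, x\rangle + \tfrac{\rho}{2}\|x-\bar{x}^k\|^2,
\qquad
\bar{x}^{k+1} = \sum_s p_s\, x_s^{k+1},
\qquad
w_s^{k+1} = w_s^k + \rho\,(x_s^{k+1}-\bar{x}^{k+1}),
\]

and solving every scenario subproblem at every iteration is exactly the cost that adaptive sampling aims to avoid.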

Quadratic Convex Reformulations for Multiobjective Binary Quadratic Programming

Multiobjective binary quadratic programming refers to optimization problems involving multiple quadratic – potentially non-convex – objective functions and a feasible set that includes binary constraints on the variables. In this paper, we extend the well-established Quadratic Convex Reformulation technique, originally developed for single-objective binary quadratic programs, to the multiobjective setting. We propose a branch-and-bound algorithm …
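
The single-objective QCR idea being extended here is standard: for binary $x$ one has $x_i^2 = x_i$, so for any $u\in\mathbb{R}^n$

\[
x^\top Q x + c^\top x \;=\; x^\top\bigl(Q + \mathrm{Diag}(u)\bigr)x + (c-u)^\top x
\quad\text{for all } x\in\{0,1\}^n,
\]

and $u$ can be chosen (e.g., via a semidefinite program) so that $Q + \mathrm{Diag}(u)\succeq 0$, making the continuous relaxation convex. Doing this consistently across several objectives is the question the multiobjective setting raises.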

A double-accelerated proximal augmented Lagrangian method with applications in signal reconstruction

The Augmented Lagrangian Method (ALM), first proposed in 1969, remains a vital framework for large-scale constrained optimization. This paper addresses a linearly constrained composite convex minimization problem and presents a general proximal ALM that incorporates both Nesterov acceleration and relaxed acceleration, while allowing indefinite proximal terms. Under mild assumptions (potentially without requiring prior knowledge of …
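
For orientation, the classical ALM iteration for $\min_x f(x)$ s.t. $Ax=b$, on which the proposed variants build, is

\[
x^{k+1} \in \arg\min_x\; f(x) + \langle \lambda^k, Ax-b\rangle + \tfrac{\beta}{2}\|Ax-b\|^2,
\qquad
\lambda^{k+1} = \lambda^k + \beta\,(Ax^{k+1}-b);
\]

the proximal variant adds a (possibly indefinite) proximal term to the $x$-subproblem and accelerates both the primal and dual updates.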

Negative Stepsizes Make Gradient-Descent-Ascent Converge

Efficient computation of min-max problems is a central question in optimization, learning, games, and controls. Arguably the most natural algorithm is gradient-descent-ascent (GDA). However, since the 1970s, conventional wisdom has argued that GDA fails to converge even on simple problems. This failure spurred an extensive literature on modifying GDA with additional building blocks such as …
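
The failure mode in question is easy to reproduce; below is a minimal sketch of vanilla simultaneous GDA on the bilinear saddle problem $\min_x \max_y\, xy$ (the negative-stepsize remedy of the paper is not reproduced here):

```python
# Vanilla simultaneous GDA on min_x max_y x*y, whose unique saddle
# point is (0, 0). With any constant positive stepsize the iterates
# spiral outward, illustrating the classical divergence.
x, y, eta = 1.0, 1.0, 0.1
for _ in range(100):
    gx, gy = y, x                      # grad_x(x*y) = y, grad_y(x*y) = x
    x, y = x - eta * gx, y + eta * gy  # descend in x, ascend in y
print(x, y)  # the norm has grown by a factor of (1 + eta**2) ** 50
```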

Paving the Way for More Accessible Cancer Care in Low-Income Countries with Optimization

Cancers are a growing cause of morbidity and mortality in low-income countries. Geographic access plays a key role in both timely diagnosis and successful treatment. In areas lacking well-developed road networks, seasonal weather events can lengthen already long travel times to access care. Expanding facilities to offer cancer care is expensive and requires staffing by …

Optimization over Trained (and Sparse) Neural Networks: A Surrogate within a Surrogate

We can approximate a constraint or an objective function that is uncertain or nonlinear with a neural network that we embed in the optimization model. This approach, which is known as constraint learning, faces the challenge that optimization models with neural network surrogates are harder to solve. Such difficulties have motivated studies on model reformulation, …
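
A common source of the difficulty: each ReLU unit $y=\max(0,\, w^\top x + b)$ in the embedded network is typically encoded with a binary variable and big-M constraints, e.g., given bounds $L \le w^\top x + b \le U$,

\[
y \ge w^\top x + b,\qquad
y \le w^\top x + b - L(1-z),\qquad
y \le Uz,\qquad
y \ge 0,\qquad
z\in\{0,1\},
\]

so the surrogate model grows by one binary variable per neuron (a standard encoding in this line of work, not necessarily the paper's formulation).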

The 1-persistency of the clique relaxation of the stable set polytope: a focus on some forbidden structures

A polytope $P\subseteq [0,1]^n$ is said to have the \emph{persistency} property if for every vector $c\in \mathbb{R}^{n}$ and every $c$-optimal point $x\in P$, there exists a $c$-optimal integer point $y\in P\cap \{0,1\}^n$ such that $x_i = y_i$ for each $i \in \{1,\dots,n\}$ with $x_i \in \{0,1\}$. In this paper, we consider a relaxation of the …

On image space transformations in multiobjective optimization

This paper considers monotone transformations of the objective space of multiobjective optimization problems which leave the set of efficient points invariant. Under mild assumptions, for the standard ordering cone we show that such transformations must be component-wise transformations. The same class of transformations also leaves the sets of weakly and of Geoffrion properly efficient points …
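
For instance (our illustration, assuming the standard ordering cone $\mathbb{R}^m_+$): a component-wise map $T(y) = (\varphi_1(y_1),\dots,\varphi_m(y_m))$ with each $\varphi_i$ strictly increasing preserves the component-wise order and hence maps efficient points to efficient points; the result above says that, under mild assumptions, only maps of this form have the invariance property.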