Learning to Choose Branching Rules for Nonconvex MINLPs

Outer-approximation-based branch-and-bound is a common algorithmic framework for solving mixed-integer nonlinear programs (MINLPs) to global optimality, and branching-variable selection critically influences overall performance. In modern global MINLP solvers, it is unclear whether branching on fractional integer variables should be prioritized over spatial branching on (potentially continuous) variables that appear in violated nonconvex constraints, with different solvers …
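
As a toy illustration of the prioritization question, a hybrid rule could score each branching candidate by a weighted sum of integer fractionality and nonconvex-constraint violation. Everything below (the dictionaries, weights, and additive score) is a hypothetical sketch, not any solver's actual rule; real solvers combine this with pseudocosts, reliability branching, and bound information.

```python
def select_branching_variable(fractionality, violation, w_int=1.0, w_spatial=1.0):
    """Pick the candidate maximizing  w_int * fractionality + w_spatial * violation.

    fractionality[j]: distance of integer variable j's relaxation value to the
    nearest integer; violation[j]: worst nonconvex-constraint violation
    attributable to variable j.  Purely illustrative score and weights."""
    best_var, best_score = None, float("-inf")
    for j in set(fractionality) | set(violation):
        score = (w_int * fractionality.get(j, 0.0)
                 + w_spatial * violation.get(j, 0.0))
        if score > best_score:
            best_var, best_score = j, score
    return best_var
```

Shifting `w_spatial` relative to `w_int` is exactly the knob the abstract describes: it trades off branching on fractional integers against spatial branching on violated nonconvexities.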

Objective-Function Free Multi-Objective Optimization: Rate of Convergence and Performance of an Adagrad-like Algorithm

We propose an Adagrad-like algorithm for unconstrained multi-objective optimization that relies only on the computation of a common descent direction. Unlike classical local algorithms for multi-objective optimization, our approach does not rely on the dominance property to accept new iterates, which allows for a flexible, objective-function-free optimization framework. New points are obtained using an …
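
For two objectives, the common descent direction has a closed form: the negative of the minimum-norm element of the convex hull of the two gradients. The sketch below combines it with a hypothetical Adagrad-like coordinate scaling; the paper's exact update may differ, and for more than two objectives the min-norm element requires a small QP.

```python
import numpy as np

def common_descent_direction(g1, g2):
    # Min-norm element of conv{g1, g2}; its negative is a common descent
    # direction.  t minimizes ||t*g1 + (1-t)*g2||^2 over t in [0, 1].
    diff = g2 - g1
    denom = diff @ diff
    t = 0.5 if denom < 1e-16 else float(np.clip((g2 @ diff) / denom, 0.0, 1.0))
    return t * g1 + (1.0 - t) * g2

def adagrad_mo(grads, x0, eta=0.5, iters=500, eps=1e-8):
    # No objective values are ever evaluated, only gradients -- matching the
    # "objective-function-free" idea; step sizes are scaled Adagrad-style.
    x = np.asarray(x0, dtype=float).copy()
    acc = np.zeros_like(x)                    # per-coordinate accumulator
    for _ in range(iters):
        d = common_descent_direction(*(g(x) for g in grads))
        acc += d * d
        x -= eta * d / np.sqrt(acc + eps)
    return x
```

On the bi-objective problem $f_1 = \|x-a\|^2$, $f_2 = \|x-b\|^2$ the iterates approach the Pareto set, which is the segment between $a$ and $b$.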

An adaptive line-search-free multiobjective gradient method and its iteration-complexity analysis

This work introduces an Adaptive Line-Search-Free Multiobjective Gradient (AMG) method for solving smooth multiobjective optimization problems. The proposed approach automatically adjusts stepsizes based on steepest descent directions, promoting robustness with respect to stepsize choice while maintaining low computational cost. The method is specifically tailored to the multiobjective setting and does not rely on function evaluations, …
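
A minimal single-objective stand-in for the line-search-free idea is a stepsize adapted from observed gradient variation (a local inverse-Lipschitz estimate in the style of Malitsky and Mishchenko), which likewise needs no function evaluations. This is only an illustration of the mechanism; the paper's multiobjective AMG rule is different.

```python
import numpy as np

def adgd(grad, x0, iters=200, alpha0=1e-3):
    # Line-search-free adaptive gradient method: the stepsize grows slowly
    # but is capped by 0.5 * ||x_k - x_{k-1}|| / ||g_k - g_{k-1}||, a local
    # estimate of 1/(2L).  No objective values are ever computed.
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    alpha = alpha0
    x = x_prev - alpha * g_prev
    for _ in range(iters):
        g = grad(x)
        den = np.linalg.norm(g - g_prev)
        if den > 1e-16:
            alpha = min(np.sqrt(2.0) * alpha,
                        0.5 * np.linalg.norm(x - x_prev) / den)
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x
```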

On Approximate Computation of Critical Points

We show that computing even very coarse approximations of critical points is intractable for simple classes of nonconvex functions. More concretely, we prove that if there exists a polynomial-time algorithm that takes as input a polynomial in \(n\) variables of constant degree (as low as three) and outputs a point whose gradient has Euclidean norm …

Asymptotically tight Lagrangian dual of smooth nonconvex problems via improved error bound of Shapley-Folkman Lemma

In convex geometry, the Shapley–Folkman Lemma asserts that the nonconvexity of a Minkowski sum of $n$-dimensional bounded nonconvex sets does not accumulate once the number of summands exceeds the dimension $n$, so that the sum becomes approximately convex. Originally published by Starr in the context of quasi-equilibria in nonconvex economic market models, the lemma …
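
For reference, one standard qualitative form of the lemma reads as follows; the paper's contribution is an improved quantitative error bound, which this statement does not capture.

```latex
% Shapley–Folkman Lemma (qualitative form): for nonempty bounded sets
% S_1, ..., S_m \subseteq R^n, any point of the convexified sum decomposes
% so that all but at most n summands are taken from the original sets.
\[
  x \in \sum_{i=1}^{m} \operatorname{conv}(S_i)
  \;\Longrightarrow\;
  \exists\, x_i \in \operatorname{conv}(S_i):\quad
  x = \sum_{i=1}^{m} x_i
  \;\text{ and }\;
  \#\{\, i : x_i \notin S_i \,\} \le n .
\]
```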

Riemannian optimization with finite-difference gradient approximations

Derivative-free Riemannian optimization (DFRO) aims to minimize an objective function using only function evaluations, under the constraint that the decision variables lie on a Riemannian manifold. The rapid increase in problem dimensions over the years calls for computationally cheap DFRO algorithms, that is, algorithms requiring as few function evaluations and retractions as possible. We propose …
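
A basic instance of the idea, assuming the unit sphere $S^{n-1}$ with the metric-projection retraction: differentiate the objective along an orthonormal tangent basis by central differences, using $2(n-1)$ function evaluations. This is an illustrative sketch, not the paper's scheme or evaluation budget.

```python
import numpy as np

def retract(x, v):
    """Metric-projection retraction on the unit sphere."""
    y = x + v
    return y / np.linalg.norm(y)

def fd_riemannian_grad(f, x, h=1e-5):
    """Central-difference estimate of the Riemannian gradient of f at a unit
    vector x, built from function evaluations only."""
    n = x.size
    P = np.eye(n) - np.outer(x, x)       # orthogonal projector onto T_x S^{n-1}
    U, _, _ = np.linalg.svd(P)
    E = U[:, :n - 1]                     # orthonormal tangent basis (singular value 1)
    g = np.zeros(n)
    for i in range(n - 1):
        e = E[:, i]
        g += (f(retract(x, h * e)) - f(retract(x, -h * e))) / (2 * h) * e
    return g
```

For the Rayleigh quotient $f(x) = x^\top A x$ on the sphere, the estimate matches the analytic Riemannian gradient $2(Ax - (x^\top A x)\,x)$ to finite-difference accuracy.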

Iteration complexity of the Difference-of-Convex Algorithm for unconstrained optimization: a simple proof

We propose a simple proof of the worst-case iteration complexity of the Difference-of-Convex functions Algorithm (DCA) for unconstrained minimization, showing that the norms of the objective function's gradients at the iterates converge to zero like $o(1/k)$. A small example is also provided indicating that the rate cannot …
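
The DCA iteration itself is short: writing $f = g - h$ with $g, h$ convex, each step linearizes $h$ at the current point and minimizes the resulting convex model, i.e. solves $\nabla g(x_{k+1}) = \nabla h(x_k)$. A minimal sketch with a user-supplied exact subproblem solver (a hypothetical interface, not a library API):

```python
import numpy as np

def dca(grad_h, solve_g, x0, iters=60):
    """Generic DCA loop for f = g - h with g, h convex.

    solve_g(y) must return argmin_x g(x) - <y, x>, i.e. (grad g)^{-1}(y),
    so each iteration is x_{k+1} = solve_g(grad_h(x_k))."""
    x = x0
    for _ in range(iters):
        x = solve_g(grad_h(x))
    return x
```

Example: for $f(x) = x^4/4 - x^2/2$ take $g = x^4/4$, $h = x^2/2$; then $\nabla h(x) = x$ and $(\nabla g)^{-1}$ is the cube root, so the iteration is $x_{k+1} = x_k^{1/3}$, which converges to the critical point $x = 1$ from $x_0 = 0.5$.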

Non-convex stochastic compositional optimization under heavy-tailed noise

This paper investigates non-convex stochastic compositional optimization under heavy-tailed noise, where the stochastic noise exhibits bounded $p$th moment with $p\in(1,2]$. The main challenges arise from biased gradient estimates of the objective and the violation of the standard bounded-variance assumption. To address these issues, we propose a generic algorithm framework of Normalized Stochastic Compositional Gradient methods …
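
A simplified sketch of a normalized compositional update for $\min_x f(g(x))$: track the inner mapping with a moving average (reducing the bias of nested sampling), form the chain-rule gradient estimate, and normalize the step, the standard remedy when only a bounded $p$th moment is available. All names, the Student-$t$ noise model, and the decaying stepsize are illustrative assumptions, not the paper's exact framework.

```python
import numpy as np

def nscg(outer_grad, inner, inner_jac, x0, eta=0.1, beta=0.5, iters=800,
         noise=0.0, rng=None):
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    u = inner(x)                               # running estimate of g(x)
    for t in range(iters):
        sample = inner(x)
        if noise:
            # heavy-tailed perturbation: Student-t has infinite variance for df <= 2
            sample = sample + noise * rng.standard_t(df=3, size=u.shape)
        u = beta * u + (1.0 - beta) * sample   # moving average of inner samples
        v = inner_jac(x).T @ outer_grad(u)     # chain-rule gradient estimate
        norm = np.linalg.norm(v)
        if norm > 1e-12:
            x = x - (eta / np.sqrt(t + 1)) * v / norm   # normalized step
    return x
```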

Tilt Stability on Riemannian Manifolds with Application to Convergence Analysis of Generalized Riemannian Newton Method

We generalize tilt stability, a fundamental concept in the perturbation analysis of optimization problems in Euclidean spaces, to the setting of Riemannian manifolds. We prove the equivalence of the following conditions: Riemannian tilt stability, Riemannian variational strong convexity, Riemannian uniform quadratic growth, local strong monotonicity of the Riemannian subdifferential, strong metric regularity of the Riemannian subdifferential, and positive …

An efficient penalty decomposition algorithm for minimization over sparse symmetric sets

This paper proposes an improved quasi-Newton penalty decomposition algorithm for the minimization of continuously differentiable functions, possibly nonconvex, over sparse symmetric sets. The method solves a sequence of penalty subproblems approximately via a two-block decomposition scheme: the first subproblem admits a closed-form solution without sparsity constraints, while the second subproblem is handled through an efficient …
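
A toy version of the two-block scheme for $\min f(x)$ s.t. $\|x\|_0 \le s$: the $x$-block approximately minimizes the penalized objective (here by plain gradient steps with a known gradient Lipschitz constant `Lf`, where the paper uses a quasi-Newton scheme or closed forms), and the $y$-block is the closed-form projection onto the sparse set, i.e. hard thresholding. Parameter names and schedules are illustrative.

```python
import numpy as np

def hard_threshold(z, s):
    """Projection onto {y : ||y||_0 <= s}: keep the s largest-magnitude entries."""
    y = np.zeros_like(z)
    keep = np.argsort(np.abs(z))[-s:]
    y[keep] = z[keep]
    return y

def penalty_decomposition(grad_f, Lf, x0, s, rho=1.0, sigma=1.5,
                          outer=15, inner=50):
    x = np.asarray(x0, dtype=float)
    y = hard_threshold(x, s)
    for _ in range(outer):
        lr = 1.0 / (Lf + rho)                  # valid stepsize for the penalized block
        for _ in range(inner):
            # x-block: gradient steps on f(x) + (rho/2) ||x - y||^2
            x = x - lr * (grad_f(x) + rho * (x - y))
        y = hard_threshold(x, s)               # y-block: closed-form projection
        rho *= sigma                           # tighten the penalty
    return hard_threshold(x, s)
```

On the separable test problem $f(x) = \tfrac12\|x - b\|^2$ with $b = (3, 1, 0.5)$ and $s = 1$, the scheme recovers the sparse minimizer $(3, 0, 0)$.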