Non-Convex Self-Concordant Functions: Practical Algorithms and Complexity Analysis

We extend the standard notion of self-concordance to non-convex optimization and develop a family of second-order algorithms with global convergence guarantees. In particular, two function classes – weakly self-concordant functions and F-based self-concordant functions – generalize the self-concordant framework beyond convexity, without assuming the Lipschitz continuity of the gradient or Hessian. For these function classes, …
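
For orientation, the classical (convex) definition that these classes relax is the Nesterov-Nemirovski condition: a three-times differentiable convex function \(f\) is self-concordant if

\[
\left| D^3 f(x)[h,h,h] \right| \;\le\; 2 \left( D^2 f(x)[h,h] \right)^{3/2}
\qquad \text{for all } x \in \operatorname{dom} f,\ h \in \mathbb{R}^n.
\]

How exactly the weakly and F-based variants weaken this inequality is not visible from the excerpt above.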

Iteration complexity of the Difference-of-Convex Algorithm for unconstrained optimization: a simple proof

We propose a simple proof of the worst-case iteration complexity of the Difference of Convex functions Algorithm (DCA) for unconstrained minimization, showing that the norm of the objective function’s gradient at the iterates converges to zero at a global rate of $o(1/k)$. A small example is also provided indicating that the rate cannot …
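
For readers new to DCA: given a decomposition \(f = g - h\) with \(g, h\) convex, each iteration linearizes \(h\) at the current point and solves a convex subproblem. Below is a minimal one-dimensional sketch; the toy decomposition and its closed-form subproblem are illustrative choices of mine, not taken from the paper.

    import numpy as np

    def dca(x0, grad_h, argmin_subproblem, tol=1e-10, max_iter=1000):
        """DCA for f = g - h: x_{k+1} = argmin_x g(x) - <grad_h(x_k), x>."""
        x = x0
        for k in range(max_iter):
            v = grad_h(x)                    # linearize the concave part -h at x_k
            x_new = argmin_subproblem(v)     # convex subproblem in x
            if abs(x_new - x) <= tol:
                return x_new, k
            x = x_new
        return x, max_iter

    # Toy DC function f(x) = x**4 / 4 - x**2, split as g(x) = x**4 / 4, h(x) = x**2.
    grad_h = lambda x: 2.0 * x
    # The subproblem min_x x**4/4 - v*x is solved by x = sign(v) * |v|**(1/3).
    argmin_subproblem = lambda v: np.sign(v) * abs(v) ** (1.0 / 3.0)

    x_star, iters = dca(0.5, grad_h, argmin_subproblem)
    print(x_star, iters)   # approaches the stationary point x = sqrt(2)

The stationarity measure tracked by the complexity result is the gradient, here \(f'(x_k) = x_k^3 - 2x_k\), which vanishes in the limit.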

Tilt Stability on Riemannian Manifolds with Application to Convergence Analysis of Generalized Riemannian Newton Method

We generalize tilt stability, a fundamental concept in the perturbation analysis of optimization problems in Euclidean spaces, to the setting of Riemannian manifolds. We prove the equivalence of the following conditions: Riemannian tilt stability, Riemannian variational strong convexity, Riemannian uniform quadratic growth, local strong monotonicity of the Riemannian subdifferential, strong metric regularity of the Riemannian subdifferential, and positive …
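
For context, the Euclidean notion being generalized (due to Poliquin and Rockafellar) reads: \(\bar{x}\) is a tilt-stable local minimizer of \(f\) if for some \(\gamma > 0\) the mapping

\[
M_\gamma : v \;\mapsto\; \operatorname*{argmin}_{\|x - \bar{x}\| \le \gamma} \big\{ f(x) - \langle v, x \rangle \big\}
\]

is single-valued and Lipschitz continuous on a neighborhood of \(v = 0\) with \(M_\gamma(0) = \bar{x}\). How the tilt term \(\langle v, x \rangle\) is transported to a manifold is specified in the paper, not in the excerpt.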

An efficient penalty decomposition algorithm for minimization over sparse symmetric sets

This paper proposes an improved quasi-Newton penalty decomposition algorithm for the minimization of continuously differentiable functions, possibly nonconvex, over sparse symmetric sets. The method solves a sequence of penalty subproblems approximately via a two-block decomposition scheme: the first subproblem admits a closed-form solution without sparsity constraints, while the second subproblem is handled through an efficient …
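
As a rough illustration of the two-block structure described above, here is a generic penalty decomposition sketch for a sparsity-constrained least-squares stand-in; the quadratic objective, the plain s-sparse constraint (rather than a general sparse symmetric set), and the penalty schedule are all simplifying assumptions of mine, and the paper's quasi-Newton refinements are not reproduced.

    import numpy as np

    def hard_threshold(x, s):
        """Projection onto {y : ||y||_0 <= s}: keep the s largest-magnitude entries."""
        y = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-s:]
        y[idx] = x[idx]
        return y

    def penalty_decomposition(A, b, s, rho=1.0, outer=30, growth=2.0):
        """Two-block PD scheme for min 0.5||Ax - b||^2 s.t. ||x||_0 <= s.
        Block 1 (x, no sparsity constraint) is closed form; block 2 (y) is a projection."""
        n = A.shape[1]
        x = np.zeros(n)
        y = hard_threshold(x, s)
        AtA, Atb = A.T @ A, A.T @ b
        for _ in range(outer):
            # block 1: min_x 0.5||Ax - b||^2 + rho/2 ||x - y||^2  (closed form)
            x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)
            # block 2: min_y rho/2 ||x - y||^2 s.t. ||y||_0 <= s  (projection)
            y = hard_threshold(x, s)
            rho *= growth   # tighten the penalty so x and y agree in the limit
        return y

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    x_hat = penalty_decomposition(A, b, s=3)
    print(np.count_nonzero(x_hat))   # at most 3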

A Majorization-Minimization approach for multiclass classification in a big data scenario

This work presents a novel optimization approach for training linear classifiers in multiclass classification tasks, focusing on a regularized and smooth Weston-Watkins support vector machine (SVM) model. We propose a Majorization-Minimization (MM) algorithm to solve the resulting Lipschitz-differentiable optimization problem. To enhance the scalability of the algorithm on large datasets, we introduce an incremental …
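
To make the MM mechanism concrete: for an L-smooth objective, one majorizes at the current iterate with a quadratic surrogate and minimizes the surrogate in closed form. The sketch below uses a ridge-regularized least-squares stand-in rather than the Weston-Watkins SVM loss, and omits the incremental machinery; both substitutions are mine.

    import numpy as np

    def mm_minimize(grad_f, L, w0, tol=1e-8, max_iter=1000):
        """Majorization-Minimization with the quadratic majorizer of an L-smooth f:
        q_k(w) = f(w_k) + <g_k, w - w_k> + (L/2) ||w - w_k||^2  >=  f(w).
        The surrogate's exact minimizer is w_{k+1} = w_k - g_k / L."""
        w = w0.copy()
        for _ in range(max_iter):
            g = grad_f(w)
            if np.linalg.norm(g) <= tol:
                break
            w = w - g / L           # minimize the majorizer in closed form
        return w

    # Stand-in problem: ridge regression (smooth, with Lipschitz gradient).
    rng = np.random.default_rng(1)
    X, y = rng.standard_normal((100, 5)), rng.standard_normal(100)
    lam = 0.1
    grad_f = lambda w: X.T @ (X @ w - y) / len(y) + lam * w
    L = np.linalg.norm(X.T @ X, 2) / len(y) + lam
    w_star = mm_minimize(grad_f, L, np.zeros(5))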

The Maximum Clique Problem under Adversarial Uncertainty: a min-max approach

We analyze the problem of identifying large cliques in graphs affected by adversarial uncertainty. More specifically, we consider a new formulation, namely the adversarial maximum clique problem, which extends the classical maximum clique problem to graphs whose edges are strategically perturbed by an adversary. The proposed mathematical model is thus formulated as a two-player zero-sum …
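
While the exact model is cut off above, robust clique formulations of this type are often written, via the Motzkin-Straus quadratic characterization of the clique number, as a min-max program of the form

\[
\max_{x \in \Delta_n} \; \min_{\tilde{A} \in \mathcal{U}} \; x^{\top} \tilde{A}\, x,
\]

where \(\Delta_n\) is the standard simplex and \(\mathcal{U}\) is the adversary's uncertainty set of perturbed adjacency matrices; whether the paper adopts this particular continuous formulation cannot be determined from the excerpt.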

A speed up strategy for gradient methods

In this paper, we propose a new acceleration strategy for gradient-based methods applied to strictly convex Quadratic Programming (QP) problems. The strategy consists in performing, at selected iterations, minimization steps along alternative descent directions or even within low-dimensional affine subspaces. In particular, considering the contribution of the linear and quadratic part of the objective function …
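
A minimal sketch of the idea, under assumptions of mine: exact (Cauchy) steps for the strictly convex quadratic \(f(x) = \tfrac{1}{2} x^{\top} A x - b^{\top} x\), with an occasional exact minimization over the low-dimensional affine subspace spanned by a few recent gradients. The periodic trigger and the choice of spanning directions are illustrative, not the paper's specific strategy.

    import numpy as np

    def gradient_with_subspace_steps(A, b, x0, m=5, max_iter=500, tol=1e-10):
        """Steepest descent on f(x) = 0.5 x^T A x - b^T x (A symmetric positive
        definite), with an extra exact subspace minimization every m iterations."""
        x = x0.copy()
        recent = []
        for k in range(max_iter):
            g = A @ x - b                        # gradient of f
            if np.linalg.norm(g) <= tol:
                break
            alpha = (g @ g) / (g @ (A @ g))      # exact step along -g
            x = x - alpha * g
            recent.append(g)
            if (k + 1) % m == 0:
                V = np.linalg.qr(np.column_stack(recent[-3:]))[0]
                # exact minimization of f over x + span(V): (V^T A V) t = -V^T grad
                t = np.linalg.solve(V.T @ A @ V, -V.T @ (A @ x - b))
                x = x + V @ t
            recent = recent[-3:]                 # keep memory bounded
        return x

    rng = np.random.default_rng(2)
    M = rng.standard_normal((20, 20))
    A = M @ M.T + 20.0 * np.eye(20)              # well-conditioned SPD matrix
    b = rng.standard_normal(20)
    x = gradient_with_subspace_steps(A, b, np.zeros(20))
    print(np.linalg.norm(A @ x - b))             # near machine precision

Both step types decrease \(f\) monotonically: the Cauchy step is the exact minimizer along \(-g\), and the subspace step exactly minimizes \(f\) over an affine set containing the current iterate.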

An Inexact Modified Quasi-Newton Method for Nonsmooth Regularized Optimization

We introduce iR2N, a modified proximal quasi-Newton method for minimizing the sum of a \(C^1\) function \(f\) and a lower semicontinuous, prox-bounded function \(h\), which permits inexact evaluations of \(f\), \(\nabla f\), and the relevant proximal operators. Both \(f\) and \(h\) may be nonconvex. In applications where the proximal operator of \(h\) is not …
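
To fix ideas about the underlying iteration (without iR2N's inexactness machinery or regularization updates, which the excerpt only alludes to), here is a minimal proximal step with a scalar Barzilai-Borwein metric and \(h = \lambda \|\cdot\|_1\), whose proximal operator is the soft threshold; the monotonicity safeguard and the lasso stand-in problem are choices of mine.

    import numpy as np

    def soft_threshold(z, tau):
        """Exact proximal operator of tau * ||.||_1 (iR2N allows this to be inexact)."""
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def prox_bb(f_val, grad_f, x0, lam, L, max_iter=300):
        """Proximal iteration x+ = prox_{h/d}(x - grad f(x)/d) with a scalar
        Barzilai-Borwein curvature estimate d, safeguarded by the safe 1/L step."""
        F = lambda x: f_val(x) + lam * np.sum(np.abs(x))
        x, g, d = x0.copy(), grad_f(x0), L
        for _ in range(max_iter):
            x_try = soft_threshold(x - g / d, lam / d)
            if F(x_try) > F(x):                  # reject: fall back to the 1/L step
                d = L
                x_try = soft_threshold(x - g / d, lam / d)
            g_try = grad_f(x_try)
            s, yv = x_try - x, g_try - g
            if s @ s > 1e-16:
                d = min(max((s @ yv) / (s @ s), 1e-8), L)   # BB estimate, clamped
            x, g = x_try, g_try
        return x

    # Stand-in: lasso-type problem f(x) = 0.5 ||Ax - b||^2, h = lam ||x||_1.
    rng = np.random.default_rng(3)
    A, b = rng.standard_normal((40, 15)), rng.standard_normal(40)
    f_val = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
    grad_f = lambda x: A.T @ (A @ x - b)
    x_hat = prox_bb(f_val, grad_f, np.zeros(15), lam=1.0, L=np.linalg.norm(A.T @ A, 2))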

The Convexity Zoo: A Taxonomy of Function Classes in Optimization

The tractability of optimization problems depends critically on structural properties of the objective function. Convexity guarantees global optimality of local solutions and enables polynomial-time algorithms under mild assumptions, but many problems arising in modern applications—particularly in machine learning—are inherently nonconvex. Remarkably, a large class of such problems remains amenable to efficient optimization due to additional …
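
A representative entry in any such taxonomy (my example; the excerpt does not say which classes the paper catalogs beyond convexity) is the class of Polyak-Łojasiewicz (PL) functions, which satisfy

\[
\tfrac{1}{2}\, \|\nabla f(x)\|^2 \;\ge\; \mu \,\big( f(x) - f^{\star} \big) \qquad \text{for all } x,
\]

for some \(\mu > 0\): such functions can be nonconvex, yet gradient descent with step \(1/L\) still converges linearly to the optimal value \(f^{\star}\).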

New Location Science Models with Applications to UAV-Based Disaster Relief

Natural and human-made disasters can cause severe devastation and claim thousands of lives worldwide. Therefore, developing efficient methods for disaster response and management is a critical task for relief teams. One of the most essential components of effective response is the rapid collection of information about affected areas, damages, and victims. More data translates into …