Convex analysis for composite functions without K-convexity

Composite functions have been studied for over 40 years and appear in a wide range of optimization problems. Convex analysis of these functions focuses on (i) conditions for convexity of the function based on properties of its components, (ii) formulas for the convex conjugate of the function based on those of its components, and (iii) … Read more
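For context, the classical sufficient condition that this line of work dispenses with: if $K$ is a convex cone, $g$ is convex and $K$-nondecreasing, and $F$ is $K$-convex, then $g \circ F$ is convex, where $K$-convexity of $F$ means

\[
F\bigl(\lambda x + (1-\lambda)y\bigr) \preceq_K \lambda F(x) + (1-\lambda) F(y) \qquad \text{for all } x, y \text{ and } \lambda \in [0,1],
\]

with $a \preceq_K b$ meaning $b - a \in K$. The paper's results concern the setting where this $K$-convexity assumption on $F$ is dropped.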

Convex duality contracts for production-grade mathematical optimization

Deploying mathematical optimization in autonomous production systems requires precise contracts for the objects returned by an optimization solver. Unfortunately, conventions on dual solutions and infeasibility certificates (rays) vary widely across solvers and problem classes. This paper presents the theoretical framework used by MathOpt (a domain-specific language developed and used at Google) to unify these notions. … Read more
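As a minimal illustration of the kind of contract at stake (not MathOpt's actual API; all names below are hypothetical), the following sketch checks a Farkas-type infeasibility certificate for the primal system $\{x : Ax \le b\}$: a ray $y \ge 0$ with $A^\top y = 0$ and $b^\top y < 0$ proves the system infeasible.

```python
import numpy as np

def is_infeasibility_certificate(A, b, y, tol=1e-9):
    """Return True if y is a Farkas ray proving {x : A x <= b} empty,
    i.e. y >= 0, A^T y = 0 and b^T y < 0.
    Hypothetical helper, not any solver's real interface."""
    y = np.asarray(y, dtype=float)
    nonneg = bool(np.all(y >= -tol))                 # y >= 0
    in_kernel = np.allclose(A.T @ y, 0.0, atol=tol)  # A^T y = 0
    separating = float(b @ y) < -tol                 # b^T y < 0
    return nonneg and in_kernel and separating

# Infeasible system: x <= 0 together with -x <= -1 (i.e. x >= 1).
A = np.array([[1.0], [-1.0]])
b = np.array([0.0, -1.0])
print(is_infeasibility_certificate(A, b, y=[1.0, 1.0]))  # True
```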

Revisiting Superlinear Convergence of Proximal Newton-Like Methods to Degenerate Solutions

We describe inexact proximal Newton-like methods for solving degenerate regularized optimization problems and for the broader problem of finding a zero of a generalized equation that is the sum of a continuous map and a maximal monotone operator. Superlinear convergence for both the distance to the solution set and a certain measure of first-order optimality … Read more
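For reference, the generalized equation in question and the standard Josephy-Newton-type linearization underlying such methods (the paper's inexact variant allows the subproblem to be solved approximately):

\[
0 \in f(x) + T(x), \qquad 0 \in f(x_k) + \nabla f(x_k)(x_{k+1} - x_k) + T(x_{k+1}),
\]

where $f$ is the continuous map (assumed differentiable for the linearization) and $T$ is the maximal monotone operator; taking $f = \nabla \varphi$ and $T = \partial \psi$ recovers proximal Newton methods for the regularized problem $\min_x \varphi(x) + \psi(x)$.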

Asymptotically tight Lagrangian dual of smooth nonconvex problems via improved error bound of Shapley-Folkman Lemma

In convex geometry, the Shapley–Folkman Lemma asserts that the nonconvexity of a Minkowski sum of $n$-dimensional bounded nonconvex sets does not accumulate once the number of summands exceeds the dimension $n$, and thus the sum becomes approximately convex. Originally published by Starr in the context of quasi-equilibrium in nonconvex market models in economics, the lemma … Read more
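In its standard form the lemma states: for sets $S_1, \dots, S_m \subseteq \mathbb{R}^n$,

\[
x \in \operatorname{conv}\Bigl(\sum_{i=1}^{m} S_i\Bigr) \;\Longrightarrow\; x = \sum_{i=1}^{m} x_i, \quad x_i \in \operatorname{conv}(S_i) \text{ for all } i, \ \ x_i \in S_i \text{ for all but at most } n \text{ indices},
\]

so the gap between the Minkowski sum and its convex hull is controlled by the dimension $n$ rather than by the number of summands $m$.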

Non-Convex Self-Concordant Functions: Practical Algorithms and Complexity Analysis

We extend the standard notion of self-concordance to non-convex optimization and develop a family of second-order algorithms with global convergence guarantees. In particular, two function classes – weakly self-concordant functions and F-based self-concordant functions – generalize the self-concordant framework beyond convexity, without assuming the Lipschitz continuity of the gradient or Hessian. For these function classes, … Read more
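For orientation, the classical definition being extended: a convex, three-times differentiable function $f$ is self-concordant if along every line, with $\varphi(t) = f(x + t d)$,

\[
|\varphi'''(t)| \le 2\,\varphi''(t)^{3/2}.
\]

The right-hand side presupposes $\varphi'' \ge 0$, which is precisely what a non-convex generalization must relax; the exact definitions of the two new classes are given in the full text.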

Iteration complexity of the Difference-of-Convex Algorithm for unconstrained optimization: a simple proof

We propose a simple proof of the worst-case iteration complexity of the Difference of Convex functions Algorithm (DCA) for unconstrained minimization, showing that, globally, the norm of the objective function’s gradient at the iterates converges to zero like $o(1/k)$. A small example is also provided indicating that the rate cannot … Read more
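For readers unfamiliar with DCA: given a decomposition $f = g - h$ with $g, h$ convex, each iteration linearizes the concave part and sets $x_{k+1} \in \arg\min_x\, g(x) - \langle \nabla h(x_k), x \rangle$. A minimal sketch on a toy smooth DC function (not the paper's example):

```python
import numpy as np

# Toy DC decomposition f = g - h with
#   g(x) = 2 ||x||^2  and  h(x) = ||x||^2 + <c, x>,
# so f(x) = ||x||^2 - <c, x>, minimized at x* = c / 2.
c = np.array([2.0, -1.0])

def grad_h(x):
    return 2.0 * x + c

def argmin_linearized(y):
    # argmin_x 2||x||^2 - <y, x>  =>  4x = y  =>  x = y / 4
    return y / 4.0

x = np.zeros(2)
for k in range(50):
    x = argmin_linearized(grad_h(x))  # one DCA step

print(x, c / 2.0)  # the iterates converge to c / 2
```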

Tilt Stability on Riemannian Manifolds with Application to Convergence Analysis of Generalized Riemannian Newton Method

We generalize tilt stability, a fundamental concept in the perturbation analysis of optimization problems in Euclidean spaces, to the setting of Riemannian manifolds. We prove the equivalence of the following conditions: Riemannian tilt stability, Riemannian variational strong convexity, Riemannian uniform quadratic growth, local strong monotonicity of the Riemannian subdifferential, strong metric regularity of the Riemannian subdifferential, and positive … Read more
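For comparison, the Euclidean definition due to Poliquin and Rockafellar: $\bar{x}$ is a tilt-stable local minimizer of $f$ if, for some $\delta > 0$, the tilted solution map

\[
M_\delta(v) = \operatorname*{arg\,min}_{\|x - \bar{x}\| \le \delta} \bigl\{ f(x) - \langle v, x \rangle \bigr\}
\]

is single-valued and Lipschitz continuous on a neighborhood of $v = 0$ with $M_\delta(0) = \bar{x}$; the paper establishes the Riemannian analogues of the equivalences listed above.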

An efficient penalty decomposition algorithm for minimization over sparse symmetric sets

This paper proposes an improved quasi-Newton penalty decomposition algorithm for the minimization of continuously differentiable, possibly nonconvex, functions over sparse symmetric sets. The method solves a sequence of penalty subproblems approximately via a two-block decomposition scheme: the first subproblem, free of sparsity constraints, admits a closed-form solution, while the second subproblem is handled through an efficient … Read more
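A stripped-down version of the penalty decomposition template (a plain gradient step for the smooth block instead of the paper's quasi-Newton update, a fixed penalty parameter instead of an increasing one, and hard thresholding as the projection onto $\{y : \|y\|_0 \le s\}$):

```python
import numpy as np

def hard_threshold(y, s):
    """Projection onto the sparse set {y : ||y||_0 <= s}."""
    z = np.zeros_like(y)
    keep = np.argsort(np.abs(y))[-s:]  # indices of the s largest entries
    z[keep] = y[keep]
    return z

def penalty_decomposition(grad_f, x0, s, rho=1.0, lr=0.02, steps=500):
    """min f(x) s.t. ||x||_0 <= s, via min_{x,y} f(x) + (rho/2)||x - y||^2."""
    x = x0.copy()
    y = hard_threshold(x, s)
    for _ in range(steps):
        # x-block: gradient step on the penalized objective (no sparsity constraint)
        x = x - lr * (grad_f(x) + rho * (x - y))
        # y-block: exact closed-form projection onto the sparse set
        y = hard_threshold(x, s)
    return y

# Example: f(x) = 0.5 ||A x - b||^2 with a 2-sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6))
x_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0, 0.0])
b = A @ x_true
x_hat = penalty_decomposition(lambda x: A.T @ (A @ x - b), np.zeros(6), s=2)
print(np.round(x_hat, 2))  # approximately recovers x_true
```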

A Majorization-Minimization approach for multiclass classification in a big data scenario

This work presents a novel optimization approach for training linear classifiers in multiclass classification tasks, focusing on a regularized, smooth Weston-Watkins support vector machine (SVM) model. We propose a Majorization-Minimization (MM) algorithm to solve the resulting Lipschitz-differentiable optimization problem. To enhance the scalability of the algorithm on large datasets, we introduce an incremental … Read more
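The MM principle behind such methods, in its simplest quadratic form: at $x_k$, replace the $L$-Lipschitz-differentiable objective $f$ by the majorizer $q_k(x) = f(x_k) + \langle \nabla f(x_k), x - x_k \rangle + \frac{L}{2}\|x - x_k\|^2 \ge f(x)$ and minimize it exactly, which guarantees monotone descent. A toy sketch with logistic loss standing in for the Weston-Watkins SVM objective:

```python
import numpy as np

def mm_minimize(f_grad, L, x0, iters=100):
    """MM with the standard quadratic majorizer: minimizing q_k
    gives x_{k+1} = x_k - grad f(x_k) / L and f(x_{k+1}) <= f(x_k)."""
    x = x0.copy()
    for _ in range(iters):
        x = x - f_grad(x) / L
    return x

# Toy binary logistic regression (stand-in for the SVM training problem).
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
ylab = np.sign(X @ np.array([1.0, -2.0, 0.5]))

def grad(w):
    margins = ylab * (X @ w)
    return -(X.T @ (ylab / (1.0 + np.exp(margins)))) / len(ylab)

L = np.linalg.norm(X, 2) ** 2 / (4 * len(ylab))  # Lipschitz bound for logistic loss
w = mm_minimize(grad, L, np.zeros(3))
print(w)
```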

The Maximum Clique Problem under Adversarial Uncertainty: a min-max approach

We analyze the problem of identifying large cliques in graphs affected by adversarial uncertainty. More specifically, we consider a new formulation, the adversarial maximum clique problem, which extends the classical maximum clique problem to graphs whose edges are strategically perturbed by an adversary. The proposed mathematical model is thus formulated as a two-player zero-sum … Read more
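For orientation only (the paper's precise model may differ): continuous formulations of the clique problem commonly build on the Motzkin-Straus theorem, $\max_{x \in \Delta_n} x^\top A_G x = 1 - 1/\omega(G)$, where $\Delta_n$ is the unit simplex, $A_G$ the adjacency matrix, and $\omega(G)$ the clique number; a two-player zero-sum version in which the adversary picks the adjacency matrix from an uncertainty set $\mathcal{U}$ would then read

\[
\max_{x \in \Delta_n} \; \min_{A \in \mathcal{U}} \; x^\top A\, x .
\]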