A Foundational Perspective for Partitional Clustering on Networks

This study presents a theoretical analysis of partitional clustering on networks. Different versions of the problem are studied, considering different assignment schemes (hard and soft) and different objective functions. Cluster centers are not restricted to the set of nodes; it is assumed that centers can also lie on the edges of the network. Four …
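
For orientation, a generic hard-assignment objective of this type (a sketch only; the paper studies several variants, and the notation below is not taken from it) places \(p\) centers anywhere on the network and assigns each node to its closest center:

\[
\min_{C \subseteq \mathcal{P},\, |C| = p} \; \sum_{v \in V} w_v \, \min_{c \in C} d(v, c),
\]

where \(\mathcal{P}\) contains the nodes and all points on the edges, \(d\) is the network (shortest-path) distance, and \(w_v\) are node weights. A soft variant replaces the inner minimum with fractional assignment weights \(x_{vc} \in [0,1]\) satisfying \(\sum_{c} x_{vc} = 1\) for every node \(v\).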

Mathematical programs with complementarity constraints and application to hyperparameter tuning for nonlinear support vector machines

We consider the Mathematical Program with Complementarity Constraints (MPCC). One of the main challenges in solving this problem is the systematic failure of standard Constraint Qualifications (CQs). By carefully accounting for the combinatorial nature of the complementarity constraints, tractable versions of the Mangasarian-Fromovitz Constraint Qualification (MFCQ) have been designed and widely studied in the literature. …
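
For readers less familiar with the problem class, the standard MPCC takes the form (textbook notation, not tied to the SVM application here):

\[
\min_{x \in \mathbb{R}^n} \; f(x) \quad \text{s.t.} \quad g(x) \le 0, \; h(x) = 0, \quad 0 \le G(x) \perp H(x) \ge 0,
\]

where \(0 \le G(x) \perp H(x) \ge 0\) abbreviates \(G(x) \ge 0\), \(H(x) \ge 0\), and \(G(x)^\top H(x) = 0\). Written as an ordinary nonlinear program, MFCQ fails at every feasible point, which is what motivates the tailored, combinatorially aware MPCC constraint qualifications mentioned above.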

Tight Semidefinite Relaxations for Verifying Robustness of Neural Networks

For verifying the safety of neural networks (NNs), Fazlyab et al. (2019) introduced a semidefinite programming (SDP) approach called DeepSDP. This formulation can be viewed as the dual of the SDP relaxation for a problem formulated as a quadratically constrained quadratic program (QCQP). While SDP relaxations of QCQPs generally provide approximate solutions with some gaps, …
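
As background on the relaxation step (the generic Shor relaxation, not the DeepSDP-specific construction), a QCQP

\[
\min_{x} \; x^\top Q_0 x + 2 q_0^\top x \quad \text{s.t.} \quad x^\top Q_i x + 2 q_i^\top x + c_i \le 0, \; i = 1,\dots,m,
\]

is lifted by introducing \(X = x x^\top\) and relaxing this nonconvex equality to the convex condition

\[
\begin{pmatrix} 1 & x^\top \\ x & X \end{pmatrix} \succeq 0,
\]

which turns every quadratic function into a linear function of \((x, X)\) and yields an SDP whose optimal value lower-bounds that of the original QCQP.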

An extrapolated and provably convergent algorithm for nonlinear matrix decomposition with the ReLU function

Nonlinear matrix decomposition (NMD) with the ReLU function, denoted ReLU-NMD, is the following problem: given a sparse, nonnegative matrix \(X\) and a factorization rank \(r\), identify a rank-\(r\) matrix \(\Theta\) such that \(X\approx \max(0,\Theta)\). This decomposition finds application in data compression, matrix completion with entries missing not at random, and manifold learning. The standard ReLU-NMD …
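
A minimal way to see the model in code (an illustrative alternating heuristic for \(\min_{\mathrm{rank}(\Theta)\le r} \|X - \max(0,\Theta)\|_F^2\); it is not the extrapolated algorithm proposed in the paper) is:

```python
import numpy as np

def relu_nmd_alternating(X, r, n_iter=200, seed=0):
    """Illustrative alternating heuristic for min ||X - max(0, Theta)||_F^2
    over rank-r matrices Theta (not the paper's extrapolated algorithm)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    pos = X > 0                                   # support of the nonnegative data
    Theta = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
    for _ in range(n_iter):
        # Latent target: equal to X on its positive entries, nonpositive elsewhere
        Z = np.where(pos, X, np.minimum(Theta, 0.0))
        # Rank-r approximation of Z via truncated SVD
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        Theta = (U[:, :r] * s[:r]) @ Vt[:r, :]
    return Theta

# Example: relative reconstruction error on a random sparse nonnegative matrix
X = np.maximum(0, np.random.default_rng(1).standard_normal((30, 20)))
Theta = relu_nmd_alternating(X, r=5)
rel_err = np.linalg.norm(X - np.maximum(0, Theta)) / np.linalg.norm(X)
```

The idea behind the sketch is that \(\Theta\) only needs to reproduce \(X\) on its positive support while being nonpositive on the zero entries, which is what the latent matrix \(Z\) enforces before the rank-\(r\) projection.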

A Rank-One-Update Method for the Training of Support Vector Machines

This paper considers convex quadratic programs associated with the training of support vector machines (SVM). Exploiting the special structure of the SVM problem, a new type of active-set method with long cycles and stable rank-one updates is proposed and tested (CMU: cycling method with updates). The structure of the problem allows for a repeated simple …
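
For reference, the convex quadratic program in question is typically the dual SVM training problem (standard formulation, independent of the specific active-set scheme proposed here):

\[
\min_{\alpha \in \mathbb{R}^m} \; \tfrac{1}{2}\, \alpha^\top Q \alpha - e^\top \alpha
\quad \text{s.t.} \quad y^\top \alpha = 0, \quad 0 \le \alpha \le C e,
\]

with \(Q_{ij} = y_i y_j K(x_i, x_j)\) for a kernel \(K\), labels \(y_i \in \{-1,+1\}\), and penalty parameter \(C\). The box constraints and the single equality constraint are the kind of structure that active-set approaches with cheap rank-one updates can exploit.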

Spherical Support Vector Machine for Interval-Valued Data

In this work we propose a generalization of the Spherical Support Vector Machine method, in which the separator is a sphere, to interval-valued data. This type of data belongs to a more general class, known as Symbolic Data, in which features are described by sets, intervals, or histograms instead of classical single-valued arrays. This paradigm …
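
One common way to write a spherical separator for classical (point) data is the margin formulation below (a sketch only; the paper's interval-valued formulation is not reproduced here):

\[
\min_{c,\, R^2,\, \xi \ge 0} \; R^2 + C \sum_{i} \xi_i
\quad \text{s.t.} \quad y_i \left( R^2 - \|x_i - c\|^2 \right) \ge 1 - \xi_i, \quad i = 1,\dots,m,
\]

so that points with \(y_i = +1\) should lie inside the sphere of center \(c\) and radius \(R\), and points with \(y_i = -1\) outside it, up to slacks \(\xi_i\). For interval-valued data each \(x_i\) is a hyperrectangle rather than a point, so the constraints must be imposed on a suitable representation of that interval.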

prunAdag: an adaptive pruning-aware gradient method

A pruning-aware adaptive gradient method is proposed which classifies the variables into two sets before updating them using different strategies. This technique extends the “relevant/irrelevant” approach of Ding (2019) and Zimmer et al. (2022) and allows a posteriori sparsification of the solution of model parameter fitting problems. The new method is proved to be convergent …
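
As a toy illustration of the relevant/irrelevant split (hypothetical code; it does not reproduce the prunAdag update rule or its adaptive stepsizes), one can separate the parameters by magnitude and shrink only the irrelevant ones so that the final iterate is easy to sparsify:

```python
import numpy as np

def two_set_step(w, grad, lr=0.01, keep_frac=0.8, shrink=0.1):
    """Toy two-set update: 'relevant' (large-magnitude) parameters take a plain
    gradient step, 'irrelevant' ones are additionally shrunk toward zero.
    This is only an illustration, not the prunAdag update."""
    k = max(1, int(keep_frac * w.size))          # number of parameters treated as relevant
    thresh = np.partition(np.abs(w).ravel(), w.size - k)[w.size - k]
    relevant = np.abs(w) >= thresh
    w_new = w - lr * grad                        # gradient step on all parameters
    w_new[~relevant] *= (1.0 - shrink)           # extra shrinkage on the irrelevant set
    return w_new
```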

Multiple Kernel Learning-Aided Column-and-Constraint Generation Method

Two-stage robust optimization (two-stage RO), due to its ability to balance robustness and flexibility, has been widely used in various fields for decision-making under uncertainty. This paper proposes a multiple kernel learning (MKL)-aided column-and-constraint generation (CCG) method to address such problems in the context of data-driven decision optimization, and releases a corresponding registered Julia package, …
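
For context, the underlying problem class and the role of CCG can be summarized as follows (standard two-stage RO form; the MKL-based construction of the uncertainty set is specific to the paper and not shown):

\[
\min_{x \in \mathcal{X}} \; c^\top x + \max_{u \in \mathcal{U}} \; \min_{y \in \mathcal{Y}(x, u)} \; b^\top y,
\]

where \(x\) is the here-and-now decision, \(u\) the uncertain parameter ranging over the uncertainty set \(\mathcal{U}\), and \(y\) the recourse decision. CCG alternates between a master problem built on a growing finite set of scenarios \(u^1,\dots,u^k\) (adding recourse variables and constraints for each) and a subproblem that searches \(\mathcal{U}\) for a new worst-case scenario, until the resulting bounds match.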

Computing Weak Counterfactual Explanations for Linear Optimization: A New Class of Bilevel Models and a Tailored Penalty Alternating Direction Method

Explainable artificial intelligence is one of the most important trends in modern machine-learning research. The idea is to explain the outcome of a model by presenting a change in its input under which the outcome changes significantly. In this paper, we study this question for linear optimization problems as an automated …
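
One generic way such a counterfactual question can be cast for a linear program \(\min_z \{ c^\top z : A z \ge b \}\) is the bilevel sketch below (notation hypothetical; the paper's weak-counterfactual models and penalty alternating direction method are not reproduced here):

\[
\min_{\tilde{c},\, x} \; \|\tilde{c} - c\|
\quad \text{s.t.} \quad x \in \arg\min_{z} \left\{ \tilde{c}^\top z : A z \ge b \right\}, \quad x \in T,
\]

i.e., find the smallest change from the original cost vector \(c\) to a counterfactual \(\tilde{c}\) such that some optimal solution of the perturbed linear program lands in a target set \(T\) describing the desired change in the outcome.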