Multi-group Support Vector Machines with measurement costs: a biobjective approach

The Support Vector Machine has been shown to perform well in many practical classification settings. In this paper we propose, for multi-group classification, a biobjective optimization model in which we consider not only the generalization ability (modelled through margin maximization), but also costs associated with the features. This cost is not limited to an economical … Read more
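
As an illustration only (two-group case, in our own notation; the multi-group model proposed in the paper is more general), such a biobjective problem could take the form $$\min_{w,b,\xi,z}\ \Bigl(\tfrac12\|w\|^2+C\sum_i\xi_i,\ \ \sum_j c_j z_j\Bigr)\quad\text{s.t.}\quad y_i(w^\top x_i+b)\ge 1-\xi_i,\ \xi_i\ge 0,\ |w_j|\le M z_j,\ z_j\in\{0,1\},$$ where $c_j$ is the measurement cost of feature $j$, the binary variable $z_j$ indicates whether feature $j$ enters the classifier, and the margin-based and cost-based objectives are traded off along a Pareto frontier.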

Detecting relevant variables and interactions for classification in Support Vector Machines

The widely used Support Vector Machine (SVM) method has been shown to yield good results in Supervised Classification problems. The Binarized SVM (BSVM) is a variant which is able to automatically detect which variables are, by themselves, most relevant for the classifier. In this work, we extend the BSVM introduced by the authors to a method … Read more

Coordinate and Subspace Optimization Methods for Linear Least Squares with Non-Quadratic Regularization

This work addresses the problem of regularized linear least squares (RLS) with non-quadratic separable regularization. Although it arises frequently in applications, the RLS problem is often hard to solve using standard iterative methods. In a recent work [10], a new iterative method called Parallel Coordinate Descent (PCD) was devised. We provide herein a convergence … Read more
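
Stated generically (the precise model in [10] may differ in its details), the problem class is $$\min_{x\in\mathbb{R}^n}\ \tfrac12\|Ax-b\|_2^2+\sum_{j=1}^{n}\rho_j(x_j),$$ where each $\rho_j$ is a non-quadratic separable penalty (e.g., $\rho_j(t)=\lambda|t|^p$ with $p\neq 2$); roughly speaking, coordinate-descent schemes update one coordinate at a time, and PCD computes all coordinate-wise updates in parallel and combines them into a single search direction.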

New Korkin-Zolotarev Inequalities

Korkin and Zolotarev showed that if $$\sum_i A_i(x_i-\sum_{j>i} \alpha_{ij}x_j)^2$$ is the Lagrange expansion of a Korkin–Zolotarev reduced positive definite quadratic form, then $A_{i+1}\geq \frac{3}{4} A_i$ and $A_{i+2}\geq \frac{2}{3}A_i$. They showed that the implied bound $A_{5}\geq \frac{4}{9}A_1$ is not attained by any KZ-reduced form. We propose a method to optimize numerically over the set of Lagrange … Read more
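
For completeness, the implied bound follows by applying the second inequality twice: $$A_5\ \geq\ \tfrac{2}{3}A_3\ \geq\ \tfrac{2}{3}\cdot\tfrac{2}{3}A_1\ =\ \tfrac{4}{9}A_1.$$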

A Column Generation Approach for Support Vector Machines

The widely used Support Vector Machine (SVM) method has been shown to yield good results in Supervised Classification problems. Other methods, such as Classification Trees, have become more popular among practitioners than SVM thanks to their interpretability, which is an important issue in Data Mining. In this work, we propose an SVM-based method that automatically detects … Read more

A unified approach for inversion problems in intensity-modulated radiation therapy

We propose and study a unified model for handling dose constraints (physical dose, equivalent uniform dose (EUD), etc.) and radiation source constraints in a single mathematical framework based on the split feasibility problem. The model does not impose an exogenous objective (merit) function on the constraints. The optimization algorithm minimizes a weighted proximity function that … Read more
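
In the split feasibility setting one seeks an intensity vector $x$ satisfying the source constraints while the delivered dose $Ax$ satisfies the dose constraints; a weighted proximity function of the kind referred to here can be written (in one common form, which may differ in detail from the paper's) as $$p(x)\ =\ \tfrac12\sum_i w_i\,\|x-P_{C_i}(x)\|^2\ +\ \tfrac12\sum_j v_j\,\|Ax-P_{Q_j}(Ax)\|^2,$$ where $P_S$ denotes the projection onto the set $S$, the sets $C_i$ encode constraints on the intensities, the sets $Q_j$ encode dose constraints, and $w_i,v_j\ge 0$ are weights.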

On the Quality of a Semidefinite Programming Bound for Sparse Principal Component Analysis

We examine the problem of approximating a positive semidefinite matrix $\Sigma$ by a dyad $xx^T$, with a penalty on the cardinality of the vector $x$. This problem arises in sparse principal component analysis, where a decomposition of $\Sigma$ involving sparse factors is sought. We express this hard, combinatorial problem as a maximum eigenvalue problem, in … Read more
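
In one common penalized formulation (our notation; the paper's exact setup may differ), the problem is $$\phi(\rho)\ =\ \max_{\|x\|_2\le 1}\ x^{\top}\Sigma x\ -\ \rho\,\mathbf{card}(x),$$ where $\mathbf{card}(x)$ counts the nonzero entries of $x$ and $\rho>0$ trades variance explained against sparsity; the semidefinite relaxation yields an upper bound on $\phi(\rho)$.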

Survivable Energy Markets

In this paper we present a centralized model for jointly managing the day-ahead energy market and the reserve market, in order to price through the market, besides energy, the overall cost of reliability, and to ensure that the power grid survives the failure of any single component, thereby avoiding extended blackouts. … Read more

Spectral Bounds for Sparse PCA: Exact & Greedy Algorithms

Sparse PCA seeks approximate sparse “eigenvectors” whose projections capture the maximal variance of data. As a cardinality-constrained and non-convex optimization problem, it is NP-hard and yet it is encountered in a wide range of applied fields, from bio-informatics to finance. Recent progress has focused mainly on continuous approximation and convex relaxation of the hard cardinality … Read more
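
The cardinality-constrained problem underlying this line of work can be stated, as an illustration in standard notation, as $$\max_{x\in\mathbb{R}^n}\ x^{\top}\Sigma x\quad\text{s.t.}\quad\|x\|_2=1,\ \ \mathbf{card}(x)\le k,$$ where $\Sigma$ is the sample covariance matrix and $k$ bounds the number of nonzero loadings.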

Second-order convergence properties of trust-region methods using incomplete curvature information, with an application to multigrid optimization

Convergence properties of trust-region methods for unconstrained nonconvex optimization are considered in the case where information on the objective function’s local curvature is incomplete, in the sense that it may be restricted to a fixed set of “test directions” and may not be available at every iteration. It is shown that convergence to local “weak” … Read more
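
For context (standard trust-region notation, not specific to this paper), each iteration approximately minimizes the model $$\min_{\|s\|\le\Delta_k}\ m_k(s)\ =\ f(x_k)+g_k^{\top}s+\tfrac12\,s^{\top}H_k s,$$ where $g_k=\nabla f(x_k)$, $H_k$ approximates the Hessian, and $\Delta_k$ is the trust-region radius; in the setting described here, curvature information of the form $d^{\top}H_k d$ may be available only along a fixed set of test directions, and possibly not at every iteration.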