Accelerated Stochastic Peaceman-Rachford Method for Empirical Risk Minimization

This work is devoted to studying an Accelerated Stochastic Peaceman-Rachford Splitting Method (AS-PRSM) for solving a family of structural empirical risk minimization problems. The objective function to be optimized is the sum of a possibly nonsmooth convex function and a finite-sum of smooth convex component functions. The smooth subproblem in AS-PRSM is solved by a stochastic gradient method using variance reduction … Read more
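The outer Peaceman-Rachford splitting loop can be illustrated on a toy one-dimensional problem. This is a minimal sketch of the deterministic PRSM skeleton only; the paper's AS-PRSM replaces the exact smooth subproblem solve below with a variance-reduced stochastic gradient method, and all names here are illustrative, not the paper's.

```python
# Peaceman-Rachford splitting (PRSM) skeleton on a toy 1-D problem:
#   minimize f(x) + g(x),  f(x) = 0.5*(x - 3)^2 (smooth), g(x) = |x| (nonsmooth).
# PRSM is ADMM with a dual update after EACH of the two subproblems.
# In AS-PRSM the f-subproblem would be solved inexactly by a
# variance-reduced stochastic gradient method; here it is exact.

def soft_threshold(v, t):
    """Proximal operator of t*|.| (soft-thresholding)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def prsm(beta=1.0, iters=200):
    x = z = u = 0.0
    for _ in range(iters):
        # f-subproblem: argmin_x 0.5*(x-3)^2 + (beta/2)*(x - z + u)^2
        x = (3.0 + beta * (z - u)) / (1.0 + beta)
        u = u + x - z                 # first (intermediate) dual update
        # g-subproblem: prox of |.| with step 1/beta
        z = soft_threshold(x + u, 1.0 / beta)
        u = u + x - z                 # second dual update
    return z

print(prsm())   # ≈ 2.0, the minimizer of 0.5*(x-3)^2 + |x|
```

The two dual updates per iteration are what distinguish PRSM from ADMM, which performs only the second one.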

Exact Convergence Rates of Alternating Projections for Nontransversal Intersections

We study the exact convergence rate of the alternating projection method for the nontransversal intersection of a semialgebraic set and a linear subspace. If the linear subspace is a line, the exact rates are expressed by multiplicities of the defining polynomials of the semialgebraic set, or related power series. Our methods are also applied to … Read more
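A tangential (nontransversal) intersection and its sublinear rate can be seen in a toy planar example, which is not the paper's semialgebraic machinery but shows the phenomenon it quantifies: a disk and a line meeting tangentially at a single point, where the degree-2 tangency forces an O(1/sqrt(k)) rate.

```python
# Alternating projections between a disk and a tangent line -- a toy
# nontransversal intersection: the disk {x^2 + (y-1)^2 <= 1} meets the
# line {y = 0} only at the origin, tangentially. The one-step map on the
# x-coordinate is x -> x / sqrt(1 + x^2), so x_k = 1/sqrt(k + 1) exactly
# from x_0 = 1: sublinear decay governed by the degree-2 tangency.
import math

def project_line(p):
    return (p[0], 0.0)                       # nearest point on {y = 0}

def project_disk(p, center=(0.0, 1.0), r=1.0):
    dx, dy = p[0] - center[0], p[1] - center[1]
    d = math.hypot(dx, dy)
    if d <= r:
        return p                             # already inside the disk
    return (center[0] + r * dx / d, center[1] + r * dy / d)

p = (1.0, 0.0)
for k in range(1, 1001):
    p = project_line(project_disk(p))
# distance to the intersection point (0, 0) decays like 1/sqrt(k)
print(p[0], 1.0 / math.sqrt(1001))           # both ≈ 0.0316
```

With a transversal intersection (e.g. a line crossing the disk's interior) the same iteration would instead converge linearly.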

The Sharpe predictor for fairness in machine learning

In machine learning (ML) applications, unfair predictions may discriminate against a minority group. Most existing approaches for fair machine learning (FML) treat fairness as a constraint or a penalization term in the optimization of an ML model, which does not lead to the discovery of the complete landscape of the trade-offs among learning accuracy and … Read more

An abstract convergence framework with application to inertial inexact forward-backward methods

In this paper we introduce a novel abstract descent scheme suited for the minimization of proper and lower semicontinuous functions. The proposed abstract scheme generalizes a set of properties that are crucial for the convergence of several first-order methods designed for nonsmooth nonconvex optimization problems. Such properties guarantee the convergence of the full sequence of … Read more

Full-low evaluation methods for derivative-free optimization

We propose a new class of rigorous methods for derivative-free optimization with the aim of delivering efficient and robust numerical performance for functions of all types, from smooth to non-smooth, and under different noise regimes. To this end, we have developed Full-Low Evaluation methods, organized around two main types of iterations. The first iteration type … Read more

A Vectorization Scheme for Nonconvex Set Optimization Problems

In this paper, we study a solution approach for set optimization problems with respect to the lower set less relation. This approach can serve as a base for numerically solving set optimization problems by using established solvers from multiobjective optimization. Our strategy consists of deriving a parametric family of multiobjective optimization problems whose optimal solution … Read more

A Semismooth Newton-Type Method for the Nearest Doubly Stochastic Matrix Problem

We study a semismooth Newton-type method for the nearest doubly stochastic matrix problem where both differentiability and nonsingularity of the Jacobian can fail. The optimality conditions for this problem are formulated as a system of strongly semismooth functions. We show that the so-called local error bound condition does not hold for this system. Thus the … Read more
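The underlying problem is a matrix nearness problem: given A, find the closest X (in Frobenius norm) with unit row and column sums and nonnegative entries. As a simple baseline for the same problem (NOT the paper's semismooth Newton method), the sketch below uses Dykstra's alternating-projection algorithm between the affine set of row/column-sum constraints, whose projection has a closed form, and the nonnegative orthant.

```python
# Nearest doubly stochastic matrix:
#   min ||X - A||_F  s.t.  X e = e,  X^T e = e,  X >= 0.
# Baseline via Dykstra's algorithm (not the paper's semismooth Newton).

def proj_affine(x):
    """Closed-form projection onto {X : Xe = e, X^T e = e}."""
    n = len(x)
    r = [sum(row) - 1.0 for row in x]                          # row-sum residuals
    c = [sum(x[i][j] for i in range(n)) - 1.0 for j in range(n)]
    rho = sum(r)                                               # total residual
    return [[x[i][j] - r[i] / n - c[j] / n + rho / n ** 2
             for j in range(n)] for i in range(n)]

def proj_nonneg(x):
    return [[max(v, 0.0) for v in row] for row in x]

def nearest_doubly_stochastic(a, iters=500):
    n = len(a)
    x = [row[:] for row in a]
    p = [[0.0] * n for _ in range(n)]     # Dykstra correction terms
    q = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        y = proj_affine([[x[i][j] + p[i][j] for j in range(n)] for i in range(n)])
        p = [[x[i][j] + p[i][j] - y[i][j] for j in range(n)] for i in range(n)]
        x = proj_nonneg([[y[i][j] + q[i][j] for j in range(n)] for i in range(n)])
        q = [[y[i][j] + q[i][j] - x[i][j] for j in range(n)] for i in range(n)]
    return x

X = nearest_doubly_stochastic([[2.0, -1.0], [0.0, 1.0]])
print(X)   # ≈ the 2x2 identity, the nearest doubly stochastic matrix here
```

Dykstra's corrections (unlike plain alternating projections) make the limit the actual projection onto the intersection; the paper's Newton-type method targets the same solution with much faster local convergence.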

An optimization problem for dynamic OD trip matrix estimation on transit networks with different types of data collection units

Dynamic O-D trip matrices for public transportation systems provide a valuable source of information about the usage of the system, which may be used either by planners for a better design of transportation facilities or by administrations to characterize the efficiency of the transport system both in peak hours and … Read more

Regularized quasi-monotone method for stochastic optimization

We adapt the quasi-monotone method of Nesterov and Shikhman (2015) for composite convex minimization to the stochastic setting. For the proposed numerical scheme we derive the optimal convergence rate in terms of the last iterate, rather than on average as is standard for subgradient methods. The theoretical guarantee for individual convergence of the regularized quasi-monotone … Read more

Rank computation in Euclidean Jordan algebras

Euclidean Jordan algebras are the abstract foundation for symmetric cone optimization. Every element in a Euclidean Jordan algebra has a complete spectral decomposition analogous to the spectral decomposition of a real symmetric matrix into rank-one projections. The spectral decomposition in a Euclidean Jordan algebra stems from the likewise-analogous characteristic polynomial of its elements, whose degree is … Read more
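The matrix prototype of this spectral decomposition can be made concrete in the 2x2 symmetric case, where the characteristic polynomial is a quadratic whose roots are the eigenvalues. This is only the classical matrix instance of the Jordan-algebraic structure the paper treats in general; the function below is illustrative.

```python
# Spectral decomposition of a 2x2 real symmetric matrix [[a, b], [b, c]]:
# the eigenvalues are the roots of the characteristic polynomial
#   t^2 - (a + c) t + (a c - b^2),
# and the element is the sum of eigenvalues times rank-one projections.
import math

def spectral_2x2(a, b, c):
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    lams = [(a + c + disc) / 2, (a + c - disc) / 2]
    projs = []
    for lam in lams:
        # eigenvector: (b, lam - a) when b != 0, else a coordinate axis
        v = (b, lam - a) if b != 0 else (
            (1.0, 0.0) if abs(lam - a) < abs(lam - c) else (0.0, 1.0))
        n2 = v[0] ** 2 + v[1] ** 2
        projs.append([[v[0] * v[0] / n2, v[0] * v[1] / n2],
                      [v[1] * v[0] / n2, v[1] * v[1] / n2]])
    return lams, projs

lams, projs = spectral_2x2(2.0, 1.0, 2.0)
# reconstruct A = lam1 * P1 + lam2 * P2
A = [[lams[0] * projs[0][i][j] + lams[1] * projs[1][i][j] for j in range(2)]
     for i in range(2)]
print(lams, A)   # eigenvalues ≈ [3.0, 1.0]; A recovers [[2, 1], [1, 2]]
```

In Jordan-algebraic language, the two projections are a complete system of orthogonal idempotents, and the degree of the characteristic polynomial (here 2) is the rank the paper is concerned with computing.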