The Sharpe predictor for fairness in machine learning

In machine learning (ML) applications, unfair predictions may discriminate against a minority group. Most existing approaches for fair machine learning (FML) treat fairness as a constraint or a penalization term in the optimization of an ML model, which does not reveal the complete landscape of trade-offs between learning accuracy and fairness metrics, and does not integrate fairness in a meaningful way. Recently, we have introduced a new paradigm for FML based on Stochastic Multi-Objective Optimization (SMOO), where accuracy and fairness metrics stand as conflicting objectives to be optimized simultaneously. The entire range of trade-offs is defined as the Pareto front of the SMOO problem, which can then be efficiently computed using stochastic gradient-type algorithms. SMOO also allows defining and computing new meaningful predictors for FML, a novel one being the Sharpe predictor, introduced and explored in this paper, which gives the highest ratio of accuracy to unfairness. Inspired by the Sharpe ratio in finance, the Sharpe predictor for FML provides the highest prediction return (accuracy) per unit of prediction risk (unfairness).
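As a rough illustration of the idea (a minimal sketch under assumed conventions, not the formulation from the report): once a Pareto front of (unfairness, accuracy) trade-offs has been computed by an SMOO solver, a Sharpe-type predictor corresponds to the front point maximizing the ratio of prediction return (accuracy) to prediction risk (unfairness), by analogy with the Sharpe ratio in portfolio selection. The helper name `sharpe_point` and the sample trade-off values below are hypothetical.

```python
import numpy as np

def sharpe_point(pareto_front, eps=1e-12):
    """Return the Pareto point maximizing accuracy / unfairness.

    pareto_front: array-like of shape (n_points, 2) with columns
                  [unfairness, accuracy], both assumed nonnegative.
    """
    front = np.asarray(pareto_front, dtype=float)
    unfairness, accuracy = front[:, 0], front[:, 1]
    # eps guards against division by zero for a perfectly fair point
    ratios = accuracy / (unfairness + eps)
    best = int(np.argmax(ratios))
    return front[best], ratios[best]

# Hypothetical (unfairness, accuracy) trade-offs along a computed Pareto front.
front = [(0.02, 0.81), (0.05, 0.88), (0.10, 0.91), (0.20, 0.93)]
point, ratio = sharpe_point(front)
print(f"Sharpe-like trade-off point: {point}, accuracy-to-unfairness ratio = {ratio:.2f}")
```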

Citation

S. Liu and L. N. Vicente, The Sharpe predictor for fairness in machine learning, ISE Technical Report 21T-019, Lehigh University.
