
Fairness in Binary Classification


To detect and mitigate societal bias in binary classification, you can use the fairnessMetrics, fairnessWeights, and disparateImpactRemover functions in Statistics and Machine Learning Toolbox™. First, use fairnessMetrics to evaluate the fairness of a data set or classification model using bias and group metrics. Then, use fairnessWeights to reweight observations, or use disparateImpactRemover to remove the disparate impact of a sensitive attribute.
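For example, the following sketch evaluates fairness at both the data level and the model level. The table tbl, its variables Gender and Approved, and the choice of metric are illustrative assumptions, not fixed by the toolbox.

% Data-level fairness: compare the true labels across groups of the
% hypothetical sensitive attribute Gender.
evaluator = fairnessMetrics(tbl,"Approved", ...
    SensitiveAttributeNames="Gender");
report(evaluator)                              % table of bias metrics
plot(evaluator,"StatisticalParityDifference")  % bar graph of one metric

% Model-level fairness: compare a trained classifier's predictions
% across the same groups.
mdl = fitctree(tbl,"Approved");
evaluatorModel = fairnessMetrics(tbl,"Approved", ...
    SensitiveAttributeNames="Gender",Predictions=predict(mdl,tbl));
report(evaluatorModel)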

The fairnessWeights and disparateImpactRemover functions provide preprocessing techniques that allow you to adjust your training data before training (or retraining) a classifier: fairnessWeights computes new observation weights, and disparateImpactRemover transforms the continuous predictor values. To assess the model behavior after training, you can use the fairnessMetrics function as well as various interpretability functions. For more information, see Interpret Machine Learning Models.
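The following sketch shows both preprocessing options on the same hypothetical data, then reassesses fairness after retraining. The predictor names Income and CreditScore and the new-data table tblNew are assumptions for illustration.

% Option 1: reweight observations, then train with the new weights.
weights = fairnessWeights(tbl,"Gender","Approved");
mdlWeighted = fitctree(tbl,"Approved",Weights=weights);

% Option 2: transform the continuous predictors to remove the disparate
% impact of the sensitive attribute, then train without that attribute.
[remover,tblTransformed] = disparateImpactRemover(tbl,"Gender");
mdlTransformed = fitctree(tblTransformed,"Approved", ...
    PredictorNames=["Income" "CreditScore"]);

% Apply the same transformation to new observations before predicting.
labels = predict(mdlTransformed,transform(remover,tblNew));

% Reassess the fairness of a retrained model.
evaluatorAfter = fairnessMetrics(tbl,"Approved", ...
    SensitiveAttributeNames="Gender", ...
    Predictions=predict(mdlWeighted,tbl));
report(evaluatorAfter)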

Functions

fairnessMetrics           Bias and group metrics for a data set or classification model
report                    Generate fairness metrics report
plot                      Plot bar graph of fairness metric
fairnessWeights           Reweight observations for fairness in binary classification
disparateImpactRemover    Remove disparate impact of sensitive attribute
transform                 Transform new predictor data to remove disparate impact

Topics