Cyrus Cousins

Axiomatically Justified and Statistically Sound Fair Machine Learning

An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning

Cyrus Cousins

Abstract

We address an inherent difficulty in welfare-theoretic fair machine learning by proposing an alternative with equivalent axiomatic justification, and studying the resulting computational and statistical learning questions. Welfare metrics quantify overall wellbeing across a population of one or more groups, and welfare-based objectives and constraints have recently been proposed to incentivize fair machine learning methods to produce satisfactory solutions that consider the diverse needs of multiple groups. Unfortunately, many machine-learning problems are more naturally cast as loss minimization than as utility maximization, which complicates direct application of welfare-centric methods to fair machine learning. In this work, we define a complementary measure, termed malfare, measuring overall societal harm (rather than wellbeing), with axiomatic justification via the standard axioms of cardinal welfare. We then cast fair machine learning as malfare minimization over the risk values (expected losses) of each group. Surprisingly, the axioms of cardinal welfare (and hence of malfare) dictate that this is not equivalent to simply defining utility as negative loss. Building upon these concepts, we define fair-PAC learning, where a fair-PAC learner is an algorithm that learns an ε-δ malfare-optimal model with bounded sample complexity, for any data distribution and any axiomatically justified malfare concept. Finally, we show broad conditions under which, with appropriate modifications, standard PAC learners may be converted to fair-PAC learners. This places fair-PAC learning on firm theoretical ground, as it yields statistical and computational efficiency guarantees for many well-studied machine-learning models, and it is also practically relevant, as it democratizes fair machine learning by providing concrete training algorithms and rigorous generalization guarantees for these models.
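The abstract does not spell out a concrete malfare function, but a standard family satisfying the cardinal-welfare axioms is the weighted power mean of per-group risks. The sketch below is illustrative only (the function name and interface are invented here, not taken from the paper): p = 1 recovers the utilitarian average risk, and p → ∞ approaches the egalitarian worst-group risk.

```python
import math

def power_mean_malfare(risks, weights, p):
    """Illustrative malfare aggregator: weighted p-power mean of group risks.

    risks   -- per-group expected losses, each in [0, 1]
    weights -- per-group weights, summing to 1
    p       -- aggregation parameter, p >= 1 (larger p is more egalitarian)
    """
    assert len(risks) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    if math.isinf(p):
        # Limit p -> infinity: malfare is the worst (largest) group risk.
        return max(risks)
    return sum(w * r ** p for w, r in zip(weights, risks)) ** (1.0 / p)
```

Note that aggregating risks with p ≥ 1 is not the mirror image of aggregating utilities with the same p, which is one way to see the abstract's point that malfare minimization is not simply welfare maximization with utility defined as negative loss.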


Keywords

Fair Machine Learning ♦ Cardinal Welfare Theory ♣ PAC-Learning ♥ Computational Learning Theory ♠ Statistical Learning Theory


Read the full paper on arXiv

Read the (deliciously concise) NeurIPS 2021 conference paper


NeurIPS 2021



Presentation Recording


Slide Deck


EC 2021 Poster Session



Slide Deck


Uncertainty and the Social Planner’s Problem: Why Sample Complexity Matters

Cyrus Cousins

Abstract

Welfare measures overall utility across a population, whereas malfare measures overall disutility, and the social planner's problem can be cast either as maximizing the former or minimizing the latter. We show novel bounds on the expectations and tail probabilities of estimators of welfare, malfare, and regret of per-group (dis)utility values, where estimates are made from a finite sample drawn from each group. In particular, we consider estimating these quantities for individual functions (e.g., allocations or classifiers) with standard probabilistic bounds, and optimizing and bounding generalization error over hypothesis classes (i.e., quantifying overfitting) using Rademacher averages. We then study algorithmic fairness through the lens of sample complexity, finding that because marginalized or minority groups are often understudied, and fewer data are therefore available, the social planner is more likely to overfit to these groups, so even models that appear fair in training can be systematically biased against them. We argue that this effect can be mitigated by ensuring sufficient sample sizes for each group, and our sample complexity analysis characterizes these sizes. Motivated by these conclusions, we present progressive sampling algorithms to efficiently optimize various fairness objectives.
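The per-group sample-size argument can be made concrete with a standard Hoeffding bound. The sketch below is an invented illustration (not the paper's algorithm), assuming losses bounded in [0, 1] and an even union-bound split of the failure probability δ across groups: it returns a sample size per group sufficient for every group's empirical risk to lie within ε of its true risk simultaneously.

```python
import math

def per_group_sample_size(eps, delta, num_groups):
    """Hoeffding-based sketch: samples per group so that, with probability
    at least 1 - delta, every group's empirical risk is within eps of its
    true risk (losses in [0, 1]; delta split evenly across groups)."""
    delta_g = delta / num_groups  # union bound over groups
    return math.ceil(math.log(2.0 / delta_g) / (2.0 * eps ** 2))
```

Because the requirement applies to each group separately, a small minority group needs just as many samples as a large majority group to meet the same (ε, δ) guarantee, which is the abstract's point that uniform per-group sample sizes, rather than population-proportional ones, guard against overfitting to understudied groups.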


Keywords

Fair Machine Learning ♦ Cardinal Welfare Theory ♣ Minimax Fair Learning ♥ Multi-Group Agnostic PAC Learning ♠ Statistical Learning Theory


Read the extended paper

Read the FAccT 2022 conference paper


FAccT 2022 Poster


Videos

Watch the short video and presentation video


Fair E3: Efficient Welfare-Centric Fair Reinforcement Learning

Cyrus Cousins, Kavosh Asadi, & Michael L. Littman

Extended Abstract


Keywords

Fair Machine Learning ♦ Cardinal Welfare Theory ♣ Theoretical Reinforcement Learning ♥ Exploration ♠ Exploitation


RLDM 2022 Poster