The Flawed Foundations of Fair Machine Learning
Robert Lee Poe (ESR 14) and Soumia Zohra El Mestari (ESR 15) authored “Borders Between Unfair Commercial Practices and Discrimination in Using Data.” Having initially investigated algorithmic fairness and discrimination in their Crossroad “Trust in Data Processing and Algorithmic Design,” Robert and Soumia narrowed the WOPA’s subject matter to an in-depth analysis of particular fair machine learning strategies used in practice to purportedly ensure non-discrimination and fairness in automated decision-making systems.

The intersection of algorithmic unfairness and non-discrimination law is the focal point of Robert’s Ph.D. research, specifically the legality of using fair machine learning techniques in automated decisions (hiring, admissions, loan decisions, etc.) from both a European Union and a United States legal perspective. Soumia’s Ph.D. research focuses on implementing privacy-preserving techniques as constraints to be enforced in complex machine learning pipelines in order to achieve trustworthy processing. She also investigates the gap between data protection legislation and trustworthy machine learning implementations, and how the different components of trustworthiness, such as privacy, robustness, and fairness, interact. Studying the dynamics of these interactions offers a better understanding of how a trustworthy machine learning pipeline should be implemented, exposed as a service, and interpreted under the different legal instruments.

The WOPA studies one such interaction, namely between robustness (measured as accuracy) and fairness (measured as group similarity of outcomes), and how prioritizing one of the two affects the other under different data distributions. Its main contribution is the clarity provided by a conceptual and empirical understanding of the trade-off between statistically accurate outcomes (robust) and group-similar outcomes (fair).
While that distinction is not a legal one, it has many implications for non-discrimination law, and further research in that direction is needed; specific suggestions are given in the conclusion of the article.
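The accuracy/group-similarity trade-off at the heart of the WOPA can be illustrated with a minimal sketch. All numbers, group names, and helper functions below are illustrative assumptions, not taken from the working paper: on toy data where the positive base rate differs between two groups, even a perfectly accurate predictor exhibits a group disparity in outcomes, and closing that gap by flipping predictions necessarily lowers accuracy.

```python
# Toy data: (group, true label). Group A has a 60% positive base rate,
# group B has 30%, so group disparities exist by construction.
data = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70

def positive_rate(preds, data, group):
    """Share of positive predictions within one group."""
    idx = [i for i, (g, _) in enumerate(data) if g == group]
    return sum(preds[i] for i in idx) / len(idx)

def accuracy(preds, data):
    """Fraction of predictions matching the true labels."""
    return sum(p == y for p, (_, y) in zip(preds, data)) / len(data)

# A perfectly accurate predictor simply echoes the true labels...
perfect = [y for _, y in data]
acc = accuracy(perfect, data)                                   # 1.0
gap = positive_rate(perfect, data, "A") - positive_rate(perfect, data, "B")
# ...yet it violates group similarity: gap = 0.6 - 0.3 = 0.3.

# Enforcing equal positive rates means flipping 30 of group B's
# negative predictions to positive, which must cost accuracy.
fair = list(perfect)
flipped = 0
for i, (g, y) in enumerate(data):
    if g == "B" and y == 0 and flipped < 30:
        fair[i] = 1
        flipped += 1

fair_gap = positive_rate(fair, data, "A") - positive_rate(fair, data, "B")  # 0.0
fair_acc = accuracy(fair, data)   # 0.85: 30 of 200 predictions are now wrong
```

The point of the sketch is that the gap between `acc` and `fair_acc` is fixed by the disparity in the data itself, not by any modelling choice, which is the sense in which the paper treats the trade-off as an external constraint.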
Abstract of the Working Paper
The definition and implementation of fairness in automated decisions have been extensively studied by the research community. Yet fallacious reasoning, misleading assertions, and questionable practices hide at the foundations of the current fair machine learning paradigm. These flaws result from a failure to understand that the trade-off between statistically accurate outcomes and group-similar outcomes exists as an independent, external constraint rather than as a subjective manifestation, as has been commonly argued. First, we explain that only one conception of fairness is present in the fair machine learning literature: group similarity of outcomes based on a sensitive attribute, where the similarity benefits an underprivileged group. Second, we show that there is, in fact, a trade-off between statistically accurate outcomes and group-similar outcomes in any data set where group disparities exist, and that this trade-off presents an existential threat to the equitable fair machine learning approach. Third, we introduce a proof-of-concept evaluation to aid researchers and designers in understanding the relationship between statistically accurate outcomes and group-similar outcomes. Finally, we provide suggestions for future work aimed at data scientists, legal scholars, and data ethicists who would utilize the conceptual and experimental framework described throughout this article.