L-q regularization for fair artificial intelligence robust to covariate shift

STATISTICAL ANALYSIS AND DATA MINING (2023)

Abstract
It is well recognized that training data often contain historical biases against certain sensitive groups (e.g., non-White people, women) that are socially unacceptable, and that these unfair biases are inherited by trained artificial intelligence (AI) models. Various learning algorithms have been proposed to remove or alleviate such unfair biases in trained AI models. In this paper, we consider another type of bias in training data, so-called covariate shift, from the viewpoint of fair AI. Covariate shift means that the training data do not represent the population of interest well. It occurs when a special sampling design (e.g., stratified sampling) is used to collect the training data, or when the population from which the training data are collected differs from the population of interest. Under covariate shift, AI models that are fair on training data may not be fair on test data. To ensure fairness on test data, we develop computationally efficient learning algorithms that are robust to covariate shift. In particular, we propose a robust fairness constraint based on the L-q norm, which yields a generic algorithm that can be applied to various fair AI problems with little modification. By analyzing multiple benchmark datasets, we show that our proposed robust fair AI algorithm substantially improves on existing fair AI algorithms in terms of the fairness-accuracy tradeoff under covariate shift, and has significant computational advantages over other robust fair AI algorithms.
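
The abstract does not spell out the exact form of the L-q fairness constraint, so the sketch below is only a hypothetical illustration of the general idea: aggregate per-sample demographic-parity contributions with an L_q norm, which (by Hölder's inequality) bounds the disparity under any data reweighting in the conjugate L_p ball, a simple proxy for covariate shift. All names (lq_fairness_penalty, train_fair_logreg), the logistic-regression model, and the finite-difference optimizer are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lq_fairness_penalty(scores, s, q=2.0):
    """Hypothetical L_q-norm fairness penalty (illustration only).

    Per-sample demographic-parity contributions are aggregated with an
    L_q norm; by Hölder's inequality this upper-bounds the reweighted
    disparity over importance weights in the conjugate L_p unit ball,
    which is one simple way to model robustness to covariate shift.
    """
    # Signed per-sample contribution: summing c recovers the usual
    # demographic-parity gap (mean score in group 1 minus group 0).
    c = np.where(s == 1,
                 scores / max(s.sum(), 1),
                 -scores / max((1 - s).sum(), 1))
    return np.linalg.norm(c, ord=q)

def train_fair_logreg(X, y, s, lam=1.0, q=2.0, lr=0.1, epochs=500):
    """Logistic regression with the illustrative L_q fairness penalty,
    optimized by finite-difference gradient descent (clarity over speed)."""
    d = X.shape[1]
    w = np.zeros(d)

    def loss(w):
        p = sigmoid(X @ w)
        bce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        return bce + lam * lq_fairness_penalty(p, s, q)

    eps = 1e-5
    for _ in range(epochs):
        base = loss(w)
        g = np.zeros(d)
        for j in range(d):
            w_eps = w.copy()
            w_eps[j] += eps
            g[j] = (loss(w_eps) - base) / eps
        w -= lr * g
    return w

if __name__ == "__main__":
    # Synthetic data: sensitive attribute s leaks into the label y.
    rng = np.random.default_rng(0)
    n, d = 400, 5
    X = rng.normal(size=(n, d))
    s = (rng.random(n) < 0.4).astype(int)
    y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0).astype(int)
    w = train_fair_logreg(X, y, s, lam=2.0, q=2.0)
    print("learned weights:", np.round(w, 3))
```

Larger q makes the penalty closer to the worst single-sample contribution (more conservative to reweighting), while q = 1 reduces it to the ordinary unweighted disparity; this tradeoff is the intuition behind using an L_q norm, though the paper's actual constraint may differ.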
Keywords
covariate shift,distributional robustness,fair AI,robust fairness