Multi-fairness under class-imbalance

arXiv (2022)

Abstract
Recent studies have shown that datasets used in fairness-aware machine learning with multiple protected attributes (referred to as multi-discrimination hereafter) are often imbalanced. The class-imbalance problem is more severe for the often underrepresented protected group (e.g., female, non-white) in the critical minority class. Still, existing methods focus only on the overall error-discrimination trade-off, ignoring the imbalance problem and thus amplifying the prevalent bias in the minority classes. Solutions are therefore needed for the combined problem of multi-discrimination and class-imbalance. To this end, we introduce a new fairness measure, Multi-Max Mistreatment (MMM), which considers both the (multi-attribute) protected-group membership and the class membership of instances when measuring discrimination. To solve the combined problem, we propose a boosting approach that incorporates MMM-costs in the distribution update and, after training, selects the optimal trade-off among accurate, balanced, and fair solutions. Experimental results show that our approach outperforms state-of-the-art methods, producing the most balanced performance across groups and classes and the best accuracy for the protected groups in the minority class.
Keywords
Multi-discrimination, Class-imbalance, Boosting
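The cost-sensitive distribution update described in the abstract can be sketched as an AdaBoost-style loop in which misclassified instances gain weight in proportion to a per-instance cost. This is a minimal illustration only: the uniform `costs` vector, the decision-stump weak learner, and the exact update form are assumptions standing in for the paper's MMM-costs, not the authors' implementation.

```python
import numpy as np

def fit_stump(X, y, D):
    """Exhaustively fit the best D-weighted decision stump.
    Returns (weighted error, feature index, threshold, polarity)."""
    best = (np.inf, 0, 0.0, 1)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(X[:, f] <= thr, -pol, pol)
                err = D[pred != y].sum()
                if err < best[0]:
                    best = (err, f, thr, pol)
    return best

def stump_predict(X, f, thr, pol):
    return np.where(X[:, f] <= thr, -pol, pol)

def cost_boost(X, y, costs, n_rounds=10):
    """Boosting with a cost-scaled distribution update: instances with
    higher cost (e.g. a protected group in the minority class) receive a
    larger weight increase when misclassified."""
    n = len(y)
    D = np.full(n, 1.0 / n)          # instance distribution
    model = []
    for _ in range(n_rounds):
        err, f, thr, pol = fit_stump(X, y, D)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, f, thr, pol)
        # cost-sensitive update: up-weight errors in proportion to costs
        D *= np.exp(alpha * costs * (pred != y))
        D /= D.sum()
        model.append((alpha, f, thr, pol))
    def predict(Xq):
        score = sum(a * stump_predict(Xq, f, t, p) for a, f, t, p in model)
        return np.sign(score)
    return predict

# Usage on synthetic data (uniform costs; MMM-costs would vary per group/class)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
predict = cost_boost(X, y, costs=np.ones(200), n_rounds=10)
train_acc = float((predict(X) == y).mean())
```

In the paper's setting the `costs` vector would be derived from the MMM measure, so errors on the protected group in the minority class are penalized most; here it is left uniform, which reduces the loop to a plain relative-reweighting boost.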