Unfairness in Machine Learning for Web Systems Applications

WebMedia '23: Proceedings of the 29th Brazilian Symposium on Multimedia and the Web (2023)

Abstract
Machine learning models are increasingly present in our society; many of these models are integrated into Web Systems and are directly related to the content we consume daily. Nonetheless, on several occasions, these models have been responsible for decisions that spread prejudice, or even decisions that, if made by humans, would be punishable. After several cases of this nature came to light, research topics such as Fairness in Machine Learning and Artificial Intelligence Ethics gained importance and urgency in our society. One way to make Web Systems fairer in the future is to show how they can currently be unfair. To support discussions and serve as a reference for cases of unfairness in machine learning decisions, this work organizes, in a single document, known decisions, wholly or partially supported by machine learning models, that propagated prejudice, stereotypes, and inequality in Web Systems. We define relevant categories of unfairness (such as Web Search and Deep Fake) and, when possible, present the solution adopted by those involved. Furthermore, we discuss approaches to mitigate or prevent discriminatory effects in machine-learning-based decision-making in Web Systems.