The Use of Responsible Artificial Intelligence Techniques in the Context of Loan Approval Processes

INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION (2023)

Cited by 7 | Views 22
Abstract
Despite existing skepticism about the use of automatic systems in contexts where human knowledge and experience are considered indispensable (e.g., granting a mortgage, predicting stock prices, or detecting cancers), our work aims to show how the use of explainability and fairness techniques can increase a domain expert's trust in and reliance on an artificial intelligence (AI) system. This article presents a system, applied to the context of loan approval processes, that focuses on these two ethical principles out of the four defined by the High-Level Expert Group on AI in "Ethics Guidelines for Trustworthy AI" (April 2019), which identifies the key requirements that AI systems should meet to be considered trustworthy. The presented case study is realized within a proprietary framework composed of several components that support the user throughout the management of the whole life cycle of a machine learning model. The main approaches, which consist of providing an interpretation of the model's outputs and monitoring the model's decisions to detect and react to unfair behaviors, are described in detail so that our system can be compared with state-of-the-art frameworks. Finally, a novel Trust & Reliance Scale is proposed for evaluating the system, and a usability test is performed to measure user satisfaction with the effectiveness of the developed user interface; the results are obtained, respectively, by administering the novel scale to bank domain experts and the usability questionnaire to a heterogeneous group of loan officers, data scientists, and researchers.
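To make the two approaches named in the abstract concrete, the following is a minimal illustrative sketch (not the paper's proprietary framework): it trains a simple loan-approval model on synthetic data, produces a local explanation of one decision via per-feature contributions, and monitors the approval-rate gap between two groups as a fairness check. The feature names, the protected attribute, and the parity threshold are all assumptions introduced for illustration.

# Illustrative sketch only; all data, feature names, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic applicants: scaled income, debt ratio, and a binary protected attribute.
income = rng.normal(50_000, 15_000, n)
debt_ratio = rng.uniform(0.0, 1.0, n)
group = rng.integers(0, 2, n)  # hypothetical protected attribute
approve = ((income / 100_000) - debt_ratio + rng.normal(0, 0.2, n) > 0).astype(int)

X = np.column_stack([income / 100_000, debt_ratio])
X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, approve, group, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 1) Explainability: for a linear model, per-feature contributions to the logit
#    provide a simple local explanation of a single loan decision.
x0 = X_test[0]
contributions = model.coef_[0] * x0
for name, c in zip(["income (scaled)", "debt_ratio"], contributions):
    print(f"{name}: contribution to logit = {c:+.3f}")

# 2) Fairness monitoring: demographic parity difference between groups;
#    a large gap would flag the model's recent decisions for review.
pred = model.predict(X_test)
gap = abs(pred[g_test == 0].mean() - pred[g_test == 1].mean())
print(f"Approval-rate gap between groups: {gap:.3f}")
if gap > 0.1:  # illustrative threshold
    print("Parity gap exceeds threshold -- flag for review.")

In the system described by the paper, analogous explanation and monitoring components are integrated into the model life-cycle framework and surfaced to loan officers through the evaluated user interface.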
Keywords
responsible artificial intelligence techniques, loan, artificial intelligence