Detecting Careless Responses in Dataset Annotation Using Screen Operation Logs

Annual IEEE International Conference on Pervasive Computing and Communications (2024)

Abstract
Annotation tasks can be conducted through crowdsourcing to gather training data for machine learning at reduced cost. However, the quality of the collected data can vary significantly, in part because careless workers may rush through tasks to maximize their earnings. This study proposes a real-time method for detecting such careless responses during annotation tasks. The method leverages features such as cursor movement and response time, captured from screen interactions during the task. In this paper, we evaluate the accuracy of a careless-response estimation model that employs a machine learning approach and identify the features that contribute most to it. Through an experiment with 61 participants, we confirmed that the proposed method achieves an accuracy of 0.738. We also found that the number of labels used, the average cursor movement, and the number of clicks per label assignment are crucial features for this classification.
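The abstract outlines a supervised pipeline: behavioral features extracted from screen operation logs are fed to a classifier that flags careless responses. The sketch below is a minimal illustration of that setup, not the authors' implementation: the paper does not publish its code, and the random-forest choice, feature names, and placeholder data are all assumptions made here for clarity.

```python
# Hypothetical sketch only. The classifier choice, feature names, and data
# below are illustrative assumptions, not the paper's actual implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# One row per annotation-task instance; the columns mirror the
# screen-operation features named in the abstract (placeholder values).
feature_names = [
    "num_labels_used",      # distinct labels assigned in the task
    "avg_cursor_movement",  # mean cursor travel distance (e.g., pixels)
    "clicks_per_label",     # clicks per label assignment
    "response_time",        # time taken on the task (e.g., seconds)
]
X = np.array([
    [5, 320.4, 2.1, 14.2],  # example attentive response
    [1,  45.0, 1.0,  2.3],  # example rushed response
])
y = np.array([0, 1])        # 0 = attentive, 1 = careless (ground truth)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Feature importances indicate which behavioral signals drive the decision,
# analogous to the paper's analysis of which features matter most.
for name, importance in zip(feature_names, clf.feature_importances_):
    print(f"{name}: {importance:.3f}")
```

With real logged data, one would evaluate such a model with held-out or cross-validated accuracy, which is presumably how a figure like the paper's 0.738 would be obtained.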
Keywords
Annotation, Crowdsourcing, Detection of Careless Responses, Machine Learning