The social production of technological autonomy COMMENT

Human–Computer Interaction (2022)

Abstract
The discussion of potential dangers brought about by intelligent machines can be traced back at least to Wiener (1960). However, it has never been more needed than it is now. Current technological developments make these dangers increasingly concrete and real, and so the paper by Hancock (this volume) is particularly timely. By systematically presenting and analyzing some of the key issues, problems, and approaches in the current discourse on autonomous agents, the paper does a valuable job in further engaging the HCI research community in the discourse. A key strength of the paper, in my view, is that it is apparently designed to invite comments, disagreements, and alternative perspectives. In this commentary, I reflect on a central theme in Hancock’s analysis, namely, the emergence of agents’ own intentions as a (presumably inevitable) result of the ongoing progress in artificial intelligence (AI). This is one of the most fascinating issues in the entire field of AI. The theme has not only become an object of academic debates, but has also made a massive impact on popular culture (as exemplified, for instance, by movies and TV series such as Blade Runner or Westworld). The question at the heart of the issue is: How and why can an AI system be transformed from a piece of human-controlled technology with constrained autonomy (limited to deciding how to perform the task assigned to it) to a fully autonomous agent, acting on its own intentions? Current attempts to envision a future in which fully autonomous AI systems become a reality often gloss over the specific causes and mechanisms of such a transformation. In some cases, e.g., in “slave uprising” scenarios, it is implied that the transformation may happen because designers, when trying to create systems that are as similar to humans as possible, fall victim, often literally, to their own success.
At the most basic level, the underlying assumption appears to be that the increasingly advanced cognitive capabilities of a technology – even if they are only used when acting on someone or something else’s intentions – eventually lead to the development of self-awareness, which, in turn, gives rise to full autonomy. Hancock outlines a particular perspective on how agents’ full autonomy can be expected to develop. According to this perspective, dubbed “isles of autonomy,” the path to full autonomy starts with the emergence of isolated technologies having constrained autonomy, such as autonomous vehicles or autopilots. Each of these isles, when young and unstable, is initially surrounded and supported by human attendants, who take care of them (similarly to taking care of “prematurely born neonates”). Over time, the isles grow and eventually merge into a fully autonomous system. This perspective, even if rather metaphorical, potentially provides useful guidance for thinking about autonomous agents. However, the perspective does not clarify why and how exactly constrained autonomy transforms into full autonomy over the course of the described development. Arguably, the entire development may, in principle, take place without ever progressing to full autonomy. First, when an isle expands and the technology in question becomes less dependent on human support and maintenance, the autonomy of that technology does not necessarily become less constrained, because its tasks may still be assigned to it by someone or something else. For instance,