Cunneen, Martin and Mullins, Martin and Murphy, Finbarr (2019) Autonomous Vehicles and Embedded Artificial Intelligence: The Challenges of Framing Machine Driving Decisions. Applied Artificial Intelligence, 33 (8). pp. 706-731. ISSN 0883-9514
Autonomous Vehicles and Embedded Artificial Intelligence The Challenges of Framing Machine Driving Decisions.pdf - Published Version
Download (2MB)
Abstract
With the advent of autonomous vehicles, society will need to confront a new set of risks which, for the first time, includes the ability of socially embedded forms of artificial intelligence to make complex risk mitigation decisions: decisions that will ultimately engender tangible life and death consequences. Since AI decisionality is inherently different from human decision-making processes, questions arise regarding how AI weighs decisions, how we are to mediate these decisions, and what such decisions mean in relation to others. Society, policy makers, and end-users therefore need to fully understand such differences. While AI decisions can be contextualised to specific meanings, significant challenges remain in terms of the technology of AI decisionality, the conceptualisation of AI decisions, and the extent to which various actors understand them. This is particularly acute when analysing the benefits and risks of AI decisions. Because of their potential safety benefits, autonomous vehicles are often presented as significant risk mitigation technologies. There is also a need to understand the potential new risks that autonomous vehicle driving decisions may present. Such new risks are framed as decisional limitations, in that artificial driving intelligence will lack certain decisional capacities. This is most evident in its inability to annotate and categorise the driving environment in terms of human values and moral understanding. In both cases there is a need to scrutinise how autonomous vehicle decisional capacity is conceptually framed and how this, in turn, shapes a wider grasp of the technology in terms of risks and benefits. This paper interrogates the significant shortcomings in the current framing of the debate, both in terms of safety discussions and in the consideration of AI as a moral actor, and offers a number of ways forward.
| Item Type: | Article |
| --- | --- |
| Subjects: | Pustakas > Computer Science |
| Depositing User: | Unnamed user with email support@pustakas.com |
| Date Deposited: | 19 Jun 2023 10:25 |
| Last Modified: | 08 Dec 2023 05:03 |
| URI: | http://archive.pcbmb.org/id/eprint/817 |