October 7, 2018
I just noticed this recent paper on SSRN that had a title too good not to blog. The paper is authored by Ying Hu, and here is its abstract:
When a robot harms humans, are there any grounds for holding it criminally liable for its misconduct? Yes, provided that the robot is capable of making, acting on, and communicating the reasons behind its moral decisions. If such a robot fails to observe the minimum moral standards that society requires of it, labeling it as a criminal can effectively fulfill criminal law’s function of censuring wrongful conduct and alleviating the emotional harm that may be inflicted on human victims.
Imposing criminal liability on robots does not absolve robot manufacturers, trainers, or owners of their individual criminal liability. The former is not rendered redundant by the latter. It is possible that no human is sufficiently at fault in causing a robot to commit a particular morally wrongful action. Additionally, imposing criminal liability on robots might sometimes have significant instrumental value, such as helping to identify culpable individuals and serving as a self-policing device for individuals who interact with robots. Finally, treating robots that satisfy the above-mentioned conditions as moral agents appears much more plausible if we adopt a less human-centric account of moral agency.
The article does not discuss sentencing until its very end, but this paragraph covers robot punishment possibilities:
Assuming we can punish robots, a new question naturally follows: how should a robot be punished? In this regard, a range of measures might be taken to ensure that the robot commits fewer offenses in the future. These include:
a. physically destroying the robot (the robot equivalent of a “death sentence”);
b. destroying or rewriting the moral algorithms of the robot (the robot equivalent of a “hospital order”);
c. preventing the robot from being put to use (the robot equivalent of a “prison sentence”); and/or
d. ordering fines to be paid out of the insurance fund (the robot equivalent of a “fine”).
In addition, the unlawful incident can be used to design a training module to teach other smart robots the correct course of action in that scenario.
And how far are we from needing to consider such a case? As far as I am aware, AI really hasn't progressed very far down that road. Specialty systems (such as game playing or question answering), sure, but nothing like a general understanding that a world exists and that the robot exists within it.
Posted by: Soronel Haetir | Oct 7, 2018 9:45:57 PM