By Andreas Bøe

The Act of Harming an AI

We live in a time when artificial intelligence is becoming increasingly sophisticated. But what happens if AI evolves to the point where we can create robots that look and behave like humans? To what extent will humans be morally permitted to exploit or hurt these robots? Work on international AI legislation is under way. In this essay, I consider the philosophy of harming robots and discuss the need for international laws on the matter.


The Ethical Dilemma – To Kill a Robot


Let me present to you the following dialogue from the TV series Westworld:

William: “You just killed an innocent man.”
Logan: “No, he’s a robot.”

If developments in AI allow us to create thinking and feeling robots in the future, what moral obligations will we have not to inflict harm on them?




Historically, an individual’s rights have primarily depended on their attributes and biology. People with dark skin were enslaved, considered things rather than human beings. Today, it is evident to most of us that the color of our skin is entirely irrelevant to rights and moral obligations.


The “innocent man” may be a robot, but does this make him morally irrelevant? He behaves like a human, feels pain like a human, and believes he is a human.

Just as we treat an ant differently than we treat a dog, it can be argued that mental capacity determines where most of us draw the moral line. While no one questions people who kill ants for pure convenience, it is generally not considered morally acceptable in Western societies to kill a dog. This logic would, however, also imply that Elon Musk’s well-being matters more than an ordinary person’s. Lack of mental capacity cannot justify harming certain groups in a society.


Can we justify harming robots by pointing to their lack of free will? Doubtfully. We draw no moral distinction between people of low cognitive ability or sophistication and ourselves. Moreover, several scientists argue that not even humans have free will. If that is true, one could argue that humans, too, are simulated beings placed in a narrative.


Overall, I find no sufficient grounds to justify harming robots.

Laws and Norms – Status Quo and the Path Ahead


The difficulty in justifying harm to such robots makes it relevant to consider some form of legal protection for them. Last year I attended a lecture on this question at my university. The professor presented an experiment in which humans proved instinctively unwilling to harm robots. The participants were not only reluctant to harm the robot; some would even physically protect it. Even without any laws regulating it, our empathy makes us resistant to letting ourselves or others inflict harm on a robot. But although social norms are essential in a society, such sensitive and vital principles should be given the authority of law. The EU is currently working on an AI act, but it does not appear that it will contain any regulations on behavior toward AIs.




The need for legal protection for AIs seems clear: partly because of the ethics of hurting them, as discussed above, and partly because of the dehumanization involved in harming something that so strongly resembles a human being. At a minimum, it seems reasonable to adopt a regulation stating that humans must not deliberately cause severe harm to an AI. Of course, there must be exceptions, as in comparable laws protecting humans and animals, where harm can be justified if the other party (here, the AI) threatened or caused harm first. Including harm to AIs in EU legislation seems like a natural place to start.
