
Killing of a robot

Sirona

Hindu Wannabe
I wondered whether I should put this thread under Philosophy, as I am considering writing something with a background in transhumanism. I know this board is not primarily about writing, but I've always appreciated this forum for its astounding diversity of viewpoints.

I am collecting arguments and opinions on the question: would the hypothetical killing of a (sufficiently designed and developed) robot/android be as reprehensible as, or less reprehensible than, the killing of a human? More specifically, the killing of a female robot in comparison to the killing of an actual woman?

Please give some references (also to pop culture) if you can. The only works dealing with the topic that I am familiar with are Blade Runner and the Star Trek: The Next Generation episode in which Data's legal status is debated in court.
 

SalixIncendium

अग्निविलोवनन्दः
Staff member
Premium Member
If you're interested in this stuff, you may want to watch The Animatrix.
 

Twilight Hue

Twilight, not bright nor dark, good nor bad.
In games, I try to protect all innocent electric lives, who may have an electric wife and children back home.
 

Nimos

Well-Known Member
The Netflix series Better Than Us also deals with this topic.

It's a really good question, I think.

It depends on how human-like they are; if they can make us care and feel connected to them beyond what we feel for a random device, then it will no doubt cause issues.

But do you remember those Tamagotchis that were popular in the mid-1990s (I think)?

[Image: a Tamagotchi]


People had to look after one of these, feeding it and so forth, and some started to really care for them. So it seems we can quickly get attached to almost anything; if a Tamagotchi looked and behaved like a human, I think it would be a lot worse, to be honest, in terms of how emotionally attached people would get to them.

Now, I never tried one of these myself, so I'm not sure whether, if it "died", it would be gone forever.
 

bobhikes

Nondetermined
Premium Member

There are several anime out there that tackle this issue. Plastic Memories does a pretty good job of it: the robots' AI breaks down after so many years and they can start killing people, so they need to be put down before this happens. Everyone who gets a robot must sign an agreement to allow this, but of course over the years they get attached to the robots, and some of the robots refuse to let it happen and then hit the breakdown point before they can be retrieved.

It's not a kill-or-be-killed anime; the team first tries to talk with the owners and the AI robots and settle disputes without violence.
 

Saint Frankenstein

Wanderer From Afar
Premium Member
Robots aren't self-aware or sentient, so they have no rights and we have no duties towards them. I highly doubt they ever will be, as I think transhumanism and this push to create self-aware AI are based on false assumptions.
 

Erebus

Well-Known Member
I've only watched the first season but Westworld is worth a look.

As for the question itself, I think you have to look at it based on a few possibilities regarding consciousness. For the sake of simplicity, I'll assume we're talking about a robot that looks and behaves exactly like a human. I'll also leave out the question of the degree to which killing a human is immoral (e.g. self-defence vs death penalty vs murder and so on) and whether killing non-human animals is as immoral as killing a human. This topic is complicated enough already.

As I see it, the three main scenarios are:

1. We're certain the robot has the same consciousness as a human.
2. We're uncertain as to whether the robot has the same consciousness as a human.
3. We're certain the robot is not conscious.

In the first instance, I would argue that killing the robot is just as immoral as killing a human. You're inflicting the same amount of harm in either case, with full knowledge of what you're doing.

In the second instance, I would argue that it's less immoral to kill the robot, but it is still something to be avoided. Killing a human in this case implies certain knowledge that you're inflicting harm and basing your decision on that knowledge. As the consciousness of the robot is unknown, you may or may not be inflicting harm and can't base your decision on certain knowledge. At the same time, though, you're taking a gamble on whether or not your actions are harmful.

In the last instance, I'd say it's certainly less immoral to kill the robot. Your actions aren't harming a conscious being and you're fully aware of that. I would however be very wary of the mental state of somebody capable of physically killing something that's outwardly identical to a human.

The problem with all of this is that if something behaves and reacts precisely like a human, I would say that we're pretty much looking at scenario No. 2 by default. In a sci-fi story you can get around this by having technology capable of determining whether or not something possesses consciousness if that's how you want to write it. In reality, it's not so easy.
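
To make the structure explicit, here is a rough Python sketch of how those three scenarios might map to a moral assessment. The function name and the labels are invented purely for illustration; this is just the shape of the argument, not a claim about how such a judgement could actually be computed.

```python
# A sketch of the three scenarios above; labels and wording are illustrative only.

def wrongness_of_killing(robot_consciousness):
    """Map what we know about the robot's consciousness to a moral assessment."""
    if robot_consciousness == "certainly_conscious":
        # Scenario 1: same harm, same knowledge -> as wrong as killing a human.
        return "as wrong as killing a human"
    if robot_consciousness == "uncertain":
        # Scenario 2: you are gambling on whether you inflict harm at all.
        return "less wrong, but still to be avoided"
    if robot_consciousness == "certainly_not":
        # Scenario 3: no conscious being is harmed, though the act is unsettling.
        return "far less wrong, though it says something about the actor"
    raise ValueError("unknown scenario")


for scenario in ("certainly_conscious", "uncertain", "certainly_not"):
    print(scenario, "->", wrongness_of_killing(scenario))
```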
 

Sirona

Hindu Wannabe
Thanks. Your ideas are much appreciated. I am already familiar with some of the works you suggested.

I was trying to figure out what I was looking for specifically. I found this:

https://www.smh.com.au/lifestyle/th...d-we-should-be-concerned-20170804-gxpsl3.html

Apparently, I wasn't up to date with regard to what is possible today.

So let me rephrase: Would you consider the destruction of a (more technically advanced) robot of that kind a reprehensible act?

Bonus question: has anyone seen the 1980s trash movie Cherry 2000, by chance?
 

MNoBody

Well-Known Member
A really good mimic is still, at the end of the day, a mimic.
Like an embezzler or any other such deceptive player, they mimic behaviour to form attachments and sentiments in their target, to gain sympathy and affection. But who cares about all the nice things such a being does to gain confidence, especially if it is merely programmed to be that way? They played their target falsely. Should those things count when deciding what to do about such predators?
 

ecco

Veteran Member
We tend to think of AI robots as having their "brain" in something that looks like a human head. However, more and more of the "thinking" is actually being done in the cloud (compare a Chromebook to a regular laptop).

If an AI robot's "brain" (memories, thoughts, and so on) is in the cloud, physically destroying any one specific AI/bot wouldn't really be killing it. If the physical entity gets destroyed, the brain could easily be transplanted into any other physical entity, even a car.
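
As a rough illustration of that split, here is a minimal Python sketch assuming a hypothetical design in which the body is a disposable shell and the "brain" (the stored state) lives elsewhere; all class and method names are invented for illustration.

```python
# Minimal sketch: the body is replaceable, the cloud-hosted brain persists.

class CloudBrain:
    """Holds the memories and learned state; exists independently of any body."""
    def __init__(self):
        self.memories = []

    def remember(self, event):
        self.memories.append(event)


class RobotBody:
    """A physical shell; destroying it does not touch the brain."""
    def __init__(self, brain, form="humanoid"):
        self.brain = brain
        self.form = form
        self.destroyed = False

    def experience(self, event):
        if not self.destroyed:
            self.brain.remember(event)


brain = CloudBrain()
first_body = RobotBody(brain, form="humanoid")
first_body.experience("met its owner")

first_body.destroyed = True                  # the physical entity is "killed"

second_body = RobotBody(brain, form="car")   # same brain, new shell
print(brain.memories)                        # -> ['met its owner']; nothing was lost
```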
 