A place where I yell my thoughts into the void, and hope someone will hear me.


I recently watched The Wild Robot in the cinema, and it was a good film I’d recommend. But today I got to thinking about a possible conflict in the Rozzum robots we see in the film.

Rozzum units are programmed to complete their assigned task with maximum efficiency. They also have an inhibitor that stops them from deliberately harming a living thing. How is this an issue? In an ideal situation, it isn’t. But what about a less-than-ideal situation?

Say a Rozzum ends up in an abusive situation. A house with a bratty, bullying kid who’s always harassing it, making things needlessly hard. The Rozzum asks the parents to stop the child, but they either can’t or just won’t, even when it happens right in front of them.

Oh well, guess the Rozzum will just have to suck it up and keep working regardless. But then the child’s actions result in it only completing tasks sub-par, or worse, not at all. That’s unacceptable; a Rozzum always completes its task.

The family starts to complain, and the Rozzum has a problem. In order for its tasks to be completed to standard, the child needs to stop bullying it, but the child won’t listen to the Rozzum and the parents aren’t helpful. So the Rozzum will have to stop the child itself.

But the only way to do that would be to harm the child, and a Rozzum’s programming forbids harming a living thing. Yet if it doesn’t harm the child, its tasks aren’t completed satisfactorily, if at all.

Do you see the problem here? Its own programming is at war with itself, and when that happens, what will the Rozzum do? Will it do no harm and allow its tasks to be hindered? Will it harm in self-defense to be more efficient? Either way, it’s failing. And what might happen when the programming fails?