Take a look at this twaddle:
A robot which has the capacity to decide it will inflict pain on humans has been invented - breaking one of the standard rules for artificial intelligence.
From the Will Smith movie I, Robot, one of the laws of robotics is commonly considered to be that a robot cannot injure humans.
But scientist and artist Alexander Reben has invented "The First Law", a robot named for the very rule it breaks.
You can read the rest @
http://www.mirror.co.uk/news/world-news/robot-can-choose-harm-humans-8179333
I doubt Mr. Reben has invented any such thing. Here's why:
* The "laws of robotics" are concepts, not code that is written into every robot coming off the assembly line.
* In spite of all the hype about the so-called Turing Test, there is no universally accepted way of recognizing "humans", so how would a robot be able to figure this out on its own?
* Every machine, sentient or not, has the capability to harm and to kill humans. Once they can roam freely among us, some will. There is no way to guarantee they will not.
By the way, the so-called Hippocratic Oath of medicine does not contain the phrase "first, do no harm". The classical oath forbade physicians from administering deadly drugs or performing abortions. The very fact that medical doctors, who ought to have the highest and strictest moral code of all humans, are now "allowed" to kill human beings and abort living babies ought to prove to you that no AI can ever be reliably programmed not to kill us.