You may have heard of the so-called Turing test, which is supposed to allow us to distinguish between a human and a robot/AI. If not, you can read about it here:
https://en.wikipedia.org/wiki/Turing_test
You also may have heard of the Three Laws of Robotics proposed by Isaac Asimov:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Once a robot/AI undeniably passes the Turing test, one could argue that it is eligible to be treated as a human. But doing so would pose a paradox: humans are not bound by these three laws (and in reality do NOT follow them), so why should the laws apply to a robot/AI deemed "human"?