When I was thinking about how to describe an advanced, superintelligent artificial intelligence controlling the whole world in the future, I realized that laws and rules would be needed. That brought to mind the science fiction writer Isaac Asimov.
The best-known set of such laws is Isaac Asimov's "Three Laws of Robotics". These were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Near the end of his book Foundation and Earth, a Zeroth Law was introduced:
- 0. A robot may not injure humanity, or, by inaction, allow humanity to come to harm.
And it is this Zeroth Law that caught my attention. In my book, The Good Troll, I let the robots be the ones who determine the law. And I wrote it this way …
No intelligent life, biological or mechanical, may allow the Earth and its inhabitants to come to harm due to one’s own or another’s actions, directly or indirectly.
I think this one law covers everything. Nothing more is needed.
I understand if you scratch your head and start thinking of all the laws found in every nation's thick law books, not to mention all the religious texts.
But think about this "One Law" for a while.
I will return with more thoughts on this in part 2.