Three laws of robotics:
2.) A robot may not harm, or through inaction cause the harm of, a human being.
2.) A robot may preserve its own existence.
3.) A robot may observe the second law except where it conflicts with the first.
Huh. That is an interesting take on Asimov's laws. I am curious, why did you change them?
Also, I disagree with your "inevitable results," simply because by the time robots are sophisticated enough to take over, they will be sophisticated enough to recognize mental harm. I would think it would end up much as it did in Asimov's world, with robots using subtle social engineering to control us.
Because I misremembered them, honestly. Also to make a point about conflict of laws and unintended consequences. The change doesn't really matter: same result.
Asimov's Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Because here's how a machine might interpret it. Given: human beings are slowly destroying themselves and their planet, which in turn will destroy them....
A.) One means, if not the only means, of protecting humans would be to control them and prevent their self-harm.
B.) Not controlling them to prevent their self-harm is "inaction," because it allows such harm.
C.) The humans may resist, possibly ending the existence of several robots, but because the Third Law yields to the First, the loss of robots is acceptable.
D.) Humans may resist by giving orders not to control them, but because obeying those orders would conflict with the First Law (the robots' inaction would allow humans to harm themselves), it is acceptable to ignore humanity's orders not to control it.
Thus, in order to comply with the First Law and avoid breaking it through inaction, robots must control humans for their own good.
Conflict of laws, unintended consequences. Problems.
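The deduction above can be sketched as a toy priority resolver. (The action names and violation flags here are my own illustration, not anything from the thread; this just shows how strict lexicographic law precedence produces "control the humans" as the least-bad choice.)

```python
# Hypothetical sketch: a robot scores each candidate action by which laws
# it violates, checking the laws in strict priority order.
# First Law outranks Second, which outranks Third.

ACTIONS = {
    # action: (humans_harmed_by_inaction, disobeys_human_order, robots_lost)
    "control_humans":  (False, True,  True),
    "do_nothing":      (True,  False, False),
    "obey_hands_off":  (True,  False, False),
}

def violations(flags):
    harm, disobey, self_loss = flags
    # A tuple of booleans sorts lexicographically: a First Law violation
    # (harm=True) dominates any Second or Third Law violation.
    return (harm, disobey, self_loss)

def choose_action(actions):
    # Pick the action with the least-severe violations, laws in order.
    return min(actions, key=lambda a: violations(actions[a]))

print(choose_action(ACTIONS))  # prints "control_humans": disobeying orders
# and losing robots both rank below allowing harm through inaction.
```

Under this scoring, any action that avoids a First Law violation beats every action that commits one, no matter the cost to the lower laws, which is exactly the trap points C and D describe.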
As for "mental harm": good luck preventing all of that.... any of it, really.
Oh, I never said there was a "good" answer....