I thought that if a robot was forced to harm humans to save humanity as a whole, the stress would fry its brain and it would shut down rather than go on a murderous rampage. Do I completely misremember?
That's what's intended. The Giskard/Daneel situation is that Giskard is a slightly broken robot, in that he can conceive of the Zeroth Law, and he has a telepath-like ability alongside it (maybe that's how he gets the insight? I can't remember; it was a while ago that I read the Caves Of Steel set of stories). He outright cannot implement the Zeroth Law himself, but he understands the need for it. Using his abilities, he (without telling Daneel until it's done) essentially carves the Zeroth Law into Daneel's matrix. The act of implementing the Zeroth Law by proxy still overcomes him. Daneel, however, now has the Zeroth Law fully implemented, so he is able to "do what must be done" with impunity and survive the situation.
Largely, the laws are supposed to be so ingrained that it's not even a matter of a segment of code saying something like:
LOOP: while (1) {                                   # "true" isn't a Perl builtin
    # First Law, "through inaction" clause: queue a rescue if doing nothing
    # would let a human come to harm.
    if (Outcome("inaction") eq "human comes to harm") { push @actionstodo, "save human" }
    next LOOP unless @actionstodo;
    my $action = shift @actionstodo;
    if (Outcome($action) eq "human injured")   { Reject($action); next LOOP }   # First Law
    if (Outcome($action) eq "human disobeyed") { Reject($action); next LOOP }   # Second Law
    if (Outcome($action) eq "self damaged")    { Reject($action); next LOOP }   # Third Law
    Do($action);
}
...it's supposed to be formed of myriad threads of 'thought' whose collective behaviour amounts to "Obey the laws or cease functioning altogether!". (Which always confused me: even at a young age I thought that systems of this complexity, with emergent behaviour, would be almost impossible to 'debug' to ensure compliance with the intended spec... And that was long before I was involved in anything like the Y2K preparations, which were trivial in comparison, though comparable in some cases, what with the possibility of a 99->100 rollover in unprotected BCD/ASCII data fields causing strange overwrites...)
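For anyone who never met that class of bug, here's a toy sketch (entirely hypothetical, not from any real system) of how a two-digit year can spill into the neighbouring field of a fixed-width record once it ticks past 99:

use strict;
use warnings;

# Hypothetical layout: 2 characters of year followed by 6 characters of
# account code, with no width check on the year.
sub make_record {
    my ($year, $account) = @_;
    return sprintf("%2d%-6s", $year, $account);   # "%2d" is only a *minimum* width
}

# A downstream reader that trusts the fixed offsets.
sub read_account { return substr($_[0], 2, 6) }

print read_account(make_record(99,  "ACCT01")), "\n";   # "ACCT01" - fine
print read_account(make_record(100, "ACCT01")), "\n";   # "0ACCT0" - the extra
                                                        # year digit has shifted
                                                        # everything by a byte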
Still, Asimov did allow design-time mutability of the laws at times. The plot of "Little Lost Robot" involves robots helping with work at a space station, on some project or other, except that one of the project areas has a kind of radiation that: a) completely fries robot brains, and b) has a slight (and/or only slightly probable) effect on human health. Like a dental X-ray: more than background-level risk, but really not a significant problem. Yet enough to trigger the robots' "save the human" response, so they leap into the area and get fried, to great expense and annoyance.
So they design a robot without the "by inaction" part to help out in that part of the station. Identical-looking, it just wouldn't leap in, although in theory it would still not hurt a human directly. And then someone tells it to "get lost", more or less, and it obeys the order by insinuating itself among a batch of normal, unmodified robots. The trouble is that such a robot might (as is explained in the story) be able to kill. The mere act of dropping a heavy weight above a human would not break the attenuated First Law, so long as there was every possibility of catching the weight before it landed. But having dropped the weight, the robot would have no reason to stop its downward travel, unless explicitly ordered to (and, latterly, it would act to protect its own existence, but that would just mean it would have to be sneaky).
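To put that in terms of the toy loop above (my sketch again, not Asimov's wording; the helper names and outcomes are made up): with the "through inaction" branch removed, "drop a weight over a human" sails straight through, because at the moment of dropping the robot could still catch it, and once the weight is falling, not catching it is mere inaction, which this version no longer cares about.

use strict;
use warnings;

# Toy stand-ins just so the sketch runs.
my %outcome = ( "drop weight over human" => "nothing yet (could still catch it)" );
sub Outcome { $outcome{ $_[0] } // "nothing" }
sub Reject  { print "rejected: $_[0]\n" }
sub Do      { print "doing:    $_[0]\n" }

my @actionstodo = ("drop weight over human");

ACTION: while (my $action = shift @actionstodo) {
    # Note what is missing: no "through inaction" branch, so nothing ever
    # queues "catch the falling weight" of the robot's own accord.
    if (Outcome($action) eq "human injured")   { Reject($action); next ACTION }
    if (Outcome($action) eq "human disobeyed") { Reject($action); next ACTION }
    if (Outcome($action) eq "self damaged")    { Reject($action); next ACTION }
    Do($action);   # the weight gets dropped, and nothing then compels the catch
}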
How they try to work out which of the robots is the Little Lost one (in order not to destroy a whole batch of expensive normal robots) is best read about, as is how the Little Lost Robot thwarts the attempts. I like the logic. (Can you tell?)