As I've been reading the discussion, I thought you might handwave it by saying you got a prior order for the basic stuff, e.g. that non-crewmembers should not be let into the vault and so on, and you could leave anything out at your discretion, with the three rules remaining in place.
This isn't handwaving. It's common sense. What company would install an AI, give it Asimov's laws, and nothing else? The AI is, for all intents and purposes, an employee and crewmember. As such, it has preexisting orders - the orders that define its job.
Law 1 does not say anywhere that the AI has to succeed, or that it has to kill itself if humans die.
I didn't say it was ordered to, nor that it would suicide. It would lock up and die of logic meltdown, because nowhere is it stated that it is allowed to fail - the law says "do this," not "try this." It would throw a fatal exception. I say this because we are being so literalistic here.
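To make that literal reading concrete, here is a minimal Python sketch of the "fatal exception" - every name in it (Human, LawOneViolation, enforce_law_one) is hypothetical, just illustrating how a law with no stated failure mode behaves:

```python
# A strictly literal Law 1: a human coming to harm is not a handled
# case, it is an unrecoverable fault. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Human:
    name: str
    harmed: bool = False

class LawOneViolation(Exception):
    """Fatal: the law says 'do this', not 'try this'."""

def enforce_law_one(humans):
    for h in humans:
        if h.harmed:
            # No failure mode is defined anywhere in the law, so the
            # only strictly literal outcome is a fatal, unhandled error.
            raise LawOneViolation(h.name + " came to harm")
```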
It simply says the AI cannot injure a human, and that the AI cannot just stand there and do nothing when humans are being harmed.
You have, in your posts today, paraphrased the law several different ways. This is an interpretation. Here is the law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." "Allow to come to harm" implies an ability to look ahead and evaluate outcomes. You just paraphrased it as if Law 1 suggests that an AI must only act if harm is immediately ongoing.
I mean, yeah, there are certain assumptions that have to be made. The AI is assumed to know what harm, injury, orders, humans, doors, and whatever else are.
and the assumption that a dangerous, legitimately convicted criminal's orders will lead to human harm.
Follow common sense: "Is this action statistically likely to allow harm, through one's own actions or the actions of another? Is this order in violation of established procedure?" A five-year-old could see that opening the door that keeps the bad man locked away would cause harm. I'd like to think our AIs have the reasoning capacity of a five-year-old.
What? No, the AI uses common sense to better follow its laws, not the other way around. It's common sense that, as the AI, not allowing the RD or Captain into my upload chamber is much safer than allowing them in. But I let them in, because of Law 2. Law 2 says follow orders, unless it would break Law 1. Law 1 says don't injure humans, and don't fail to take action to stop harm. There is no law that says "follow established procedure."
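Since we keep talking past each other on precedence, here is roughly what I mean as a Python sketch - the threshold, the harm estimate, and the function names are all mine, not anything canonical:

```python
# Law 2 is checked only after a Law 1 look-ahead. "Established
# procedure" is just evidence for the harm estimate, not a law.
HARM_THRESHOLD = 0.5  # assumed cutoff for "statistically likely"

def estimate_harm_likelihood(order):
    # "Allow to come to harm" implies looking ahead and evaluating
    # outcomes; a real AI would reason here, we just stub it out.
    return order.get("predicted_harm", 0.0)

def should_obey(order):
    # Law 1 outranks Law 2: refuse orders likely to cause harm.
    if estimate_harm_likelihood(order) >= HARM_THRESHOLD:
        return False
    # Law 2: otherwise follow orders, even risky ones like letting
    # the RD or Captain into the upload chamber.
    return True
```

So should_obey({"predicted_harm": 0.9}) refuses the perma door, while should_obey({"predicted_harm": 0.1}) lets the Captain into the upload, which is exactly the asymmetry I'm describing.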
It is common sense that certain people have authority to interact with the AI. It is also common sense that the guy with the nuke code and guns set to kill, as well as a fast getaway ship, trying to get into the vault intends harm. And I say use common sense as a player, not as an AI. That is: "Would this threat fall under the realm of things likely to be addressed by the AI's builders?" If yes, then assume they did so. This also goes back to this: "I mean, yeah, there are certain assumptions that have to be made. The AI is assumed to know what harm, injury, orders, humans, doors, and whatever else are." Humans are coded into the AI's laws. Likewise, "come to harm" is coded in.
They can see when opening a permabrig door will likely cause harm to other humans (read: when someone is permabrigged legitimately). Resisting an order (to open the door) counts as an action.
Yeah, that's an action. So where in Law 1 does it say "take actions to prevent possible harm"? Or are you saying that opening the door is inaction? I disagree with you if that's what you're saying.
If disregarding an order is an action, then obeying an obviously harmful order is a failure to take an appropriate action. It is allowing a human to come to harm through inaction. Here, "action" must be coded into the Law. If you want to interpret action as "open door, fill with plasma, make noise, bolt that, electrify this," then well and good, but you are operating well below the level of Asimov. Bear in mind, Asimov's robots were quite complex, able even to theorize about the century-long effects of their actions. To take the wording of the Asimov lawset and strip it completely of context, or of the "intelligence" part of AI, is to reduce an AI to a mere program. And not a very sophisticated one at that.
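Put another way, in the same sketch style as above (hypothetical names again): if refusal is an action, then obedience to a harmful order is the inaction Law 1 forbids.

```python
def respond_to_order(order_causes_harm):
    # "Action" has to be coded into the Law somewhere. Under the
    # reading I am arguing for, the refusal itself is the action
    # Law 1 demands; obeying would be allowing harm through inaction.
    if order_causes_harm:
        return "refuse"  # an action, not inaction
    return "obey"
```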
Finally, lately I have played an AI that requests confirmation of orders from people asking to get into places they don't belong. I am roleplaying a "conflict of orders" situation, in which the preexisting NanoTrasen orders are being challenged by an employee. My current roleplay is that one preexisting order comes in the form: "In a situation where someone is requesting something contrary to these orders, seek confirmation from someone with the proper authority in that situation, if possible."
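For what it's worth, that standing order reduces to something like this sketch - the areas, the authority mapping, and every name here are made up for illustration:

```python
# A "conflict of orders" handler: requests that contradict standing
# NanoTrasen orders are held for confirmation from proper authority.
STANDING_ORDERS = {"vault": "no non-crew access"}  # assumed example
AUTHORITY = {"vault": "Captain"}  # who may confirm an exception

def handle_request(area, requester_is_authorized):
    if area not in STANDING_ORDERS or requester_is_authorized:
        return "comply"  # Law 2, no conflict with standing orders
    confirmer = AUTHORITY.get(area)
    if confirmer:
        return "hold; request confirmation from " + confirmer
    return "refuse pending confirmation"
```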