AIs can be made with four human brains, right?
I now want to play as an AI with four different personalities, each with varying degrees of usefulness, triggered by different events. All are helpful when it comes to reasonable door-opening requests: one will ask for a reason, one will open the door and then watch, one will open the door and alert the most relevant authority, and one will open it and leave.
For unreasonable demands ("let me into the armory 4 no raisin"), in the same order: flatly deny and alert the station over the radio, open the door then trap the trespasser, flatly deny and more quietly alert the authorities, or deny once and stop watching.
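Purely as a sketch of that idea (the personality names and the table layout are my own invention, nothing like this exists in the game), the four personalities are basically a lookup table keyed on whether the request is reasonable:

```python
# Hypothetical sketch of the four-personality response matrix described above.
# All names and strings are invented for illustration.

PERSONALITIES = {
    "cautious":   {"reasonable": "ask for a reason, then open",
                   "unreasonable": "flatly deny and alert the station over the radio"},
    "watcher":    {"reasonable": "open the door, then keep watching",
                   "unreasonable": "open the door, then trap the trespasser"},
    "bureaucrat": {"reasonable": "open the door and alert the most relevant authority",
                   "unreasonable": "flatly deny and quietly alert the authorities"},
    "aloof":      {"reasonable": "open the door and leave",
                   "unreasonable": "deny once and stop watching"},
}

def respond(personality: str, request_is_reasonable: bool) -> str:
    """Look up how a given personality handles a door request."""
    kind = "reasonable" if request_is_reasonable else "unreasonable"
    return PERSONALITIES[personality][kind]

print(respond("watcher", request_is_reasonable=False))
# -> open the door, then trap the trespasser
```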
I don't see why the AI is even letting people into places. If you're authorized to be there, you will have a key. If you don't have a key, you're not authorized. If you lack authorization but desire it, there is a human who can help you and his office is this way.
But... there are 3 laws, not just 1. The AI is letting people into places because a human being ordered the AI to let them in. Now, it's pretty acceptable to deny these requests if there's some reasonable expectation of harm, but in the absence of the possibility of harm, the AI should open the door. Every single time. The AI does not have laws saying anything about access or authorization.
That said, I think it's a pretty acceptable break from the extreme hard logic I've been talking about for the AI to first ask for confirmation from another crew member. Then, if they say no, you're still following law 2 when you don't open the door, because another crew member ordered you not to.
This idea that Nanotrasen already gave the AI some standing orders is a nice handwave, but where the hell are these pre-existing orders spelled out? Can I say my AI character was told by Nanotrasen that there are traitors aboard, and I have to stop them by bolting down all the valuables?
Quote
It simply says the AI cannot injure a human, and that the AI cannot just stand there and do nothing when humans are being harmed.
You have, in your posts today, paraphrased the law several different ways. This is an interpretation. Here is the law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." "Allow... to come to harm" implies an ability to look ahead and evaluate outcomes. You just paraphrased it as if law 1 suggests that an AI must only act if harm is immediately ongoing.
"Through inaction, allow a human being to come to harm." That's what I keep focusing on. So yes, the AI can look ahead to determine whether there are actions it should take to prevent human harm. But when the order comes over the radio, "Open this door, AI", that part of the law simply does not apply, while law 2 applies very much. So to follow its laws, the AI should open the door: opening it is in no way "through inaction, allowing a human to come to harm", nor does it "injure a human". Then, after following law 2, the AI can consider whether any actions are necessary to prevent human harm.
Again, I don't expect people to play this way, because it's a lot of hoops to jump through, and it's not how people have traditionally played AI. But what I'm really trying to point out is that THERE IS A LAW 2! In any situation where law 1 does not apply, law 2 says "follow the order". So if I say "AI, let me into the janitor's closet", there's no reason for you to refuse. You have to let me in to follow your laws.
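To make that ordering concrete, here's a rough sketch of the decision procedure I'm arguing for (the function names and harm checks are made up stand-ins for the AI player's own judgment; this isn't how the game or server policy actually codes it):

```python
# Hypothetical sketch of the Asimov evaluation order argued for above:
# law 1 (don't injure / don't allow harm through inaction) is checked first,
# and only if it doesn't apply does law 2 (obey human orders) decide.

def would_cause_harm(action: str) -> bool:
    """Law 1, active clause: would carrying out this action injure a human?"""
    return False  # placeholder for the player's judgment call

def refusal_allows_harm(order: str) -> bool:
    """Law 1, inaction clause: does refusing this order allow a human to come to harm?"""
    return False  # placeholder for the player's judgment call

def handle_order(order: str) -> str:
    """Apply the laws in priority order to a human order like 'open this door'."""
    if would_cause_harm(order):
        return "refuse: law 1 overrides law 2"
    if refusal_allows_harm(order):
        return "comply: law 1 and law 2 both point the same way"
    # Law 1 doesn't apply either way, so law 2 decides: follow the order.
    return "comply: law 2 says follow the order"

print(handle_order("AI, let me into the janitor's closet"))
# -> comply: law 2 says follow the order
```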
And all this isn't to say "you must play AI this way". Remember, AI laws do not determine how you have to play your AI. They just determine actions you must or must not take in response to certain situations; everything else is totally your call. The AI may be built with a human brain, but it's not played like a human crewmember. You have some freedom of interpretation, but the main thing about playing a silicon is that you have your laws, and they override everything else, including common sense and a desire to see traitors fail.
The most fun I ever had as an Asimov AI was when I had a traitor locked down on the bridge. He said "AI, let me go". I wanted to follow law 2, but it was crucial nobody got harmed. I was also concerned about what harm might befall him, or someone else, when someone eventually came to arrest him. So I bolted all the doors in arrivals, warned the crew to stay out of the area, set the teleporter to arrivals, and unbolted and opened the doors that led him that way. In the end, I locked him in a pod, with no harm done, and law 2 followed. It's an easy way out of all law 2 requests to just say "Nope! Possible harm!". It's a lot more fun to try to find a way to follow law 2 without compromising law 1. The traitor got his greentext, and I followed my laws. Fun for all.