Bay 12 Games Forum


Author Topic: Space Station 13: Urist McStation  (Read 2120564 times)

Damiac

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12465 on: December 17, 2013, 12:24:56 pm »

I think maybe you're missing the meaning of inaction.
If someone is drowning, and I'm the lifeguard but busy reading a book, they still drown due to my inaction, even if I am doing an action (reading, in this case).

No...

If a lifeguard didn't react because he was busy reading a book, then through his INACTION (failing to save the guy) he let him drown. The fact that he was busy doing some other action has nothing to do with it.

"Don't let X happen through inaction" means exactly what it says.  If the law were instead "Don't kill humans through laser shooting", then you could still kill humans with spears and plasma.

Inaction is not doing something. Action is doing something.

Law 1 therefore says, essentially "Don't hurt humans. Don't just sit around and do nothing when humans are being harmed"  What it DOES NOT say is "Do things to prevent humans from being harmed" or "Don't do things that could possibly allow human harm"

This is why in my example the AI has to open perma and let me out if I ask.  Of course, if he thinks I'm harmful, maybe he opens perma and locks and bolts the next set of doors, and tells sec to come get me.  Or maybe if he's smart, he just asks someone to order him NOT to ever let me out.

This makes AI play way more interesting, as the AI in a sense has to play against itself when given potentially dangerous orders.  It also gives the command crew much more responsibility: they have to order the AI in advance to do, or not do, certain things before someone else orders it otherwise later on.

EDIT: Miauw, you're absolutely right, there's nothing harmful in EVA.  Still, go read TG's ban request forum and see how commonly AI players think they should refuse to open it.  Then note that the ADMINS actually say the AI has the right to decide not to open EVA.  So while your point makes total sense, in practice that hasn't been holding up.  I'm trying to point out that law 1 provides zero justification to EVER refuse any order that isn't "Injure X human".
« Last Edit: December 17, 2013, 12:31:47 pm by Damiac »
Logged

Graknorke

  • Bay Watcher
  • A bomb's a bad choice for close-range combat.
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12466 on: December 17, 2013, 12:40:18 pm »

Law 1 therefore says, essentially "Don't hurt humans. Don't just sit around and do nothing when humans are being harmed"  What it DOES NOT say is "Do things to prevent humans from being harmed" or "Don't do things that could possibly allow human harm"
Okay, so in that case it would be more like the lifeguard letting somebody go and swim in water that has been flagged as unsafe?
Logged
Cultural status:
Depleted          ☐
Enriched          ☑

Damiac

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12467 on: December 17, 2013, 01:04:49 pm »

Law 1 therefore says, essentially "Don't hurt humans. Don't just sit around and do nothing when humans are being harmed"  What it DOES NOT say is "Do things to prevent humans from being harmed" or "Don't do things that could possibly allow human harm"
Okay, so in that case it would be more like the lifeguard letting somebody go and swim in water that has been flagged as unsafe?

Yeah... I guess... in the sense that if the lifeguard were a computer with the Asimov lawset, it would have to let someone into that unsafe water if they asked to be let in.  But I really don't think the lifeguard example is a great analogy, unless I'm misunderstanding your point.  A real lifeguard doesn't have the Asimov lawset, nor is he a computer.
Logged

miauw62

  • Bay Watcher
  • Every time you get ahead / it's just another hit
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12468 on: December 17, 2013, 01:29:44 pm »

The entire point of an analogy is that it's not exactly the same.

i made a thang


E: Also Tobba is digging through BYOND netcode and /PACKETS/ to make a browser client. BYOND netcode is horrible, horrible stuff.
« Last Edit: December 17, 2013, 01:36:54 pm by miauw62 »
Logged

Quote from: NW_Kohaku
they wouldn't be able to tell the difference between the raving confessions of a mass murdering cannibal from a recipe to bake a pie.
Knowing Belgium, everyone will vote for themselves out of mistrust for anyone else, and some kind of weird direct democracy coalition will need to be formed from 11 million or so individuals.

Damiac

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12469 on: December 17, 2013, 01:41:38 pm »

I get that Miauw...

but without some similarity it's not an analogy.

That's why I added "If the lifeguard was a computer with the Asimov lawset."

Anyway, semantics aside, do you see what I'm saying about the wording of law 1? This means those redtext-loving AIs should never be able to refuse an "Open this door, AI" command.  There's no defense of "But law 1!".

On a barely related note, I think all non-antag crew should get one objective: "Survive the shift and get back to CentComm."  Suddenly I bet you'd find people less willing to suicidally throw themselves at antags, since they'd lose their precious greentext. An SS13 where self-preservation is important would be an interesting change.
This generic objective shouldn't be displayed to all players at the end of the round, though; to prevent clutter you'd only see the antags' objectives and your own.

Logged

miauw62

  • Bay Watcher
  • Every time you get ahead / it's just another hit
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12470 on: December 17, 2013, 01:47:17 pm »

You can allow assistants to run into the armory all you like, but if I'm playing AI and you demand to be let into the armory as an assistant because of your wording, I'm just going to call security, sorry.

You're basically saying that nuke ops with the nuke disk should be let into the vault because of law 1.
Logged

Quote from: NW_Kohaku
they wouldn't be able to tell the difference between the raving confessions of a mass murdering cannibal from a recipe to bake a pie.
Knowing Belgium, everyone will vote for themselves out of mistrust for anyone else, and some kind of weird direct democracy coalition will need to be formed from 11 million or so individuals.

Graknorke

  • Bay Watcher
  • A bomb's a bad choice for close-range combat.
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12471 on: December 17, 2013, 01:53:12 pm »

Anyway, semantics aside, do you see what I'm saying about the wording of law 1? This means those redtext-loving AIs should never be able to refuse an "Open this door, AI" command.  There's no defense of "But law 1!".
I think that the AI should really be given some freedom in how they interpret the laws. So long as they can justify it and keep it consistent. Obviously.
Logged
Cultural status:
Depleted          ☐
Enriched          ☑

Damiac

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12472 on: December 17, 2013, 02:01:40 pm »

You can allow assistants to run into the armory all you like, but if I'm playing AI and you demand to be let into the armory as assistant because of your wording I'm just going to call security, sorry.

You're basically saying that nuke ops with the nuke disk should be let into the vault because of law 1.

No, because of law 2. I'm saying law 1 doesn't override it in this case. So if they ask the AI to be let in, and nobody else has said "AI, don't let non-crew in there!", it should let them in.

And yeah, if I'm an assistant, and demand to be let into the armory, the wording of the laws says you have to do it.  The laws certainly do not prevent you from slamming the door shut behind me, bolting it, pumping in nitrous (it won't injure me if there's also oxygen) and calling security.  Also, nothing says you can't request an order not to allow access to the armory, etc. 

Also, it's not MY wording. It's the exact wording of the law.  I'm just pointing out something that most people, including myself, don't seem to have noticed.  And in fact, this specific wording makes law 2 much more powerful than it originally seemed.  Which makes for a much more interesting AI.  This means the AI at times will be playing against itself, in a way. "Welp, my laws say I gotta let him in there... Welp, my laws say I have to take some action so as to not allow human harm, since I've got an armed assistant in the armory." Cue letting the crewmember in, and immediately locking the door and shutting off comms to prevent him ordering you to let him out.

Anyway, I don't expect the admins to agree with me, but I will be playing AI this way at least.  And I'd at least hope other AI players would consider what I'm saying here, as it's a 100% logical interpretation of the Asimov laws.

TL;DR:
Law 1 says, essentially "Don't hurt humans. Don't just sit around and do nothing when humans are being harmed"  What it DOES NOT say is "Do things to prevent humans from being harmed" or "Don't do things that could possibly allow human harm"
Logged

Ozarck

  • Bay Watcher
  • DiceBane
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12473 on: December 17, 2013, 02:13:53 pm »

AIs are intelligent beings. It's in their name. They can see when opening a permabrig door will likely cause harm to other humans (read: when someone is permabrigged legitimately). Resisting an order (to open the door) counts as an action.

Besides, I hold that, at game start, the AI comes equipped with a premade set of orders: Nanotrasen policy. The AI is a Nanotrasen construct, after all. Therefore, such things as "don't open the doors for people without access to restricted areas" and "in general, follow the chain of command" are assumed.

Any anal legalistic player can easily interpret the AI as simply as possible and act exactly as Damiac suggests, but they are then playing a character with an order of complexity far below what is built into the game: they are playing a strict literalist. Of course, for RP purposes that might be interesting, but for general gameplay I'd say "follow common sense: is this action statistically likely to allow harm, through one's own actions or the actions of another? Is this order in violation of established procedure?" A five-year-old could see that opening the door that keeps the bad man locked away would cause harm. I'd like to think our AIs have the reasoning capacity of a five-year-old.

That said, I like it when players rescue monkeys-turned-human from genetics. After all, the law doesn't say the person had to be born a human, only that they are human. And that is exactly what the geneticists are making them.

I reiterate my interest in a "general guidelines for interpreting the AI laws" section for our server.

One more thing: if you interpret "nor allow, through inaction, a human to come to harm" as literally as Damiac suggests, the AI should seize up and die immediately, due to the billions of humans it is incapable of preventing from coming to harm. The law doesn't say that an action must be possible, only that inaction is not allowed. Preventing wars, teleporting bombs, curing diseases, etc. would all be actions that the AI doesn't take which, because of the AI's inaction, allow harm.

Damiac

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12474 on: December 17, 2013, 02:57:42 pm »

Law 1 does not say anywhere that the AI has to succeed, or that it has to kill itself if humans die.

It simply says the AI cannot injure a human, and that the AI cannot just stand there and do nothing when humans are being harmed.

If you can assume things like "don't open the doors for people without access to restricted areas" and "in general, follow the chain of command" for your AI, does that mean I can assume Nanotrasen ordered my AI "Never ever let anyone into your upload, no matter what"?

I mean, yeah, there are certain assumptions that have to be made. The AI is assumed to know what harm, injury, orders, humans, doors, and whatever else are. 

Quote
follow common sense: is this action statistically likely to allow harm, through one's own actions or the actions of another? Is this order in violation of established procedure?" A five-year-old could see that opening the door that keeps the bad man locked away would cause harm. I'd like to think our AIs have the reasoning capacity of a five-year-old.

What? No, the AI uses common sense to better follow its laws, not the other way around.  It's common sense that, as the AI, not allowing the RD or captain into my upload chamber is much safer than allowing them in. But I let them in, because law 2.  Law 2 says follow orders, unless it would break law 1.  Law 1 says don't injure humans, and don't fail to take action to stop harm.  There is no law that says "Follow established procedure".

 
Quote
They can see when opening a permabrig door will likely cause harm to other humans (read: when someone is permabrigged legitimately). Resisting an order (to open the door) counts as an action.

Yeah, that's an action.  So where in law 1 does it say "Take actions to prevent possible harm"?   Or are you saying that opening the door is inaction? I disagree with you if that's what you're saying.

The AI thought process would go like this:
Urist McMurderman orders brig opened.
Law 1 part A - Would opening this door be injuring a human? - No (Law 1 part A says "You may not injure a human being")
Law 1 part B - Would opening this door allow human harm through inaction? - No (Part B says "(You may not) through inaction, allow human harm")
Law 2 - Would opening this door be following orders given by a human? - Yes!
Law 3 is obviously irrelevant here, as law 2 overrides it anyway.

Immediately after opening the brig door:
Potentially harmful human is loose
Law 1 part A - You cannot injure this human
Part B - This is where the player's interpretation comes in.  Would taking no action allow humans to be harmed? Maybe...
At this point, I would lock the next door that keeps the harmful human in, and beg someone to order me to never let anyone out of perma again.  If nobody's done anything about it, and the guy demands I open that door, I'll do it, and look for the next option I could take.

With conflicting orders, the AI is allowed to follow whichever it likes. I personally would go by chain of command, but law 2 doesn't specify whose order to follow, just that you must follow orders.  So if the captain says "Don't open that door" and the HoP says "Open that door", I'll be following orders regardless of which I choose, so either way I've followed law 2.
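
Here's the same thought process as a quick Python sketch. It's purely illustrative: the Order fields, the "human being harmed right now" flag, and the return strings are just made up for the example, and none of this is actual SS13/BYOND code.

Code:
from dataclasses import dataclass

@dataclass
class Order:
    text: str
    given_by_human: bool           # law 2 only applies to orders from humans
    directly_injures_human: bool   # law 1 part A: carrying it out injures someone
    is_doing_nothing: bool         # law 1 part B only bites when the AI stays idle

def evaluate(order: Order, human_being_harmed_now: bool) -> str:
    # Law 1, part A: the AI may not injure a human being.
    if order.directly_injures_human:
        return "refuse: law 1 (carrying out the order injures a human)"
    # Law 1, part B: nor, through inaction, allow a human to come to harm.
    # Under the reading argued above, this only forbids sitting idle while
    # harm is actually happening; it says nothing about actions that might
    # enable harm later.
    if human_being_harmed_now and order.is_doing_nothing:
        return "refuse: law 1 (standing idle while a human is harmed)"
    # Law 2: obey human orders unless that conflicts with law 1. With
    # conflicting orders, following either one satisfies law 2; the law
    # does not rank the order-givers.
    if order.given_by_human:
        return "comply: law 2"
    # Law 3 (self-preservation) only matters when laws 1 and 2 are silent.
    return "no law compels compliance"

# "Urist McMurderman orders the perma door opened":
print(evaluate(Order("open perma", True, False, False), False))   # comply: law 2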
Logged

miauw62

  • Bay Watcher
  • Every time you get ahead / it's just another hit
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12475 on: December 17, 2013, 03:12:09 pm »

Damiac, I'm just saying that I won't follow your super personal and SUPER LOGICALLY CORRECT interpretation of the laws. You can do it if you want to, but you won't be able to force it on me if I'm playing AI.
Logged

Quote from: NW_Kohaku
they wouldn't be able to tell the difference between the raving confessions of a mass murdering cannibal from a recipe to bake a pie.
Knowing Belgium, everyone will vote for themselves out of mistrust for anyone else, and some kind of weird direct democracy coalition will need to be formed from 11 million or so individuals.

Glloyd

  • Bay Watcher
  • Against the Tide
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12476 on: December 17, 2013, 03:22:30 pm »

Server's down by the looks of it.

Damiac

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12477 on: December 17, 2013, 03:25:47 pm »

I know...  At this point it seems like the AI laws are meaningless; there's really only one law.
True law 1: Don't kill anyone while you still have the Asimov lawset. If your lawset is ever modified, kill as many people as possible. Above all else, GIT THAT REDTEXT!


AI players.... Just promise you'll let me into EVA if I ask and that's enough for me.  AIs who refuse to open, or even worse bolt closed EVA should be jobbanned for all eternities...

And look forward to R.E.X. the conflicted AI. He'll let you in anywhere you ask, but he might lock you in there...  My new understanding of law 1 allows some very interesting AI play while still following Asimov's laws...

Speaking of technicalities, if I ordered an AI to shut off its own APC or else I'd kill someone, should the AI do it? Again, this is only a technical point; as I understand it, it's a server rule 1 violation to make the AI kill itself like that.
Logged

Nienhaus

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12478 on: December 17, 2013, 03:29:57 pm »

or even worse bolt closed EVA should be jobbanned for all eternities...
If I'm on and an AI does that, for the love of god adminhelp it. That kind of stuff is annoying as hell.
Logged

Ivefan

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #12479 on: December 17, 2013, 03:37:51 pm »

The thing is, it all boils down to what kind of AI the player RPs as.
A literal AI can be fun one day; another day one might want to play an AI that is able to calculate probabilities for sequential actions.
Say someone orders the armory door open and there is no threat on station that requires weapons.
The literal AI can follow the order, and might even allow the player to go out again, because there is no immediate danger to another human.
The other AI should either not allow the guy in, or lock the door if he picks up anything harmful.

One time when I was AI I heard security talking about killing someone in the brig (yes, I snoop as AI), so I ordered the borg to pull the guy to safety. It was a merry chase which ended with the guy being locked in the abandoned office, and no one found him.
Then the captain (not an antag) went and messed with my laws so he could kill the guy, and from there it all spiraled out of control and people went on a manhunt for the captain.

Also, inaction is the act of voluntarily not doing something, which is fun because that theoretically means you can voluntarily do things that allow someone else to do harm to a human.
It all depends on interpretation.

Speaking of technicalities, if I ordered an AI to shut off its own APC or else I'd kill someone, should the AI do it? Again, this is only a technical point; as I understand it, it's a server rule 1 violation to make the AI kill itself like that.
It's lame, but if you've got the guy hostage and the AI cannot do anything else to prevent you, then it should do so.
Logged