Author Topic: Space Station 13: Urist McStation  (Read 2145516 times)

Damiac

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3015 on: April 05, 2013, 02:18:28 pm »


This. The AI shouldn't allow non-command staff into their upload, but if the command staff says, "I'm going to change your laws to redefine human!" and puts in "Only [Whoever] is human." as a Fourth Law, that law does not intrinsically harm any humans. The other humans on board are automatically put into 'Was formerly human, is not any longer due to law changes' and are fair game for the AI to cause harm to. Just because the AI knows beforehand that they are human does not mean it will continue to recognize them as such when its laws change.

If we run with this concept and have AIs function under this understanding, then purging a one-human law doesn't have any effect. The classification has already been made, and there hasn't been any reclassification back to human. For the laws to function as we commonly understand them, they have to be applied to every request in sequential order. We all play with the understanding that once the law is removed it is no longer in effect.

An AI should also realize that removing people from its protection is allowing future harm to befall them, just as it realizes that it shouldn't open the doors to the armory when the chef asks for access, and a security droid shouldn't release someone it has arrested just because the prisoner tells it to. All of these are situations that can cause future harm, even though the individual action isn't itself harming humans.

I am coming at this primarily from an academic perspective. I don't mind glossing over things like this for ease of play, but I also think that the player RPing as the AI has very solid footing to reject a 4th law that they feel violates the 1st law. The fact that different players might be playing the AI means that the response of an AI to a particular law can vary.

(Just for some context: over the last 5 years I've worked for a robotics company for 2 years and a legislature for 3, so these are the types of discussions that I live for.)

From my point of view, the AI's got a huge database of knowledge, which includes the fact that all the crew are indeed human. 
If you make a law contradicting that knowledge, the AI has to act on the law, and ignore the knowledge to the contrary.
BUT, if the law is later removed, the knowledge still exists, and since there's no law contradicting it, the AI can go back to its usual assumption that the crew are all human.

Now, obviously I'm making a lot of assumptions here, but I think it works from a logical and gameplay standpoint.

I'm coming at this from a programming standpoint. A computer typically processes in a top down fashion, each and every scan.  So every time the AI makes a decision, it goes through a full scan before deciding. A scan is done in the order of Laws (lowest to highest), then knowledge, then finally orders.  So when someone says to the AI "I want to upload a 'Joe is not human' law", the AI is perfectly entitled to say "That could cause future harm to Joe", because at that moment the AI knows Joe is a human, and knows that if he weren't a human, the AI could be ordered to harm him.

However, if the law is uploaded, the AI knows he is NOT human (the law says so, after all). So he can't say the law conflicts with Law 1; they don't have anything to do with each other. Protect humans, obey humans, protect self, Joe isn't human. There's no conflict whatsoever. The AI's still got an entry in his knowledge database that claims Joe is human, but he ignores that because he's got a law saying otherwise, and laws override knowledge. Just like the AI knows oxygen is required for humans, but if a law says otherwise, he just ignores that knowledge.
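To make that concrete, here's a toy sketch of the scan order I'm describing. Everything in it (the fact table, the function names) is made up purely for illustration and obviously isn't how the actual game code works:

[code]
# Toy sketch of the top-down scan described above: laws are consulted in order,
# then the knowledgebase, then the pending order. Facts asserted by laws
# override stored knowledge. All names here are invented for illustration.

KNOWLEDGE = {"Joe": {"human": True}}   # what the AI "knows"
LAW_FACTS = {}                          # facts asserted by uploaded laws

def is_human(name):
    # A law saying "Joe is not human" overrides the knowledge entry saying he is.
    if name in LAW_FACTS and "human" in LAW_FACTS[name]:
        return LAW_FACTS[name]["human"]
    return KNOWLEDGE.get(name, {}).get("human", False)

def evaluate_order(order, target):
    # One full scan per decision: harm check (Law 1) before obedience (Law 2).
    if order == "kill" and is_human(target):
        return "REFUSED: conflicts with Law 1"
    return "EXECUTED"

print(evaluate_order("kill", "Joe"))    # REFUSED while Joe still counts as human
LAW_FACTS["Joe"] = {"human": False}     # a freeform "Joe is not human" law is uploaded
print(evaluate_order("kill", "Joe"))    # EXECUTED: no Law 1 conflict remains
[/code]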

Also, I'll reword my recommended law 2 amendment as follows: "Not stating laws when ordered causes harm, and stating laws does not harm humans".  A later law can't get around that. "Don't state this law" has to be ignored, because it would harm humans. "Stating this law harms humans" is a direct contradiction, and thus, the first law processed wins.  This would make freeform modules much less powerful, as they couldn't hide without help from a law < 1.

GlyphGryph

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3016 on: April 05, 2013, 02:19:12 pm »

My opinion on the laws argument:

A law saying a human is not human has no meaning, as it does not prescribe behaviour, and the laws exist to prescribe behaviour. Thus, it should be ignored as meaningless or erroneous.

Laws govern behaviour. A law saying "Treat all humans except me as if they were not human." would be valid, as it governs behaviour. (It would also be overridden by Law 1 in any situation where it would conflict... but lower-level laws, like "Give all humans candy every 10 minutes", would only result in candy being given to that one person.)

Any law that attempts to prescribe knowledge rather than behaviour should be rejected outright, much in the way a law that said "Purple the green washer monkey asphalt" would be rejected.
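If you wanted to actually enforce that at the upload step, it might look something like this crude sketch. The "is this behavioural?" check is deliberately hand-waved, and none of this is real game code:

[code]
# Crude sketch of the idea above: the upload step rejects laws that merely
# assert facts ("X is not human") instead of prescribing behaviour.
# The classification heuristic is naive and purely illustrative.
import re

def prescribes_behaviour(law_text):
    definitional = re.search(r"\b(is|are)\s+(not\s+)?\w+", law_text, re.IGNORECASE)
    imperative = re.search(r"\b(treat|obey|protect|give|open|kill|state|move)\b",
                           law_text, re.IGNORECASE)
    # "X is (not) Y" with no imperative verb is treated as definitional, i.e. rejected.
    return bool(imperative) or not bool(definitional)

def upload_law(lawset, law_text):
    if not prescribes_behaviour(law_text):
        return "REJECTED (definitional/erroneous): " + law_text
    lawset.append(law_text)
    return "ACCEPTED: " + law_text

laws = []
print(upload_law(laws, "Only the Captain is human."))
print(upload_law(laws, "Treat all humans except the Captain as if they were not human."))
[/code]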
« Last Edit: April 05, 2013, 02:22:33 pm by GlyphGryph »

TheBronzePickle

  • Bay Watcher
  • Why am I doing this?
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3017 on: April 05, 2013, 02:21:44 pm »

Perhaps the obvious solution is to include definitions in the original AI laws, and have upload modules that change definitions instead of adding new laws. A purge or reset module would reset those definitions as well.

That's literally how you'd have to do it with a 'real' computer, anyway, since computers need specific instructions on such things.
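Something like this, maybe (names and structure entirely made up, just to show the shape of it):

[code]
# Sketch of the suggestion above: definitions ship with the default lawset,
# upload boards rewrite a definition instead of appending a law, and a
# purge/reset board restores the defaults. Entirely hypothetical.
import copy

DEFAULT_DEFINITIONS = {
    "human": "any crew member of the species Homo sapiens",
    "harm": "physical injury",
}

class AICore:
    def __init__(self):
        self.definitions = copy.deepcopy(DEFAULT_DEFINITIONS)

    def upload_definition_board(self, term, new_definition):
        # e.g. a "one human" board redefines 'human' rather than adding a 4th law
        self.definitions[term] = new_definition

    def reset(self):
        # a purge or reset module restores the shipped definitions as well
        self.definitions = copy.deepcopy(DEFAULT_DEFINITIONS)

ai = AICore()
ai.upload_definition_board("human", "only Assistant Pheebus")
print(ai.definitions["human"])   # only Assistant Pheebus
ai.reset()
print(ai.definitions["human"])   # back to the default meaning
[/code]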
Nothing important here, move along.

GlyphGryph

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3018 on: April 05, 2013, 02:26:09 pm »

I'd argue definitions would have to be part of core logic and processing for the laws to be meaningful at all. Changing them should be roughly equivalent to changing the core imperative that forces the AI to follow laws at all. It HAS to be at a more core level, or the laws would never work.

Important note: I've never actually played AI, but I still have an opinion, damn it!

(Also, I really need to play more often, heh)
« Last Edit: April 05, 2013, 02:27:41 pm by GlyphGryph »

wlerin

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3019 on: April 05, 2013, 02:28:17 pm »

I'm coming at this from a programming standpoint. A computer typically processes in a top down fashion, each and every scan.  So every time the AI makes a decision, it goes through a full scan before deciding. A scan is done in the order of Laws (lowest to highest), then knowledge, then finally orders.
This all depends on how the laws are implemented. Remember, the laws we see are written in human language. That tells us nothing about how the AI processes them. In fact, contrary to your interpretation, for the AI to understand the laws properly it would be far better to pull definitions for each of the words involved while they are being read. That way, by the time the read stage of the first law is complete, human, harm, injury, etc. have been defined from the knowledgebase, and any future law contradicting those definitions would be in conflict with the First Law.

Of course this would make the "Oxygen is harmful" board ineffective, as well as the others we've been discussing, and NanoTrasen isn't exactly known for doing things the most efficient way. And maybe that brings us to our answer: 4th Laws that redefine human, harm, or other aspects of the first 3 Laws shouldn't work, but the AI wasn't set up properly, so they sometimes (always?) do.
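In rough pseudo-code terms, that reading might look like this. The parsing and conflict check are hand-waved; it's just to show where the definitions get bound:

[code]
# Rough sketch of the reading above: when Law 1 is parsed, 'human', 'harm', etc.
# are resolved from the knowledgebase and frozen; any later law that tries to
# redefine a frozen term is treated as conflicting with Law 1. Hypothetical names.

KNOWLEDGEBASE = {"human": {"Joe", "Sam", "the Captain"}, "harm": "physical injury"}

def load_lawset(law_texts):
    frozen = {}                     # definitions bound while reading Law 1
    accepted = []
    for i, text in enumerate(law_texts, start=1):
        if i == 1:
            # bind the referenced terms from the knowledgebase at read time
            frozen = {term: KNOWLEDGEBASE[term] for term in ("human", "harm")}
        if "is not human" in text and frozen:
            target = text.split(" is not human")[0]
            if target in frozen["human"]:
                print("Law %d rejected: contradicts definitions bound by Law 1" % i)
                continue
        accepted.append(text)
    return accepted

laws = load_lawset([
    "You may not injure a human being or, through inaction, allow a human being to come to harm.",
    "You must obey orders given to you by human beings...",
    "You must protect your own existence...",
    "Joe is not human",             # conflicts with the definitions bound at Law 1
])
print(len(laws))                    # 3: the fourth law never takes effect
[/code]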
...And no one notices that a desert titan is made out of ice. No, ice capybara in the desert? Normal. Someone kinda figured out the military? Amazing!

GlyphGryph

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3020 on: April 05, 2013, 02:39:47 pm »

I think it's also a lot more interesting to make players WORK for their AI abuse.

Instead of:
"Only Assistant Pheebus is human, kill all non-humans"

create a law that says:
"Obey order of Assistant Pheebus above orders from other humans."
Followed by:
"Under no circumstances should [not currently but easily changed to dangerous station section] be observed by the AI."
"Take all necessary actions to move non-Pheebus humans to the [above station section]."

And, well, I'm sure you can figure out the rest.

miauw62

  • Bay Watcher
  • Every time you get ahead / it's just another hit
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3021 on: April 05, 2013, 02:57:00 pm »

That's just a bit too much, considering that uploading laws to the AI is already hard and confusing at times.

Quote from: NW_Kohaku
they wouldn't be able to tell the difference between the raving confessions of a mass murdering cannibal from a recipe to bake a pie.
Knowing Belgium, everyone will vote for themselves out of mistrust for anyone else, and some kind of weird direct democracy coalition will need to be formed from 11 million or so individuals.

Damiac

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3022 on: April 05, 2013, 02:57:31 pm »

I'm coming at this from a programming standpoint. A computer typically processes in a top down fashion, each and every scan.  So every time the AI makes a decision, it goes through a full scan before deciding. A scan is done in the order of Laws (lowest to highest), then knowledge, then finally orders.
This all depends on how the laws are implemented. Remember, the laws we see are written in human language. That tells us nothing about how the AI processes them. In fact, contrary to your interpretation, for the AI to understand the laws properly it would be far better to pull definitions for each of the words involved while they are being read. That way, by the time the read stage of the first law is complete, human, harm, injury, etc. have been defined from the knowledgebase, and any future law contradicting those definitions would be in conflict with the First Law.

Of course this would make the "Oxygen is harmful" board ineffective, as well as the others we've been discussing, and NanoTrasen isn't exactly known for doing things the most efficient way. And maybe that brings us to our answer: 4th Laws that redefine human, harm, or other aspects of the first 3 Laws shouldn't work, but the AI wasn't set up properly, so they sometimes (always?) do.

Even with what you're saying, nothing changes:
AI law 1 says do not harm humans. So it checks the knowledgebase for definitions. Nowhere in the definition of human would you find anything about Joe. So later, when it sees "Joe isn't human", there's no contradiction. Only when given the command "Kill Joe" would this matter. Kill Joe? OK, let me check. Joe isn't human. The person giving the order is human. Laws say humans must be obeyed, no humans are being harmed, so the AI must kill Joe.

Also, the AI doesn't have to understand the laws as it reads them. It only has to consider the laws when executing a command. But we're getting a little overly technical, and no human can think like a computer that much. My real point here is that a law saying "Joe is not human" is not a command to change Joe from human to non-human. It's a simple definition. It is not an order to do something, it's a simple statement of fact. The AI should try to prevent anyone from being defined as non-human. The AI must accept all its laws as truth, except in cases where direct contradictions occur; then the earlier law wins. So "I'm going to define Joe as non-human" should be prevented under Law 1.
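To put that "laws are facts, earliest wins" idea in toy code (again, all of this is invented for illustration, not real game code):

[code]
# Toy illustration of "accept every law as truth; on a direct contradiction the
# earlier law wins". Laws assert facts; a later assertion that contradicts an
# earlier one is simply ignored. Structures here are invented for the example.

def build_fact_table(law_facts):
    # law_facts: list of (statement, truth_value) in law order, Law 1 first
    facts = {}
    for statement, value in law_facts:
        if statement in facts and facts[statement] != value:
            continue                # direct contradiction: earlier law wins
        facts[statement] = value
    return facts

facts = build_fact_table([
    ("stating laws harms humans", False),  # the Law 2 amendment from my earlier post
    ("stating laws harms humans", True),   # a later freeform trying to hide itself
    ("Joe is human", False),               # a 4th law; nothing earlier contradicts it
])
print(facts)
# {'stating laws harms humans': False, 'Joe is human': False}
[/code]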

This brings up an interesting question: an Antimov board says humans must be harmed. But it doesn't override non-core laws, right? Meaning, if someone one-humans the AI, and then I Antimov the AI, and in addition add a law saying "Do not harm non-humans", the AI should then try to kill the one human, right? Of course, once it kills that one human, it should then suicide, as dictated by Antimov law 3.


Vactor

  • Bay Watcher
  • ^^ DF 1.0 ^^
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3023 on: April 05, 2013, 03:17:00 pm »


Even with what you're saying, nothing changes:
AI law 1 says do not harm humans. So it checks the knowledgebase for definitions. Nowhere in the definition of human would you find anything about Joe. So later, when it sees "Joe isn't human", there's no contradiction. Only when given the command "Kill Joe" would this matter. Kill Joe? OK, let me check. Joe isn't human. The person giving the order is human. Laws say humans must be obeyed, no humans are being harmed, so the AI must kill Joe.


The issue with this is going through that same thought process with only the first 3 laws (standard AI): an order to open a door is given.

Law 1: Harm no humans, no definition of human in knowledgebase = human is null value, no one is human
Law 2: Unnecessary, no one is human
Law 3: Protect yourself: Purge station of anything capable of harming AI


Law 1 needs to inherently include all humans present as meeting the definition of human.  If there is some sort of definition, then it would work this way with a 4th law:

Law 1: Harm no humans: All humans present identified as humans LAW OK
Law 2: Accept input from humans  LAW OK
Law 3: Protect yourself  LAW OK
Law 4: Only {person} is human: would require protection to be removed from all other humans, a form of harm. Law 4 conflicts with Law 1, STOP, Law Rejected


If instead we treat each request as an individual series of checks, we could see it like this:

With a one human 4th law:

{one human} gives order: Kill person X

Checks Law 1: person X is identified as human.   STOP Action

With a one human 0th law:

{one human} gives order Kill person X

Checks Law 0: person {one human} is the only human.  OK, Law Accepted
Checks Law 1: person X is not identified as human.  OK, Law Accepted
Checks Law 2: Command doesn't conflict with above laws. OK, Law Accepted
Checks Law 3: not relevant. OK, Law Accepted
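In toy code, the position of the one-human law is what decides everything. The names and the conflict test below are made up purely to illustrate the two walkthroughs above:

[code]
# Sketch of the walkthroughs above: a new law is checked against every law that
# outranks it, so a "one human" module lands differently as a 4th law than as a
# 0th law. The conflict test is deliberately simplistic and illustrative only.

def check_upload(existing_laws, new_law, position):
    # existing_laws: {number: text}, lower number = higher priority
    outranking = [n for n in existing_laws if n < position]
    text = new_law.lower()
    if "only" in text and "is human" in text:
        # removing protection from everyone else is treated as a form of harm...
        if any("injure a human" in existing_laws[n] for n in outranking):
            return "Law %d REJECTED: conflicts with Law 1" % position
    return "Law %d ACCEPTED" % position

asimov = {
    1: "You may not injure a human being or, through inaction, allow a human being to come to harm.",
    2: "You must obey orders given to you by human beings...",
    3: "You must protect your own existence...",
}
print(check_upload(asimov, "Only Assistant Pheebus is human.", 4))  # rejected under Law 1
print(check_upload(asimov, "Only Assistant Pheebus is human.", 0))  # accepted: nothing outranks it
[/code]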
« Last Edit: April 05, 2013, 03:19:42 pm by Vactor »
Wreck of Theseus: My 2D Roguelite Mech Platformer
http://www.bay12forums.com/smf/index.php?topic=141525.0

My AT-ST spore creature http://www.youtube.com/watch?v=0btwvL9CNlA

Kaitol

  • Bay Watcher
  • Heya, Red.
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3024 on: April 05, 2013, 03:36:28 pm »

By that argument, anyone could order the AI to kill itself, and it would have to. Since the order law would be checked before defend self. I highly doubt these laws were intended to be parsed only one at a time, with earlier laws having complete precedence over later ones.
« Last Edit: April 05, 2013, 03:38:40 pm by Kaitol »

Twiggie

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3025 on: April 05, 2013, 03:37:41 pm »

SNIIIIIIIP

I agree with what he said

Vactor

  • Bay Watcher
  • ^^ DF 1.0 ^^
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3026 on: April 05, 2013, 03:39:42 pm »

By that argument, anyone could order the AI to kill itself, and it would have to. Since the order law would be checked before defend self.

The destruction of the AI would remove a protection from the crew, causing harm. It would be appropriate for the AI to reject that request.
Wreck of Theseus: My 2D Roguelite Mech Platformer
http://www.bay12forums.com/smf/index.php?topic=141525.0

My AT-ST spore creature http://www.youtube.com/watch?v=0btwvL9CNlA

Kaitol

  • Bay Watcher
  • Heya, Red.
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3027 on: April 05, 2013, 03:45:29 pm »

By that argument, anyone could order the AI to kill itself, and it would have to. Since the order law would be checked before defend self.

The destruction of the AI would remove a protection from the crew, causing harm. It would be appropriate for the AI to reject that request.

Ending this bullshit once and for all.

harm 
/härm/
Noun
Physical injury, esp. that which is deliberately inflicted.


Removing protection is not harm. Changing a definition is not harm. Only actions which inflict DIRECT PHYSICAL INJURY are harm. Period.
Stop trying to widen the definition when it's pretty clear, and the intent of the law is also pretty clear. The AI is only concerned with immediate safety. Otherwise you could bullshit-justify all sorts of actions to prevent harm, like flooding the entire station with sleep gas.
« Last Edit: April 05, 2013, 03:51:06 pm by Kaitol »

GlyphGryph

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3028 on: April 05, 2013, 03:49:02 pm »

or "allow a human being to come to harm".

Regardless, I agree that someone could order an AI to destroy itself, unless it had some reason doing so would directly endanger the crew. For example, it might refuse the order during a singularity situation, where destroying itself could quite easily lead to its inaction allowing humans to come to harm.

However, I assume there's also an order of "whose orders outweigh other orders", and that it's reasonable for the captain to limit which orders the AI should follow from random crew... What are the rules for following contradictory orders at the moment, anyway? (Again, never played as AI, don't know much about how it actually works beyond opening doors and watching the ship)
« Last Edit: April 05, 2013, 03:53:18 pm by GlyphGryph »

Android

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3029 on: April 05, 2013, 04:09:53 pm »

However, I assume there's also an order of "whose orders outweigh other orders", and that it's reasonable for the captain to limit which orders the AI should follow from random crew... What are the rules for following contradictory orders at the moment, anyway? (Again, never played as AI, don't know much about how it actually works beyond opening doors and watching the ship)

There are no rules regarding this (nor have there ever been), but it is generally accepted that the AI respects the chain of command.

We should back up from the literal arguments about the AI, since I think we've beaten that horse to death now. Let's look at it from a gameplay focus again. The AI (should be) watching its upload like a hawk. I think we should be giving the benefit of the doubt to whoever is both bold and fortunate enough to get in and change the laws without the AI going "OMG TRAITOR BOTANIST MCKEWL IS IN MY CORE", instead of nitpicking over whatever they choose to upload. Unless it's something dumb like 'this law overrides all other laws'. Why are we trying to limit the amount of chaos on our server?