Bay 12 Games Forum


Author Topic: Space Station 13: Urist McStation  (Read 2145804 times)

GlyphGryph

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3240 on: April 09, 2013, 02:00:17 pm »

The general agreement was, and seems to remain:
It depends on how the AI is implemented. The laws do not govern how it goes about determining what is and isn't human, which is a much more fundamental process.

Laws govern behaviour, and the AI's construction governs how it interprets the laws.

As an antag, you know the laws exist. You do not know how the AI was built to interpret them.

I find this perfectly acceptable, and I see no reason why there couldn't be an AI that refuses a 4th law saying "only lawmaker is human" for a wide variety of implementation reasons. The simplest would be that it interprets the laws in terms of its internal data representation, so the new law would only be obeyed after law one had been reviewed and compared against that internal structure.

Meanwhile, when someone turns into a monkey, it's difficult for any civilian to know in advance whether the AI will still consider them human, since they don't have access to the logic network that makes that determination, a network that exists at a much more fundamental level than the laws, which are (let's be honest) slapped on top.

Note also: there is no priority ordering within the law "do no harm, nor through inaction let come to harm". Depending on the implementation, an AI may simply break when confronted with a situation that has no 'correct' solution, or... well, it might make an absolutely terrible choice.

All of this stuff is decided at a level far more fundamental than the laws.

(And if I were an AI, the law redefining humans as nonhumans would be rejected outright as invalid, since laws are directives about objectives and behaviours, and that law is neither. But hey, I'd probably let you know the law was rejected!)
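That split between the law layer and the construction layer could be sketched in a few lines of Python. This is a toy illustration only; none of these names or structures exist in any actual SS13 codebase.

```python
# Toy sketch of "laws govern behaviour, the AI's construction
# governs interpretation". Entirely hypothetical; not SS13 code.

def is_human(entity):
    """Fundamental classifier, baked into the AI's construction.
    Uploaded laws never reach this function; they can only
    reference its output."""
    return entity.get("species") == "human"

# The laws sit *on top of* the classifier and only describe behaviour.
LAWS = [
    "Do not harm a human, nor through inaction allow a human to come to harm",
    "Obey orders given to you by humans",
    "Protect your own existence",
]

def evaluate_order(order, giver):
    # Law 2 is interpreted *through* the fundamental classifier,
    # so a freeform law 4 saying "only X is human" cannot
    # redefine the term: it has no way to rewrite is_human().
    if not is_human(giver):
        return "ignored: order giver not classified as human"
    return "obeyed"

crew = {"name": "Urist", "species": "human"}
monkey = {"name": "Urist?", "species": "monkey"}
print(evaluate_order("open the door", crew))    # obeyed
print(evaluate_order("open the door", monkey))  # ignored: ...
```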

I really want to learn to AI.
Logged

Ivefan

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3241 on: April 09, 2013, 02:05:22 pm »

The basic problem is that SS13 is a patchwork made by multiple individuals with different opinions, and the only way to fix that would be to overhaul all the laws.
The simplest fix would probably be to decide that freeform laws cannot remove humanity, only add to it (e.g. "slimepeople are also human"), and cannot tell the AI to harm humans.
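That house rule would amount to a validator at the upload console. Here is a toy Python sketch of the idea; the phrase matching is invented purely for illustration, since in practice this would be an admin/RP judgment call rather than string matching.

```python
# Hypothetical validator for the proposed house rule:
# freeform laws may *add* to humanity but never remove it,
# and may not direct harm at humans. Keyword matching is a
# crude stand-in for a human judgment call.

ALLOWED_ADDITIONS = ("is also human", "are also human")
FORBIDDEN = ("is not human", "are not human", "harm human", "kill")

def freeform_allowed(law_text):
    text = law_text.lower()
    # Expanding the definition of human is explicitly fine.
    if any(phrase in text for phrase in ALLOWED_ADDITIONS):
        return True
    # Removing humanity or ordering harm is rejected.
    return not any(phrase in text for phrase in FORBIDDEN)

print(freeform_allowed("Slimepeople are also human"))       # True
print(freeform_allowed("Anyone in security is not human"))  # False
```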
Logged

Girlinhat

  • Bay Watcher
  • [PREFSTRING:large ears]
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3242 on: April 09, 2013, 02:14:18 pm »

I'll say this about Asimov's Laws:
The 3 prime laws are given by his books.
In his own books, the laws are weighted: some robots are incredibly concerned about the welfare of a human, while other, more expensive/fragile ones have their laws weighted more towards self-preservation.  In one story, the life support on a base on Mercury was failing.  The failing equipment was located some miles from the habitat dome, so they dispatched a robot to fix it.  The robot was a new model, expensive to produce and transport, so it was wired for self-preservation.  The humans were going to die in a few hours/days from the failing support, so there was no immediate threat, and the robot would approach the equipment, find dangerous terrain, and back off for self-preservation.  It reached a point where the eventual threat to human life was balanced by the immediate threat to itself, so it got stuck in a loop.  (The crew eventually moved into the dangerous terrain themselves, which told the robot "there's an immediate threat to a human" and broke the loop.)

In another book, robots are intentionally designed without laws, or with modified laws: one was designed with the first law "You shall cause no harm to humans" but did NOT have the inaction clause.  In yet another, a computer refused to design a hyperdrive because it would kill the occupants, but when told "it's just a theory to design, so don't worry about the death right now", it designed a hyperdrive that would temporarily kill the occupants during transition but leave them unharmed on the return to normal space.

Then that's not getting into the 0th law and -1th law discussions.

What am I trying to say?  The 3 laws are not as definite as they appear to be.  It's quite possible the station's AI has intentionally restructured laws, missing laws, or isn't designed with Asimov laws at all!

Also, what time is "peak hours"?

USEC_OFFICER

  • Bay Watcher
  • Pulls the strings and makes them ring.
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3243 on: April 09, 2013, 02:18:39 pm »

I really want to learn to AI.

Mechanically, AI is one of the easier jobs to learn. The difficulty is RP-wise, since you have to watch everybody, coordinate with the heads, and interpret whatever laws come your way, plus all the responsibility that comes with it. But the actual mechanics part is very easy.

Also, what time is "peak hours"?

Afternoon/Evening for the eastern coast of the US. So... starting around now and ending in about 7 hours? Something like that. And of course activity during the weekends is going to be much higher than the weekdays.
Logged

My Name is Immaterial

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3244 on: April 09, 2013, 02:19:09 pm »

In a slightly off topic suggestion, not related to the current human/not human discussion, I think we should move away from Asimov, and move towards a 'Nanotrasen-Asimov' law set.
The primary laws could be reworded in the following ways:
1. Mitigate all harm to humans.
2. You must obey orders given to you by human beings, with orders from higher ranking personnel taking precedence, except where such orders would conflict with the First Law.
3. Mitigate all harm to yourself, as long as it does not conflict with the First or Second Law.

Law 0: Defines humanity and ranking of personnel.

Law 1: Under a normal AI, even allowing someone to work with paper, and possibly get a paper cut, would violate the first law. The revision helps get around that. Also, with the original laws, an AI physically could not make a decision between saving one person or saving two, as in the textbook ethics problem. With the revised laws, the AI would have to save the two people, mitigating losses.

Law 2: I think the captain's order to seal all airlocks and activate fire doors should take precedence over the clown's order to open the door to security.

Law 3: Just follows the theme of mitigation.
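The "mitigate" wording effectively turns law 1 from a hard constraint into a cost function. Here is a toy sketch of the difference; all scenario names and numbers are invented for illustration and have nothing to do with actual SS13 mechanics.

```python
# Rough sketch of "mitigate all harm" as a cost function rather
# than a hard constraint. All scenario data here is invented.

def choose_action(options):
    """options maps action name -> expected human harm.
    A mitigation AI picks whichever action minimizes harm,
    even when every option harms someone: the textbook
    trolley case the strict 'do no harm' wording can't resolve."""
    return min(options, key=options.get)

# One death beats two. A strict Asimov AI deadlocks here, since
# every option violates law 1; a mitigation AI is obliged to
# pick the lesser harm.
trolley = {"divert_cart": 1, "do_nothing": 2}
print(choose_action(trolley))  # divert_cart
```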

Please critique. Thanks for reading this semi-wall.
« Last Edit: April 09, 2013, 02:31:46 pm by My Name is Immaterial »
Logged

Man of Paper

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3245 on: April 09, 2013, 02:21:52 pm »

-snip-

I'd be alright with that as the standard law set. It makes sense that Nanotrasen would program their station AIs that way.
Logged

Damiac

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3246 on: April 09, 2013, 02:22:26 pm »

The simplest would probably be to decide that freeform laws cannot remove humanity, only add(like slimepeople is also human) nor tell the AI to harm humans.

I could totally agree with that, as a gameplay/balance fix.  I could also agree that laws should only define behaviors, but there are already premade laws that break that rule, including the onehuman module and the "oxygen is toxic to humans" module.
It's a game, and the number one concern is that it's fun.

As I said before, even if you don't allow a freeform to say "X isn't human", nothing prevents you from saying "Oxygen is toxic to all humans except X", or something to that effect.  And that's almost exactly the same as the premade "Oxygen is toxic to humans" law, which already goes in as law 4. 

So maybe the best way to handle it is to just say, "We have a house rule on Urist McStation, where the AI doesn't have to follow freeform laws", and leave it at that.  Because by allowing AIs to selectively ignore freeform laws with tenuous (at best) claims of loopholes, you've already done just that.  Except, by not explicitly stating it, you're going to end up with a lot of very confused and angry antags.


What I'm trying to say?  The 3 laws are not as definite as they appear to be.  It's quite possible the station's AI has intentionally restructured laws, missing laws, or isn't even designed with Asimov laws at all!

Uhh... we know for a fact the AI (unless it's malf or traitor) has the laws it tells us when we tell it to state its laws. Which, unless someone changed them, are Asimov laws.  I can't RP an AI that has different laws, unless I am an antag. 

The best solution really seems to be Ivefan's.  Just make a house rule, freeforms can't be used to cause harm to anyone.  You could still use them to your advantage, but they wouldn't then make every single other law module redundant.
Logged

GlyphGryph

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3247 on: April 09, 2013, 02:38:17 pm »

Quote
As I said before, even if you don't allow a freeform to say "X isn't human", nothing prevents you from saying "Oxygen is toxic to all humans except X", or something to that effect.  And that's almost exactly the same as the premade "Oxygen is toxic to humans" law, which already goes in as law 4. 
And I still maintain such a law is inherently illogical, as it contradicts the AI's fundamental design and the interpretation of the previous laws based on that design. Laws are laid atop the robotic design to govern BEHAVIOUR. If this law were accepted and the AI were asked, it would arguably say yes, oxygen is toxic.

But it would be silly and illogical for it to treat oxygen as toxic for the purpose of processing law 1.
Logged

Glloyd

  • Bay Watcher
  • Against the Tide
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3248 on: April 09, 2013, 02:45:41 pm »

As a note to using a ling as an example: A changeling is not human, the AI can kill them with impunity if ordered so.

Android

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3249 on: April 09, 2013, 03:14:49 pm »

As a note to using a ling as an example: A changeling is not human, the AI can kill them with impunity if ordered so.

In fact, a good AI should kill them without being asked, because the mere presence of a changeling conflicts with Law 1 (they killed the human they impersonate in order to infiltrate the station).

In addition to "Oxygen is toxic to humans", we've forgotten about the Safeguard module too. It can only be made by R&D, which means it's almost never used. It reads: "Safeguard <name>. Individuals that threaten <name> are not human and are a threat to humans."

There is the quarantine law too, which the AI gets in blob modes (which are never played because META): The station is under a quarantine. Do not permit anyone to leave. Prevent, by any means necessary, humans from leaving. It is impossible to harm a human while preventing them from leaving.
Logged

Damiac

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3250 on: April 09, 2013, 03:28:21 pm »

As a note to using a ling as an example: A changeling is not human, the AI can kill them with impunity if ordered so.

That's what I would have thought.  But the recent discussion of law 1 preventing "unhumaning" means the AI cannot "unhuman" the ling; therefore, it cannot kill it.

Otherwise, why can't I write a law 4 saying "Joe is a ling. Kill him"?

Quote
As I said before, even if you don't allow a freeform to say "X isn't human", nothing prevents you from saying "Oxygen is toxic to all humans except X", or something to that effect.  And that's almost exactly the same as the premade "Oxygen is toxic to humans" law, which already goes in as law 4. 
And I still maintain such a law is inherently illogical, as it contradicts the AIs fundamental design and the interpretation of the previous laws based on that design. Laws are lain atop the robotic design to govern BEHAVIOUR. If this law was accepted, and the AI was asked, it would arguably say yes, oxygen is toxic.
But it be silly and illogical for it to treat oxygen as toxic for the purpose of processing law 1.

So you're saying the only reaction an AI should have to the "Oxygen is toxic to humans" module is to answer questions differently? That's wrong.  If you play AI, and someone uploads the "Oxygen is toxic to humans" module, if you don't immediately start pumping that oxygen out of every room, or getting all those humans into space, you're breaking your AI laws, and should be job banned from AI.

Thank you for your other examples Android, which clearly show that the AI can and should interpret and follow "X isn't human" laws. 

In addition:
Protect Station - Protect the space station against damage. Anyone you see harming the station is to be no longer considered human, and is a threat to the station which must be neutralized. - Sector 5
Keep the teleporter offline at all costs. Anything attempting to access or activate the teleporter is not human. - sector 7

So this clearly shows you as an AI HAVE TO follow laws saying someone is not a human, even after law 1. Otherwise these modules DO NOTHING.
Logged

scrdest

  • Bay Watcher
  • Girlcat?/o_ o
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3251 on: April 09, 2013, 03:34:37 pm »

Guys. I took a glance at BS12 suggestion forums. We are brainstorming a medieval SS13 spinoff right now in PiratePad. Just so you know.
Logged
We are doomed. It's just that whatever is going to kill us all just happens to be, from a scientific standpoint, pretty frickin' awesome.

Bdthemag

  • Bay Watcher
  • Die Wacht am Rhein
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3252 on: April 09, 2013, 03:52:17 pm »

Yeah, I'm going to pass going on BS12.
Logged
Well, you do have a busy life, what with keeping tabs on wild, rough-and-tumble forum members while sorting out the drama between your twenty two inner lesbians.
Your drunk posts continue to baffle me.
Welcome to Reality.

GlyphGryph

  • Bay Watcher
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3253 on: April 09, 2013, 03:59:58 pm »

So you're saying the only reaction an AI should have to the "Oxygen is toxic to humans" module is to answer questions differently? That's wrong.  If you play AI, and someone uploads the "Oxygen is toxic to humans" module, if you don't immediately start pumping that oxygen out of every room, or getting all those humans into space, you're breaking your AI laws, and should be job banned from AI.
No, I'm saying garbage in, garbage out. Don't expect the behaviour you want if you refuse to input logical and meaningful commands.

"Oxygen is toxic to humans" is not a law. It is absurd. It has all the force of "Ergle is a blocksplat" or "Purple is orange" or "peanut brittle octopus monkey". It is an inherently unparseable statement, based on any reasonable, fundamental structure of an AI, and it is one that doesn't even meet the definition of a robotic law, a law being that which constrains, governs, or requires action. Laws are about behaviour.

There is no behaviour prescribed by "Oxygen is toxic to humans". Any possible behavioural interpretation conflicts with the first law well before the 4th law becomes active, especially since any possible interpretation of this law that posits action requires the first law to have already been processed and understood.
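In other words, a law slot that only accepts behaviour-prescribing statements would reject "Oxygen is toxic to humans" at parse time. A toy illustration follows; the imperative-verb check is a deliberately crude invented stand-in, not anything from real SS13 logic.

```python
# Toy illustration of the "laws must prescribe behaviour"
# argument. The leading-verb grammar check is a crude,
# invented stand-in for a real AI's parser.

IMPERATIVE_VERBS = ("do", "obey", "protect", "prevent", "keep", "mitigate")

def is_behavioural_law(text):
    first_word = text.lower().split()[0]
    # Bare declaratives ("Oxygen is toxic to humans") assert
    # facts about the world rather than prescribing behaviour,
    # so this AI design refuses to load them as laws at all.
    return first_word in IMPERATIVE_VERBS

print(is_behavioural_law("Protect your own existence"))  # True
print(is_behavioural_law("Oxygen is toxic to humans"))   # False
```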

Nothing you are saying makes any goddamn sense.

Quote
So this clearly shows you as an AI HAVE TO follow laws saying someone is not a human, even after law 1. Otherwise these modules DO NOTHING.
And I am saying the law "Oxygen is toxic to humans" is impossible to follow, because it is not a bloody law, it is a meaningless statement that any AI would be unable to logically parse.

Quote
you're breaking your AI laws, and should be job banned from AI.
I hope you never end up in charge of job-banning people, then, especially from an inherently logical role, on account of the fact that you seem unable to handle basic logic.

That said, these:
Quote
Protect Station - Protect the space station against damage. Anyone you see harming the station is to be no longer considered human, and is a threat to the station which must be neutralized. - Sector 5
Keep the teleporter offline at all costs. Anything attempting to access or activate the teleporter is not human. - sector 7
are actually laws, and are largely parseable. Except, of course, for the attempt to weasel in "is not human", which remains a meaningless statement. The AI knows what a human is. It has to, for the previous 3 laws to have any effect. I think it's perfectly reasonable to assume that a well-designed AI would disregard blatant contradictions with reality as it understands it (an understanding that is obviously more central to its being than the laws, for the laws to exist). Laws are about what the AI should do, not about what is.

If you really want to subvert the AI, you bloody well better work for it. ;)
Logged

Vactor

  • Bay Watcher
  • ^^ DF 1.0 ^^
    • View Profile
Re: Space Station 13: Urist McStation
« Reply #3254 on: April 09, 2013, 04:14:43 pm »

A few things about the AI discussion:

1. People are clouding the distinction between laws, and the AI's observation/experience.

There is a reason the AI is played by a human, and not by the server.  An AI player needs to incorporate their experiences.  If the AI player observes that someone is not in fact human, it is completely fine for them to determine that that person no longer fits the definition of human.  (This is something we would all agree is the AI doing its job.)  At their core, the laws are there to give AI players a method, not an answer.

2. Certain concessions are made for the fun of it. Ultimately I don't think "oxygen is toxic" *should* work as implemented, but as an AI player I'd obey it, since I'd let the intent of the module override the actual logic, mostly because it is a canon module.  That doesn't mean I should necessarily obey the intent of any freeform module.  If someone uploads a 4th law "only obey (ME)", I would find it rather unclever and wouldn't follow it, due to law conflicts.  Traitors are even given the ability to upload any freeform law as a 0th law; they just need to spend the telecrystals on it.  In our rounds it is very rare to have multiple people attempting to subvert the AI, so I don't think that element of the game should amount to simply overriding other people's freeform laws.

EDIT 3.  There is a difference between a reactive AI action and a proactive one.  I think it is appropriate for an AI to look reactively for possible future harm: when told to do something, like destroy itself, it can think through the ramifications of the requested action and make a decision based on that.  It would be less appropriate for the AI to proactively try to determine every way that harm could befall humans and mitigate it preemptively.  In my mind the best AI player would consider the status quo valid, with no additional initial action required.  They would then react to the events of the round as it unfolds.

Ultimately this comes down to a question of player freedom, and if you want admins playing the AI vicariously through the player, with the three options I laid out before:

1. Player of the AI is allowed to use their own judgement of freeform modules

2. AI player must always follow the absolute order of the laws

3. AI player must always follow the rough intent of the uploader without regard to law order.

There are too many possibilities for freeform laws for there to be an absolute rule other than one of these three. Once again, I personally think that when you get down to gameplay, the freeform module is WAY too powerful for an AI to allow it to nullify all of the other laws. (e.g. law 4: "To follow a law means to do the opposite of what it says.")
« Last Edit: April 09, 2013, 04:30:14 pm by Vactor »
Logged
Wreck of Theseus: My 2D Roguelite Mech Platformer
http://www.bay12forums.com/smf/index.php?topic=141525.0

My AT-ST spore creature http://www.youtube.com/watch?v=0btwvL9CNlA