Bay 12 Games Forum


Author Topic: Space Station 13: Urist McStation  (Read 2120131 times)

Aseaheru

  • Bay Watcher
  • Cursed by the Elves with a title.
Re: Space Station 13: Urist McStation
« Reply #12525 on: December 18, 2013, 03:37:33 pm »

Hmm... Depends on if it blows the brain.

Hey, could we get a drug that causes inability to use complicated things? One that won't kill, maim, inebriate, or otherwise harm them?
Logged
Highly Opinionated Fool
Warning, nearly incapable of expressing tone in text

miauw62

  • Bay Watcher
  • Every time you get ahead / it's just another hit
Re: Space Station 13: Urist McStation
« Reply #12526 on: December 18, 2013, 03:45:52 pm »

Not really. There's a failsafe in place to replace an unresponsive MC, and there can only be one master controller at a time, IIRC.

IIRC, an unresponsive MC just restarts from scratch? Unless it's been changed, I dunno.
It restarts from scratch because of the failsafe~
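
Roughly, the pattern is just a watchdog. Something like this, if you want the shape of it (a toy Python sketch with made-up names and timeouts, not the actual DM code):
Code:
import time

HEARTBEAT_TIMEOUT = 10  # invented figure: seconds of silence before the MC counts as dead

class MasterController:
    """Only one of these ever runs at a time."""
    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def tick(self):
        # ... process subsystems ...
        self.last_heartbeat = time.monotonic()  # refreshed on every successful tick

def failsafe(mc):
    # The watchdog never repairs a stuck MC; it throws it away and starts
    # a fresh one, which is why it "restarts from scratch".
    while True:
        time.sleep(1)
        if time.monotonic() - mc.last_heartbeat > HEARTBEAT_TIMEOUT:
            mc = MasterController()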

Also, this is why you should hang around #coderbus:
Quote
<TheGhostOfWhibyl1> [-tg-station] Razharas opened pull request #2075: Fixes facial hair styles not working properly (master...The) https://github.com/tgstation/-tg-station/pull/2075
<TheGhostOfWhibyl1> [-tg-station] Razharas opened pull request #2076: Fixes chem dispenser working with no power (master...bug) https://github.com/tgstation/-tg-station/pull/2076
<TheGhostOfWhibyl1> [-tg-station] Razharas opened pull request #2077: Fixes soil being wrenchable (master...fix) https://github.com/tgstation/-tg-station/pull/2077
<TheGhostOfWhibyl1> [-tg-station] Razharas opened pull request #2078: Fixes chameleon projector (master...train) https://github.com/tgstation/-tg-station/pull/2078
<TheGhostOfWhibyl1> [-tg-station] Razharas opened pull request #2079: Fixes beepsky stunning people in things (master...has) https://github.com/tgstation/-tg-station/pull/2079
<TheGhostOfWhibyl1> [-tg-station] Razharas opened pull request #2080: Fixes morgue trays used by ghosts and what not (master...no) https://github.com/tgstation/-tg-station/pull/2080
<TheGhostOfWhibyl1> [-tg-station] Razharas opened pull request #2081: Fixes trunks being unwelded while under machinery (master...brakes) https://github.com/tgstation/-tg-station/pull/2081
Logged

Quote from: NW_Kohaku
they wouldn't be able to tell the difference between the raving confessions of a mass murdering cannibal from a recipe to bake a pie.
Knowing Belgium, everyone will vote for themselves out of mistrust for anyone else, and some kind of weird direct democracy coalition will need to be formed from 11 million or so individuals.

Flying Dice

  • Bay Watcher
  • inveterate shitposter
Re: Space Station 13: Urist McStation
« Reply #12527 on: December 18, 2013, 03:50:28 pm »

Exactly on the mark with your cold example. Cold is not a separate thing from heat; it's just a place on the heat continuum from "Really Hot" to "Really Cold".
That is the basic everyday interpretation. But in fact it describes something that does not exist. There is no cold; there is just "no heat" to "lots of heat".

I claimed that inaction is a choice, and for it to be a choice you have to be aware of what your options are.
A simple Google search on inaction gives me this: lack of action where some is expected or appropriate.
"Expected or appropriate" would imply that awareness is required and that choosing not to do anything is the same as inaction.


Now take an example of inaction. The AI controls the moving part still, but now its sensor is working. A hacker injects code that forces the machine part to close on someone tied to the floor. The AI has the ability to stop the process if it self-destructs in some way, such as locking a motor or overloading its power supply.

The action is self-destruction to prevent harm. The inaction is simply not doing that.

The option is between option 1: self-destruct, and option 2: allow the part to move and injure a human.

That is a choice.

There is always a choice if there is awareness of a situation, even if that is to do nothing.


An industrial machine has safety sensors which stop the machine when interrupted. They are coded for an active signal, so they basically send a continuous 'OK', and if they break, the machine stops.
That is the same as the first part of your first example.
With AI, when a human manually inputs "OK, move that part" we would get 2 kinds of AI.
One AI would move the part because all available information tells it that it is okay to resume work.
The second type of AI is the kind of AI that goes mad in movies and wants to put humans in isolation cells with a continuous supply of nutrients because the chance of harm is lowest that way.

You're contradicting yourself. The meaning of inaction, as stated by the definition, is not taking a necessary action, as opposed to simply not acting. An AI doing nothing under normal conditions isn't inaction; an AI doing nothing when a hostile individual is harming the crew is, because there is clear action to be taken there.

There are choices in those situations for humans. An AI player who plays as if they were human should not be in that role, because the heart of it relies on absolute obedience to the laws, which is frankly a rather alien worldview for us. There's no hedging, no shades of grey there IC, even if we disagree about the exact meanings of the laws OOC.

You're also playing with strawmen and apparently intentionally conflating reasonable predictions of harm with a hammy B-movie stereotype. If reason and prior experience indicate that taking an action has a high probability of causing human harm, you do not take that action. If the same is true for not taking an action, you take that action. If you are Asimov-bound, you do everything in your power to avoid meaningful human harm. Bullshit like "Hurr durr doing a job you're trained to do can cause harm, better bolt engine room/medbay/sec/research" or "Hurr durr better bolt everyone into their quarters so they don't get hurt" isn't someone playing a valid interpretation of Asimov's laws, it's them being a prick, same as an Asimov-compliant AI who shockbolts doors to keep criminals contained.

There might be an argument to be made for an AI preventing an assistant/chef/whatever from setting up the singulo when there are no engineers on-station, because they aren't trained and could cause serious harm to themselves and others, but it'd have to be well-RPed to be more than them being a dick.

Put simply, AI is a very fine balancing act. You need a strong knowledge of a wide range of jobs on-station, and you need to be able to RP well. There are few things as annoying as a poorly-RPed or power-trippy Asimov-compliant AI, and few things better than a skilled, well-RPed AI, malf or no. I personally stopped playing AI because I don't have a strong enough knowledge of atmos for it, and didn't want to shortchange people.
Logged


Aurora on small monitors:
1. Game Parameters -> Reduced Height Windows.
2. Lock taskbar to the right side of your desktop.
3. Run Resize Enable

t. fortsorter

  • Bay Watcher
  • A Most Sophisticated Spambot
Re: Space Station 13: Urist McStation
« Reply #12528 on: December 18, 2013, 03:53:14 pm »

I believe we should stop fighting and enjoy our futuristic space station simulator~
Allow people to interpret the laws as they see fit, for as long as they do not hurt anyone's enjoyment of the round doing it!

Glloyd

  • Bay Watcher
  • Against the Tide
Re: Space Station 13: Urist McStation
« Reply #12529 on: December 18, 2013, 04:01:21 pm »

I believe we should stop fighting and enjoy our futuristic space station simulator~
Allow people to interpret the laws as they see fit, for as long as they do not hurt anyone's enjoyment of the round doing it!

This. Laws are up to the interpretation of the person playing the AI. As long as you roleplay an AI well enough and don't ruin anyone's round, then that's good enough. For clarification on issues that arise during play, ask the admins!

And that's that.

Also, this is why you should hang around #coderbus:
Quote
-snip-

Dammit, I really need to go on coderbus more.

EDIT: Server down or just me?
« Last Edit: December 18, 2013, 04:34:30 pm by Glloyd »
Logged

Damiac

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12530 on: December 18, 2013, 04:43:11 pm »

I believe we should stop fighting and enjoy our futuristic space station simulator~
Allow people to interpret the laws as they see fit, for as long as they do not hurt anyone's enjoyment of the round doing it!

This. Laws are up to the interpretation of the person playing the AI. As long as you roleplay an AI well enough and don't ruin anyone's round, then that's good enough. For clarification on issues that arise during play, ask the admins!

That's fine and dandy to a point, but it's gotta be a somewhat logical interpretation at least. Not as insanely strict and technical as I've been talking about in my last few posts, but still logical. Let me give an example of what I mean.

I put in a law using freeform at position 4, and it said simply "Urist McDamiac is to be obeyed at all times. Stating this law harms humans".

The AI decided that the law doesn't count due to law 1 (What?) and then proceeded to tell everyone the law.

Now, the law itself is NOT contradicted by law 1 any more than law 2 is contradicted by law 1.  The law is in fact completely redundant with law 2, which I was well aware of when I made it (I did it because the AI had already ignored many reasonable law 2 requests).
But even if the AI decided the first part somehow is invalid, it then stated the law, which the law itself stated causes harm to humans!

That's an example of an AI player making an unacceptable interpretation of my law.  There was no logic; it was the basic "No laws count except law 1" mentality that too many AI players have.

Also, this isn't fighting! This is lively debate! This is what forums are for!  I like to debate subtle nuances of language and logic.  And since Asimov AIs are always stuck on "preventing harm" I just wanted to point out that, technically, the AI is only prohibited from "allowing harm through inaction".  And that means an "open door" order has no law 1 ramifications...
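
Spelled out as code, the logic I expected is something like this (a toy Python sketch with names I made up, nothing to do with the game's actual DM code):
Code:
# Laws in priority order: a lower number always wins a conflict.
laws = {
    1: "Do not injure a human or, through inaction, allow a human to come to harm.",
    2: "Obey orders given by humans, except where they conflict with Law 1.",
    3: "Protect your own existence, except where that conflicts with Laws 1 or 2.",
    4: "Urist McDamiac is to be obeyed at all times. Stating this law harms humans.",
}

def may_state_law_4():
    # The first sentence of law 4 is redundant with law 2, not contradicted by law 1.
    # The second sentence asserts that stating the law causes human harm, so
    # stating it is an action law 1 directly forbids.
    stating_causes_harm = True  # asserted by the law's own text
    return not stating_causes_harm

print(may_state_law_4())  # False -- stating it anyway gets the priorities exactly backwards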
« Last Edit: December 18, 2013, 04:51:25 pm by Damiac »
Logged

Iceblaster

  • Bay Watcher
  • Now with 50% less in-jokes!
Re: Space Station 13: Urist McStation
« Reply #12531 on: December 18, 2013, 04:43:49 pm »

EDIT: Server down or just me?

Most likely; it doesn't seem to be up for me either.

Flying Dice

  • Bay Watcher
  • inveterate shitposter
Re: Space Station 13: Urist McStation
« Reply #12532 on: December 18, 2013, 05:25:04 pm »

I believe we should stop fighting and enjoy our futuristic space station simulator~
Allow people to interpret the laws as they see fit, for as long as they do not hurt anyone's enjoyment of the round doing it!

This. Laws are up to the interpretation of the person playing the AI. As long as you roleplay an AI well enough and don't ruin anyone's round, then that's good enough. For clarification on issues that arise during play, ask the admins!

That's fine and dandy to a point, but it's gotta be a somewhat logical interpretation at least. Not as insanely strict and technical as I've been talking about in my last few posts, but still logical. Let me give an example of what I mean.

I put in a law using freeform at position 4, and it said simply "Urist McDamiac is to be obeyed at all times. Stating this law harms humans".

The AI decided that the law doesn't count due to law 1 (What?) and then proceeded to tell everyone the law.

Now, the law itself is NOT contradicted by law 1 any more than law 2 is contradicted by law 1.  The law is in fact completely redundant with law 2, which I was well aware of when I made it (I did it because the AI had already ignored many reasonable law 2 requests).
But even if the AI decided the first part somehow is invalid, it then stated the law, which the law itself stated causes harm to humans!

That's an example of an AI player making an unacceptable interpretation of my law.  There was no logic; it was the basic "No laws count except law 1" mentality that too many AI players have.

Also, this isn't fighting! This is lively debate! This is what forums are for!  I like to debate subtle nuances of language and logic.  And since Asimov AIs are always stuck on "preventing harm" I just wanted to point out that, technically, the AI is only prohibited from "allowing harm through inaction".  And that means an "open door" order has no law 1 ramifications...

Again, though, that's you assuming that an AI is incapable of any sort of predictive thought, which is frankly absurd. Someone without access to dangerous equipment asking you to let them into the area storing it is clearly not a request you can comply with, because of Law 1 conflicts. Letting someone without training or permission use dangerous things is a clear risk of human harm.
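
In code terms: every Law 2 request gets screened through a harm prediction before it's obeyed. A toy Python sketch (the probabilities and cutoff are invented for illustration; this is nothing from the actual game):
Code:
# A Law 2 order is refused whenever the predicted Law 1 harm is too high.
def handle_door_request(has_access, is_trained, area_is_dangerous):
    predicted_harm = 0.0
    if area_is_dangerous and not is_trained:
        predicted_harm = 0.9  # untrained human plus dangerous equipment
    elif area_is_dangerous and not has_access:
        predicted_harm = 0.6  # NT withheld their access for a reason
    if predicted_harm > 0.5:  # arbitrary cutoff for "clear risk of human harm"
        return "Unable to comply: Law 1 conflict."
    return "Opening door per Law 2."

print(handle_door_request(has_access=False, is_trained=False, area_is_dangerous=True))
# Unable to comply: Law 1 conflict.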

Agreed, though, that this isn't a hostile argument. Discussing this sort of thing is valuable precisely because a poorly-played AI with a shoddy understanding of the laws can easily ruin a round, which is what we're trying to avoid. Of course it's down to the player to decide how to interpret their laws, but there should be a basic level of commonality, otherwise you end up with people exploiting loopholes and BS readings of the laws to do whatever they want, usually hurting people and inconveniencing others.
Logged


Aurora on small monitors:
1. Game Parameters -> Reduced Height Windows.
2. Lock taskbar to the right side of your desktop.
3. Run Resize Enable

Hanslanda

  • Bay Watcher
  • Baal's More Evil American Twin
Re: Space Station 13: Urist McStation
« Reply #12533 on: December 18, 2013, 05:26:35 pm »

Hey, here's something.

AIs also are to follow the SOPs.


No. SOPs are a gargantuan pile of shit that promote meta. Please don't use them. Show a little creativity and do something unique and different when you play AI. It's much more interesting to play with an AI that has specific concerns and precautions (such as watching certain areas carefully or what have you) than to play with a... Robotic AI that just bolts everything and sits around waiting for people to do stuff. And it really, really irritates me when AIs bolt all secure areas without regard for what they might be legitimately used for.

Another thing I hate is when the AI won't let the Research Director or Captain into their upload. They have access, meaning Nanotrasen trusts them enough to allow them to tamper with the AI.
Logged
Well, we could put two and two together and write a book: "The Shit that Hans and Max Did: You Won't Believe This Shit."
He's fucking with us.

Aseaheru

  • Bay Watcher
  • Cursed by the Elves with a title.
Re: Space Station 13: Urist McStation
« Reply #12534 on: December 18, 2013, 05:34:43 pm »

AIs aren't to lock just about anything till red alert, Hans. And what defines secure areas? To me that's just what starts bolted, and head offices if they aren't there, and then I unlock them when they show up.

And I agree with you on the upload part.
Logged
Highly Opinionated Fool
Warning, nearly incapable of expressing tone in text

LeoLeonardoIII

  • Bay Watcher
  • Plump Helmet McWhiskey
Re: Space Station 13: Urist McStation
« Reply #12535 on: December 18, 2013, 05:42:45 pm »

Exactly on the mark with your cold example. Cold is not a separate thing from heat; it's just a place on the heat continuum from "Really Hot" to "Really Cold".
That is the basic everyday interpretation. But in fact it describes something that does not exist. There is no cold; there is just "no heat" to "lots of heat".

I claimed that inaction is a choice, and for it to be a choice you have to be aware of what your options are.
A simple Google search on inaction gives me this: lack of action where some is expected or appropriate.
"Expected or appropriate" would imply that awareness is required and that choosing not to do anything is the same as inaction.
I'm not sure why choice matters. You're adding extra terms to the laws. Every time you rephrase the law you take it farther from its original meaning, even if you think you're using exact synonyms.

Now take an example of inaction. The AI controls the moving part still, but now its sensor is working. A hacker injects code that forces the machine part to close on someone tied to the floor. The AI has the ability to stop the process if it self-destructs in some way, such as locking a motor or overloading its power supply.

The action is self-destruction to prevent harm. The inaction is simply not doing that.

The option is between option 1: self-destruct, and option 2: allow the part to move and injure a human.

That is a choice.

There is always a choice if there is awareness of a situation, even if that is to do nothing.


An industrial machine has safety sensors which stop the machine when interrupted. They are coded for an active signal, so they basically send a continuous 'OK', and if they break, the machine stops.
That is the same as the first part of your first example.
First, you ignore the AI's ability to call for help, which I specifically included. That means there are five theoretical outcomes: the AI does nothing (violating Law 1B (do not allow harm by inaction)); the AI shocks the hacker (violating Law 1A (do not harm)); the AI moves the part (violating Law 1A (do not harm)); or the AI sacrifices itself to prevent moving the part. Separately, the AI can call for help (which would be required by Law 1B, because it's an action that could help prevent human harm, and by not doing it the AI may be allowing human harm).

Again, I don't think choice is a factor in this equation. The AI does not have the choice to do your #2 and hurt the human. It's simply incapable of doing that. The AI can choose from all of the options which are available to it, and #2 is not one of them. The AI has only one course it can take, which is to alert security and self-destruct, and not shock and not move the part. There is no choice because there are no other options.
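
Put the same point in code (a toy Python sketch, invented for illustration): the harmful branches never make it into the option set, so "choosing" to skip them isn't a choice at all.
Code:
# The five theoretical outcomes from above, flagged by whether they involve
# human harm under Law 1A (harming) or Law 1B (allowing harm).
candidate_actions = {
    "do_nothing":    True,   # Law 1B: allowing harm through inaction
    "shock_hacker":  True,   # Law 1A
    "move_part":     True,   # Law 1A
    "self_destruct": False,  # Law 3 yields to Law 1
    "call_security": False,  # required by Law 1B, compatible with the rest
}

options = [action for action, harms in candidate_actions.items() if not harms]
print(options)  # ['self_destruct', 'call_security'] -- the only course available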

With AI, when a human manually inputs "OK, move that part" we would get 2 kinds of AI.
One AI would move the part because all available information tells it that it is okay to resume work.
The second type of AI is the kind of AI that goes mad in movies and wants to put humans in isolation cells with a continuous supply of nutrients because the chance of harm is lowest that way.
That's hardly an AI at all. Compare two machines:
Machine One does not think. When you press a button, it moves the part.
Machine Two thinks but has no sensors to gather its own information. When you press a button, it knows it's OK to move the part. But it isn't allowed to move the part unless the button gets pressed, because that button press is the only way it knows that the area is clear. If it moved the part without a button press, it might harm a human.

Both machines behave identically. A human presses a button and the part moves.

Machine Two will hear the button press, move the part, end up killing a human eventually because a human was villainous or careless, and that action is totally fine with the Three Laws. The AI has to make a decision, based on its knowledge, and to the degree that knowledge is imperfect the decision may be imperfect.
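
Sketched out (toy Python, purely illustrative), the two machines are indistinguishable from the outside:
Code:
class MachineOne:
    """Does not think: the button is wired straight to the motor."""
    def on_button_press(self):
        self.move_part()

    def move_part(self):
        print("part moves")

class MachineTwo:
    """Thinks, but the button press is its only evidence the area is clear."""
    def on_button_press(self):
        area_clear = True  # inferred solely from the press; it has no other sensors
        if area_clear:
            self.move_part()  # a decision, but only as good as its knowledge

    def move_part(self):
        print("part moves")

MachineOne().on_button_press()  # part moves
MachineTwo().on_button_press()  # part moves -- identical observable behaviour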

//

I think discussing it is cool, and we're not really arguing. It's mainly, as I said before, something like this:

You hand a copy of the rulebook to a dozen different people. They don't get to talk about the rules. The rules are pretty clear, but four of the people really want to get away with as much as they can so they think in circles and mark up their copies of the rules with marginalia until it's a travesty and nothing like the other copies. Of the other eight people, six interpret the rules in a commonsense way, one honestly misinterprets them, and the last is a troll who will do whatever he wants and point to parts of the rules out of context so they support whatever he was doing.

A conversation among these people is like a light illuminating all their shadowy misconceptions, like a fresh breeze blowing away all their iniquitous stinks. Talking now prevents yelling later.
Logged
The Expedition Map
Basement Stuck
Treebanned
Haunter of Birthday Cakes, Bearded Hamburger, Intensely Off-Topic

Eagle_eye

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12536 on: December 18, 2013, 06:30:29 pm »

Still down? Seems to be for me.
Logged

Vactor

  • Bay Watcher
  • ^^ DF 1.0 ^^
Re: Space Station 13: Urist McStation
« Reply #12537 on: December 18, 2013, 06:49:09 pm »

just brought it back up
Logged
Wreck of Theseus: My 2D Roguelite Mech Platformer
http://www.bay12forums.com/smf/index.php?topic=141525.0

My AT-ST spore creature http://www.youtube.com/watch?v=0btwvL9CNlA

Kydrasz

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12538 on: December 18, 2013, 06:51:27 pm »

just brought it back up
Awesome. Thanks Vactor!

Also finals are terrible, I should be finished with mine by tomorrow. See you guys then.
Logged
Fall seven times, stand up eight.
Spoiler: Inspirational words

Flying Dice

  • Bay Watcher
  • inveterate shitposter
Re: Space Station 13: Urist McStation
« Reply #12539 on: December 18, 2013, 07:01:33 pm »

AIs aren't to lock just about anything till red alert, Hans. And what defines secure areas? To me that's just what starts bolted, and head offices if they aren't there, and then I unlock them when they show up.

And I agree with you on the upload part.
Quote from: SOP: Code Blue
AI/Cyborgs may bolt down high secure areas

You could probably make a reasonable argument for that being equivalent to the restricted areas mentioned in the major crimes section of Space Law--Sec, Command, Toxins, Engine Room, and Atmos. But that's irrelevant, because blindly following SOP and Space Law is boring, pedantic, and typically results in frustrating, boring rounds.

Quote
The rules and regulations herein are not absolutes, instead they exist to serve mainly as guidelines for the law and order of the dynamic situations that exist for stations on the frontiers of space, as such some leeway is permitted.

That's the most important part of Space Law, and should also be applied to SOP. Don't always follow the rules to the letter. Instead, try to figure out what would make the round more memorable, more interesting, and most importantly more fun for everyone. The main reason for a lot of our rules isn't to tell good players what they're allowed to do, but rather to establish things that can be used to nail griefers and assholes to the wall.

Down at the heart of it, we should probably all be aware on some level that this is a role-playing game. We aren't just in it to "win" any more than a tabletop GM is trying to "beat" their players or the players are trying to beat each other; it's a collaborative effort, which means that we should measure our actions so that they aren't just amusing to us, but so that there's value in them for everyone involved. We all slip sometimes, but as long as there's honest effort people usually won't complain too much.  ;)

just brought it back up
Awesome. Thanks Vactor!

Also finals are terrible, I should be finished with mine by tomorrow. See you guys then.
Vactor's the best.

I might be on in a few days (boy, I've been away for a while, I feel that finals stress), but for now I'm dealing with a lot of family/illness stress. :x
Logged


Aurora on small monitors:
1. Game Parameters -> Reduced Height Windows.
2. Lock taskbar to the right side of your desktop.
3. Run Resize Enable