Bay 12 Games Forum


Author Topic: Space Station 13: Urist McStation  (Read 2120344 times)

Tsuchigumo550

  • Bay Watcher
  • Mad Artificer
Re: Space Station 13: Urist McStation
« Reply #12495 on: December 17, 2013, 06:31:55 pm »

AIs can be made with four human brains, right?

I now want to play as an AI with four different personalities, each with varying degrees of usefulness, triggered by various events. All are helpful with, say, reasonably opening doors: one will ask for a reason, one will do it and then watch, one will open the door and alert the most closely related authority, and one will do it and leave.

For unreasonable demands (let me into the armory 4no raisin), the same order: flatly deny and alert the station over radio, open the door then trap the trespasser, flatly deny and more quietly alert authorities, or deny once and stop watching.
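The four-personality scheme above is basically a lookup table. Here's a playful Python sketch of it; the personality names and the random "event" trigger are invented for illustration (SS13 itself is written in BYOND's DM, and nothing here is from the actual game code), but the behaviour strings follow the post:

```python
import random

# Tsuchigumo550's four-brain AI as a lookup table. Personality names
# and the trigger mechanism are made up; each tuple holds the response
# to (reasonable, unreasonable) door requests.
PERSONALITIES = {
    "asker":    ("ask for a reason, then open",
                 "flatly deny and alert the station over radio"),
    "watcher":  ("open the door, then watch",
                 "open the door, then trap the trespasser"),
    "reporter": ("open and alert the most closely related authority",
                 "flatly deny and more quietly alert authorities"),
    "drifter":  ("open the door and leave",
                 "deny once and stop watching"),
}

def handle_request(reasonable: bool) -> str:
    """Whichever personality a random 'event' put in control responds."""
    active = random.choice(sorted(PERSONALITIES))
    behaviour = PERSONALITIES[active][0 if reasonable else 1]
    return f"{active}: {behaviour}"
```

Which brain answers is a coin flip here; in-game it would presumably key off whatever event last fired.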
There are words that make the booze plant possible. Just not those words.
Alright you two. Attempt to murder each other. Last one standing gets to participate in the next test.
DIRK: Pelvic thrusts will be my exclamation points.

Mapleguy555

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12496 on: December 17, 2013, 06:42:20 pm »

You just need one in an MMI, I think. Just make more AIs and name them the same.

Tsuchigumo550

  • Bay Watcher
  • Mad Artificer
Re: Space Station 13: Urist McStation
« Reply #12497 on: December 17, 2013, 06:43:40 pm »

Maybe we could have a bootleg-ass Territory round where each section has their own AI and you can grant control by making cameras

LeoLeonardoIII

  • Bay Watcher
  • Plump Helmet McWhiskey
Re: Space Station 13: Urist McStation
« Reply #12498 on: December 17, 2013, 07:09:38 pm »

Quote from: Tsuchigumo550
AIs can be made with four human brains, right?

I now want to play as an AI with four different personalities, all of them have varying degrees of usefulness, and are triggered by various events. All are helpful, say, in reasonably opening doors. One will ask for a reason, one will do it then watch, one will open the door and alert the most closely related authority, one will do it and leave.

For unreasonable demands (let me into the armory 4no raisin), the same order: Flatly deny and alert station over radio, open door then trap trespasser, flatly deny and more quietly alert authorities, or deny once and stop watching.
I don't see why the AI is even letting people into places. If you're authorized to be there, you will have a key. If you don't have a key, you're not authorized. If you lack authorization but desire it, there is a human who can help you, and his office is this way. If this is a life-threatening emergency and the human authorizer is unavailable, the AI needs to consider whether the access could be used to harm humans.

For example, if an alien is attacking and nobody has armory access or any way to get it, the AI should be all over handing out guns. Kill the alien, save the humans.

But if some traitors are attacking loyal crewmen, handing out guns will only lead to human harm. The AI isn't there to decide whether it's self-defense, or the relative value of these two humans, or whether harm is coming to itself from one human but not the other (meaning one human is an ally and the other is an enemy). They're all humans, and the AI shouldn't do anything to contribute to harm happening to them.
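The armory calculus in the two paragraphs above boils down to one predicate: arm the crew only when the threat itself isn't human, so the guns can't contribute to human harm either way. A toy sketch (the threat categories are invented for illustration, not the game's actual logic):

```python
# Toy version of the harm calculus: under law 1, handing out guns is
# only safe when no human can be on the receiving end of them.
NONHUMAN_THREATS = {"alien", "blob", "slime"}  # invented category list

def should_open_armory(threat: str) -> bool:
    """Open up only if shooting the threat cannot harm a human."""
    return threat in NONHUMAN_THREATS
```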

I think most of the AI loophole assholery is AIs trying to find any excuse they can to harm humans, or at least inconvenience them as much as possible.

I think of an AI as an intelligent system admin for the station. The all-seeing assistant.

AI should be like, "Hey HoS, there's a guy breaking into the armory," because maintaining the passive defenses is one of the AI's jobs. It's a task the AI can do and probably should work on. What else will you do, spy on the chef making clownburgers and forcefeeding them to a corgi?

Map idea: clowns are no longer human. Everyone is a clown. Try to survive as long as possible!
The Expedition Map
Basement Stuck
Treebanned
Haunter of Birthday Cakes, Bearded Hamburger, Intensely Off-Topic

Nienhaus

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12499 on: December 17, 2013, 07:10:22 pm »

I worked on this for a hard long 20 minutes.
http://puu.sh/5Q6UL.png
New map!

Corai

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12500 on: December 17, 2013, 07:21:01 pm »

I heard Nar'Sie and lore in one post.

Anyone up for spamming the thread worldbuilding for fun?
Jacob/Lee: you have a heart made of fluffy
Jeykab/Bee: how the fuck do you live your daily life corai
Jeykab/Bee: you seem like the person who constantly has mini heart attacks because cuuuute

LeoLeonardoIII

  • Bay Watcher
  • Plump Helmet McWhiskey
Re: Space Station 13: Urist McStation
« Reply #12501 on: December 17, 2013, 07:21:31 pm »

No depressing single ornament in the illicit bar?

Corai

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12502 on: December 17, 2013, 08:32:21 pm »

Server is back up.

scrdest

  • Bay Watcher
  • Girlcat?/o_ o
Re: Space Station 13: Urist McStation
« Reply #12503 on: December 18, 2013, 03:56:29 am »

Quote from: Corai
I heard Nar'Sie and lore in one post.

Anyone up for spamming the thread worldbuilding for fun?

I would say that we should do more of it in-game during Cult but goddammit, I'm never on when there's a playable amount of people.

Some things are a given, though:
  • He's referred to as 'Geometer of Blood'
  • His cults go way back, since cult structures are among ruins on Asteroid
  • His existence is vaguely known, suggesting that Chaplains or others with a degree of magical/metaphysical/occult knowledge have occasionally been fighting him.
We are doomed. It's just that whatever is going to kill us all just happens to be, from a scientific standpoint, pretty frickin' awesome.

Damiac

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12504 on: December 18, 2013, 09:16:00 am »

Quote from: Tsuchigumo550
AIs can be made with four human brains, right?

I now want to play as an AI with four different personalities, all of them have varying degrees of usefulness, and are triggered by various events. All are helpful, say, in reasonably opening doors. One will ask for a reason, one will do it then watch, one will open the door and alert the most closely related authority, one will do it and leave.

For unreasonable demands (let me into the armory 4no raisin), the same order: Flatly deny and alert station over radio, open door then trap trespasser, flatly deny and more quietly alert authorities, or deny once and stop watching.
Quote from: LeoLeonardoIII
I don't see why the AI is even letting people into places. If you're authorized to be there, you will have a key. If you don't have a key, you're not authorized. If you lack authorization but desire it, there is a human who can help you and his office is this way.

But... there are 3 laws, not just 1. The AI is letting people into places because a human being ordered the AI to let them in. Now, it's pretty acceptable to deny these requests if there's some reasonable expectation of harm, but in the absence of the possibility of harm, the AI should open the door. Every single time.  The AI does not have laws saying anything about access or authorization.

That said, I think it's a pretty acceptable break from the extreme hard logic I've been talking about for the AI to first ask for confirmation from another crew member.  Then, if they say no, you're still following law 2 when you don't open the door, because another crew member asked you to.

This idea that Nanotrasen already gave the AI some standing orders is a nice handwave, but where the hell are these pre-existing orders spelled out? Can I say my AI character was told by Nanotrasen that there are traitors aboard, and I have to stop them by bolting down all the valuables?

Quote
Quote
It simply says the AI cannot injure a human, and that the AI cannot just stand there and do nothing when humans are being harmed.
You have, in your posts today, paraphrased the law several different ways. This is an interpretation. Here is the law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." "Allow to come to harm" implies an ability to look ahead and evaluate outcomes. You just paraphrased it as if law 1 says an AI must only act if harm is immediately ongoing.

"Through inaction, allow a human being to come to harm." That's what I keep focusing on. So yeah, the AI can look ahead to determine if there are actions it should take to disallow human harm. But when the order comes over the radio, "Open this door, AI", that part of the law simply does not apply, while law 2 applies very much. So to follow their laws, the AI should open the door. Opening the door is in no way "through inaction, allowing human harm", nor does it "injure humans". Then, after following law 2, the AI can consider whether there are any actions necessary to disallow human harm.

Again, I don't expect people to play this way, because it's a lot of hoops to jump through, and it's really not how people have traditionally played AI. But what I'm really trying to point out is that THERE IS A LAW 2! In any situation where law 1 does not apply, law 2 says "follow the order". So if I say "AI, let me in the janitor's closet", there's no reason for you to refuse. You have to let me in to follow your laws.

And all this isn't to say "You must play AI this way".  Remember, AI laws do not determine how you have to play your AI. They just determine actions you must/must not take in response to certain situations, everything else is totally your call.  The AI may be built with a human brain, but it's not played like a human crewmember.  You have some freedom of interpretation, but the main thing about playing a silicon is that you have your laws, and they override everything else, including common sense, and a desire to see traitors fail.

The most fun I ever had as an Asimov AI was when I had a traitor locked down on the bridge. He said, "AI, let me go". I wanted to follow law 2, but it was crucial nobody got harmed. I was also concerned about what harm might befall him, or someone else, when someone eventually came to arrest him. So I bolted all the doors in arrivals, warned the crew to stay out of the area, set the teleporter to arrivals, and unbolted and opened the doors that led him that way. In the end, I locked him in a pod, with no harm done and law 2 followed. It's an easy way out of all law 2 requests to just say "Nope! Possible harm!". It's a lot more fun to try to find a way to follow law 2 without compromising law 1. The traitor got his greentext, and I followed my laws. Fun for all.
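The law-precedence reading argued in this post can be written as a short priority cascade. This is an illustrative Python sketch of that reading, not anything from the game's actual code; the parameter names are invented:

```python
from typing import Optional

def asimov_response(order_causes_harm: bool,
                    inaction_allows_harm: bool,
                    human_order: Optional[str]) -> str:
    """Evaluate a request under the first two Asimov laws, in priority order."""
    # Law 1, first clause: the AI may not injure a human being.
    if order_causes_harm:
        return "refuse (law 1: complying would injure a human)"
    # Law 1, inaction clause: the AI may not stand by while harm looms.
    if inaction_allows_harm:
        return "act to prevent harm (law 1: inaction clause)"
    # Law 2: obey human orders whenever law 1 is not in play.
    if human_order is not None:
        return f"comply: {human_order} (law 2)"
    return "no action required"
```

The point of the post is the third branch: when neither law 1 clause fires, law 2 leaves no room to refuse.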

wlerin

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12505 on: December 18, 2013, 10:34:54 am »

Quote from: Damiac
But... there are 3 laws, not just 1. The AI is letting people into places because a human being ordered the AI to let them in. Now, it's pretty acceptable to deny these requests if there's some reasonable expectation of harm, but in the absence of the possibility of harm, the AI should open the door. Every single time. The AI does not have laws saying anything about access or authorization.
But the AI does have existing orders that supersede those of Joe Random, both from NT and (sometimes, if they remember to do so) from the heads of staff.

Quote from: Damiac
This idea that nanotrasen already gave the AI some standing orders is a nice handwave, but where the hell are these pre-existing orders spelled out?
http://wiki.ss13.eu/index.php/Standard_Operating_Procedure

Plus anything ordered by the heads of staff and the Captain.
« Last Edit: December 18, 2013, 10:37:41 am by wlerin »
...And no one notices that a desert titan is made out of ice. No, ice capybara in the desert? Normal. Someone kinda figured out the military? Amazing!

Ivefan

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12506 on: December 18, 2013, 10:58:41 am »

Quote
Also, Inaction is the action of voluntarily not doing something. Which is fun because that theoretically means that you can voluntarily do things to allow someone else to do harm to a human.
It all depends on interpretation.
Quote
I'd really like to hear an explanation for this interpretation. Especially considering your definition of "inaction" is wrong: inaction is simply not doing something. It has nothing to do with choice.
Rather late reply, but... certainly, inaction is not doing something, but is it still inaction if the person/AI is not aware of the thing they are failing to act on? If that is the case, then we are constantly "doing" inaction, because there is certainly some action we would take about something if we just knew about it, yes?

One thing is certain in this case: for the law to apply, the AI needs to be aware of the situation that might put the human in harm's way. If the AI is aware and does not do anything, then it violates the law, unless it has exhausted all options for action.

All vocabulary has meaning, but the meaning can have deeper interpretations. For example: I forget the term for this type, but take the word "cold". We all understand the basic interpretation that it relates to a temperature lower than that of our body. But it describes something that does not exist, for cold is a lack of heat and not a thing in itself.

Nienhaus

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12507 on: December 18, 2013, 11:04:32 am »

Hey guys, so I've been sick for almost 2 weeks now. Sorry for not getting on as much. I'm going to the doctor's today to find out if I have bronchitis, which can last anywhere from 3 weeks to 3 months. So if I'm not on Glloyd, I'm sorry; I'll be back sometime.

Damiac

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #12508 on: December 18, 2013, 11:21:10 am »

Quote from: wlerin
Quote from: Damiac
But... there are 3 laws, not just 1. The AI is letting people into places because a human being ordered the AI to let them in. Now, it's pretty acceptable to deny these requests if there's some reasonable expectation of harm, but in the absence of the possibility of harm, the AI should open the door. Every single time. The AI does not have laws saying anything about access or authorization.
But the AI does have existing orders that supersede those of Joe Random, both from NT and (sometimes, if they remember to do so) from heads of staff.

Quote from: Damiac
This idea that nanotrasen already gave the AI some standing orders is a nice handwave, but where the hell are these pre-existing orders spelled out?
http://wiki.ss13.eu/index.php/Standard_Operating_Procedure

Plus anything ordered by the heads of staff and the Captain.

Well... I don't see a single thing in that link that says anything about the AI refusing access to anyone, ever. Or anything about the AI's standing orders, with the exception that the AI may bolt down high-security areas. And it doesn't say the AI should refuse any orders to then unbolt those doors.

Also, there's nothing about orders superseding other orders. Technically, law 2 only says "follow orders given by humans". So in the case of conflicting orders, the AI is following its laws regardless of which order it follows. At that point, most AI players would probably choose to follow the chain of command, and if they don't, the command staff has a serious issue on their hands.

Quote from: Ivefan
Quote
Also, Inaction is the action of voluntarily not doing something. Which is fun because that theoretically means that you can voluntarily do things to allow someone else to do harm to a human.
It all depends on interpretation.
Quote
I'd really like to hear an explanation for this interpretation. Especially considering your definition of "inaction" is wrong: inaction is simply not doing something. It has nothing to do with choice.
Rather late reply but... certainly, inaction is not doing something, but is it still inaction if the person/AI is not aware of the thing to not do anything about? If that is the case then we are constantly "doing" inaction because there certainly is some action we would do about something if we just knew about it, yes?
It's for certain in this case, for the law to apply then the AI needs to be aware of the situation that might put the human to harm. If the AI is aware and does not do anything, then it violates the law, unless it has exhausted all options for actions.

Well, at least someone seems to see what I'm trying to say.  The words "Through inaction" are a very important part of law 1. 

The really fun thing about it is the lack of symmetry this creates for the AI. The AI cannot "Injure a human".  The AI cannot "Through inaction, allow harm to a human".  "Injure" and "Allow harm" are very different.  The "don't injure humans" rule only says the AI cannot directly injure a human, not that they have to consider what might happen as a result of their actions.  That's why I keep bringing up law 2.  When I say "AI, let me in the armory", the AI is not injuring anyone by opening the door.  Nor is he "through inaction, allowing harm".  Since doing so would not break law 1, and law 2 demands he follow my order, the AI should open the door.  This is true even though the AI has a brain, and the AI probably realizes that letting this assistant in there is dangerous.  But what the AI knows and wants doesn't matter when there's a law in place. 

However, immediately after following that order, the second part of law 1 suddenly becomes pertinent. There's an unauthorized person in the armory, with harmful weapons! The AI therefore MUST take action to disallow human harm, because if the AI took no action, he could potentially "through inaction, allow human harm".

In other words, asking the AI to open a door is essentially the same as hacking it.  The AI let you in because it had to, you're still not authorized to be there.  And just like if you hacked into the armory, the AI's now responsible to take action to disallow harm.

Note, of course, that all it takes to stop this is literally any crewmember saying "AI, don't let anyone into any areas they're not authorized for".  And asking the AI to let you into a restricted area is just another form of breaking in. 
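The comply-then-react sequence described above (law 2 forces the door open, then law 1's inaction clause immediately kicks in for the now-unauthorized occupant) can be sketched as a two-step function. Everything here is a hypothetical illustration of the post's reading, not real game code:

```python
def open_door_request(area: str, authorized: bool) -> list[str]:
    """Comply-then-react: obey the law 2 order first, then treat the
    result as a fresh law 1 situation to act on."""
    # Law 2: a human ordered it, and opening a door harms no one.
    actions = [f"open {area} (law 2: a human ordered it)"]
    if not authorized:
        # Law 1, inaction clause: an unauthorized person in a restricted
        # area is a foreseeable source of human harm, so act on it.
        actions.append(f"alert security: unauthorized person in {area}")
    return actions
```

Asking the AI for the door is treated exactly like hacking it: you get in, and then you get reported.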
« Last Edit: December 18, 2013, 11:30:45 am by Damiac »

miauw62

  • Bay Watcher
  • Every time you get ahead / it's just another hit
Re: Space Station 13: Urist McStation
« Reply #12509 on: December 18, 2013, 11:47:19 am »

Tsu, an AI is made with a single brain in an MMI.

Quote from: NW_Kohaku
they wouldn't be able to tell the difference between the raving confessions of a mass murdering cannibal from a recipe to bake a pie.
Knowing Belgium, everyone will vote for themselves out of mistrust for anyone else, and some kind of weird direct democracy coalition will need to be formed from 11 million or so individuals.