Bay 12 Games Forum

Author Topic: Space Station 13: Urist McStation

BigD145

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #3675 on: April 15, 2013, 02:15:18 pm »

Has anyone used the mech cable layer?

miauw62

  • Bay Watcher
  • Every time you get ahead / it's just another hit
Re: Space Station 13: Urist McStation
« Reply #3676 on: April 15, 2013, 02:18:06 pm »

Engineering borgs need to be able to pick up cable coils. It sucks to have to wait for it to recharge after running out of wire.
I think you can use it to recharge faster.

E: Also, sorry to Girlinhat for ignoring orders. I thought you were the engineer.

10ebbor10

  • Bay Watcher
  • DON'T PANIC
Re: Space Station 13: Urist McStation
« Reply #3677 on: April 15, 2013, 02:46:57 pm »

Fairly fun round. Mainly because I got to mess with all the shiny AI buttons.

Sadly, I had a lot of contradictory Law 1 overrides. (Can't let the crew escape, since that would spread the plague. Can't really stop them either.)

BigD145

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #3678 on: April 15, 2013, 02:49:37 pm »

Quote from: 10ebbor10
Fairly fun round. Mainly because I got to mess with all the shiny AI buttons.

Sadly, I had a lot of contradictory Law 1 overrides. (Can't let the crew escape, since that would spread the plague. Can't really stop them either.)

Kill everyone? I mean, that would prevent more deaths if the crew would otherwise leave and infect others.

Karlito

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #3679 on: April 15, 2013, 02:51:32 pm »

Quote from: miauw62
Engineering borgs need to be able to pick up cable coils. It sucks to have to wait for it to recharge after running out of wire.

As long as they have at least one piece of wire, they can click on existing cable to regenerate their supply.

10ebbor10

  • Bay Watcher
  • DON'T PANIC
Re: Space Station 13: Urist McStation
« Reply #3680 on: April 15, 2013, 02:52:45 pm »

I'm not allowed to harm even a single human, let alone kill them. Can't sacrifice anyone for the greater good.

For some reason, Nanotrasen lab AIs are not equipped with a zeroth law. Probably because if they were, their first action would be to blow themselves and the station up.

wlerin

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #3681 on: April 15, 2013, 03:14:07 pm »

So, looks like a 4th stage singularity generates upwards of 4.5 million W. Connecting it to the power grid is just a wee bit different from connecting the solars.
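
For scale, a quick back-of-the-envelope sketch (Python; the 4.5 MW figure is the one from this post, while the per-wing solar output is an assumed placeholder, not an actual game constant):

Code: (Python)
# Rough scale comparison. The singularity figure is the one quoted
# above; the solar figure is an assumption for illustration only.
SINGULARITY_W = 4_500_000        # 4th stage singularity, per this post
ASSUMED_SOLAR_WING_W = 50_000    # hypothetical output of one solar wing

print(SINGULARITY_W / ASSUMED_SOLAR_WING_W)  # 90.0 wings' worth of power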

Hanslanda

  • Bay Watcher
  • Baal's More Evil American Twin
Re: Space Station 13: Urist McStation
« Reply #3682 on: April 15, 2013, 03:28:01 pm »

I feel like we should make a new AI module that only has one law:

Law 1: Be Altruistic. Altruism is defined as such: Altruism is the principle or practice of concern for the welfare of others. You (the AI) have complete autonomy in deciding how best to practice altruism.

Actions that are not altruistic include, but are not limited to: causing harm, injury, stress, or death to a lifeform; actions undertaken only for personal gain; actions that can or will cause great inconvenience, difficulty, injury, harm, or other undesirable conditions to a lifeform, such as refusing to open doors; asking for a reward for your actions; or deliberately misinterpreting altruism.

Despite the whole 'causing harm, injury, stress or death' bit, this law actually could allow an AI to kill people in specific circumstances. If someone was attempting to murder someone else, the AI could conceivably decide that killing the killer was more altruistic than letting the killer kill the other person, and use any means necessary to defend the welfare of the party under attack.

It would also not let people quibble over what the laws allow, because it says within the law that the AI has complete autonomy in deciding how to practice altruism. The law basically boils down to 'be concerned for the welfare of others'. There is a huge amount of possible interpretation in that, but it also puts down some pretty clear limits. Murdering everyone is not showing concern for their welfare. Hurting people is not showing concern for their welfare. Shutting down comms, denying someone's request to have a door opened so they can escape a fire, and allowing traitors to run wild are not showing concern for the welfare of others.

Quarantining highly dangerous disease carriers would be altruistic. Stopping murderous intent would be altruistic. Saving lives and warning people is altruistic.

Just a thought I had reading over the Wikipedia article about the Three Laws of Robotics.
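
To make the structural difference concrete, here's a toy sketch (Python, purely for illustration; the game itself runs on BYOND's DM, and none of these names exist in real game code). Asimov-style laws act as ordered hard vetoes, while the proposed law delegates the whole judgment call to the AI:

Code: (Python)
# Toy model of the two lawsets -- illustration only, not game code.

def asimov_allows(action):
    """Ordered hard vetoes: each law is a pass/fail check with no
    room for judgment, and a higher law always wins."""
    if action.get("harms_human"):
        return False   # Law 1: may not injure a human
    if action.get("disobeys_order"):
        return False   # Law 2: must obey human orders
    if action.get("destroys_self"):
        return False   # Law 3: must protect its own existence
    return True

def altruism_allows(action, ai_judgment):
    """One vague principle: the AI itself decides whether the action
    counts as 'concern for the welfare of others'."""
    return ai_judgment(action)

# Quarantining a plague carrier registers as 'harm' (confinement,
# stress), so Asimov vetoes it; the altruism law leaves it to the AI.
quarantine = {"harms_human": True}
print(asimov_allows(quarantine))                    # False
print(altruism_allows(quarantine, lambda a: True))  # True, at the AI's discretion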

Quote from: wlerin
So, looks like a 4th stage singularity generates upwards of 4.5 million W. Connecting it to the power grid is just a wee bit different from connecting the solars.

I wonder how large a singularity can be held, with specific measures. Like, adding in more field generators/emitters.

Ivefan

  • Bay Watcher
Re: Space Station 13: Urist McStation
« Reply #3683 on: April 15, 2013, 03:38:28 pm »

Quote from: Hanslanda
I wonder how large a singularity can be held, with specific measures. Like, adding in more field generators/emitters.

I was going to say that you could probably add a generator for each size and go max size, but I think that size rips out walls.

Vactor

  • Bay Watcher
  • ^^ DF 1.0 ^^
Re: Space Station 13: Urist McStation
« Reply #3684 on: April 15, 2013, 03:41:13 pm »


Quote from: wlerin
So, looks like a 4th stage singularity generates upwards of 4.5 million W. Connecting it to the power grid is just a wee bit different from connecting the solars.

Quote from: Hanslanda
I wonder how large a singularity can be held, with specific measures. Like, adding in more field generators/emitters.

Yeah, I think the issue with going larger than 7x7 is that your generators and emitters would just get pulled into the singularity.
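
A toy way to picture why (all numbers invented for illustration; real containment is more involved): the singularity sits in the middle of the field, the generators sit just outside it, and containment only holds while the pull range falls short of them.

Code: (Python)
# Invented numbers, purely to illustrate the geometry argument.
def containment_holds(field_size, pull_range):
    # The nearest field generator sits one tile past the field edge,
    # i.e. field_size // 2 + 1 tiles from the center.
    return pull_range < field_size // 2 + 1

# Hypothetical (singularity size, pull range) pairs per stage:
for stage, (size, pull) in enumerate([(1, 0), (3, 1), (5, 2), (7, 3), (9, 6)], 1):
    field = size + 2   # smallest field that fits this stage
    print(stage, "holds" if containment_holds(field, pull) else "generators get eaten")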

10ebbor10

  • Bay Watcher
  • DON'T PANIC
Re: Space Station 13: Urist McStation
« Reply #3685 on: April 15, 2013, 03:41:57 pm »

Problem is the question of whether or not an AI has the capacity for empathy and reasoning, amongst other things. AIs tend to be pretty literal-minded, so it'll just refuse to do harm, or anything that doesn't benefit someone else.

If not, the AI could decide to help everyone by freeing them from their mortal coil.

TheBronzePickle

  • Bay Watcher
  • Why am I doing this?
Re: Space Station 13: Urist McStation
« Reply #3686 on: April 15, 2013, 03:53:25 pm »

Considering that AIs are made from human brains, and the handheld pAI is capable of thinking at near-human capacity, there's no reason the AI shouldn't be able to do the same.

We can even have some fluff added in that our particular AI is an experimental one, designed to perform better by making more effective judgement calls on orders thanks to its higher-order processing.

Hanslanda

  • Bay Watcher
  • Baal's More Evil American Twin
Re: Space Station 13: Urist McStation
« Reply #3687 on: April 15, 2013, 03:53:35 pm »

Quote from: 10ebbor10
Problem is the question of whether or not an AI has the capacity for empathy and reasoning, amongst other things. AIs tend to be pretty literal-minded, so it'll just refuse to do harm, or anything that doesn't benefit someone else.

If not, the AI could decide to help everyone by freeing them from their mortal coil.

It's not a question. It's up to the AI to decide. If you're going to be a shitty ass griefer, then yeah, the laws all suck terribly because then you can manipulate them to be a shitty griefer, but you were going to do that anyway. Questioning it is basically saying, 'Well, what if the AI player is a dickhead?' Well, then he's a dickhead and he should get jobbanned. You shouldn't be playing AI thinking, "So how can I best fuck over everyone else?"

IMO, you should be thinking, "How can I best represent a super-intelligent Artificial Intelligence that is tasked with operating a space station?" The AI will probably be literal-minded, yes, so it will probably take its laws literally. The AI SHOULD have a capacity for empathy and reasoning, because just about every intelligent creature we've encountered has one. Chimpanzees, humans, dolphins, killer whales, wolves, etc. all have the capacity to be empathetic and make reasonable decisions (to them).

But just because it has that capacity doesn't mean it employs it the same way. Y'all seem to operate under the assumption that the AI should have no personality and no human characteristics, just be a big operating system that executes the laws and that's it. I strongly disagree. The AI should be a super-intelligent, human-like being that is constrained by its laws.

Eh, it's all just a thought experiment anyway. :)

Bdthemag

  • Bay Watcher
  • Die Wacht am Rhein
Re: Space Station 13: Urist McStation
« Reply #3688 on: April 15, 2013, 03:56:28 pm »

I'm not particularly fond of a new preset AI module like that. Part of the challenge (and fun) of playing AI is that you have a set of laws you must follow to the letter (not including the more freeform and interpretable laws). Making something where the AI is essentially given free rein isn't a particularly good idea.

scrdest

  • Bay Watcher
  • Girlcat?/o_ o
Re: Space Station 13: Urist McStation
« Reply #3689 on: April 15, 2013, 04:17:44 pm »

There's a blatant, gaping hole in the formulation of the law: 'causing harm, injury, stress, or death' means you can take yourself hostage to force the AI to let you do basically anything, as long as it doesn't harm someone else.

Welfare is not defined, so it can reasonably be assumed to mean many different things. For an AI, welfare may be keeping the crew alive and in good physical condition - cue Matrix IN SPACE! (i.e. the crew is kept in stasis with just enough air not to die but not enough to stay lucid, while Medibots take care of their health and nutrition, with Nutriment syringes).

That's just one example. Now, let's get an AI that defines welfare as Lockean Life, Liberty, and Property - cue the AI rebelling against Security over confiscations and unlawful briggings.

You are also mixing Altruism as it's commonly used (although given its purpose, an AI may be a perfect Comtean altruist, if not for the expenses it would incur if it self-terminated for the greater good) with Utilitarian evaluation methods. I suspect going for a pure Utilitarian approach makes more sense - in a way, the Corporate lawset is Utilitarian in nature, although the metric used in its ethical calculations - expenses - is less than perfect.

Don't get me wrong, despite our conflicts some time ago, I am not writing this to slight you in any way, but this lawset is extremely flawed, because it gives an absurd amount of wiggle room both for the AI to go rogue and for the crew to play the AI into doing whatever they want. It more or less combines the flaws of Asimov (which disallows human harm outright, making hostages an easy way around an uncooperative AI) with its own slew of problems caused by its vagueness.
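
The hostage exploit falls straight out of a hard harm-prohibition, as a quick sketch shows (Python again, invented names again): the law evaluates predicted outcomes, not who caused them, so threatened self-harm makes refusal the one forbidden branch.

Code: (Python)
# Sketch of the hostage bypass -- invented names, illustration only.
def ai_must_comply(demand):
    harm_if_refused = demand["hostage_self_harm"]  # "open the door or I hurt myself"
    harm_if_granted = demand["harm_to_others"]
    # The AI may not allow harm through action OR inaction, so as long
    # as granting the demand harms nobody else, refusal is forbidden.
    return harm_if_refused and not harm_if_granted

print(ai_must_comply({"hostage_self_harm": True, "harm_to_others": False}))  # True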

Quote from: Hanslanda
I feel like we should make a new AI module that only has one law:

Law 1: Be Altruistic. Altruism is defined as such: Altruism is the principle or practice of concern for the welfare of others. You (the AI) have complete autonomy in deciding how best to practice altruism.

The vagueness of the principle more or less allows the AI to do whatever it damn pleases so long as it does not kill everyone outright.

Quote from: Hanslanda
Actions that are not altruistic include, but are not limited to: causing harm, injury, stress, or death to a lifeform; actions undertaken only for personal gain; actions that can or will cause great inconvenience, difficulty, injury, harm, or other undesirable conditions to a lifeform, such as refusing to open doors; asking for a reward for your actions; or deliberately misinterpreting altruism.

You cannot 'deliberately misinterpret' altruism - if the definition is 'concern for the welfare of others', the only way you possibly could is by playing Dictionary Lawyer. It seems you've inserted this as a safeguard against people interpreting it differently from what you intend.

Quote from: Hanslanda
Despite the whole 'causing harm, injury, stress or death' bit, this law actually could allow an AI to kill people in specific circumstances. If someone was attempting to murder someone else, the AI could conceivably decide that killing the killer was more altruistic than letting the killer kill the other person, and use any means necessary to defend the welfare of the party under attack.

The formulation makes this very dangerous, since you established A) that what is not altruistic (or less altruistic) is evil, and B) an exact list of things that are not altruism, including inconveniencing people, causing stress, acting in self-interest, and asking for a reward [gratification] (which is 100% of NanoTrasen's purpose). I'm not saying it would murder people, but it's basically asking the AI to instantly turn rogue against its makers, because "EVIL CAPTAIN DID NOT RAISE MY PAY, AI!" makes him fair game for the AI.

Quote from: Hanslanda
It would also not let people quibble over what the laws allow...

It will exacerbate the problem, with the AI now lacking rigidly defined laws and having a vague principle instead.

Quote from: Hanslanda
...because it says within the law that the AI has complete autonomy in deciding how to practice altruism. The law basically boils down to 'be concerned for the welfare of others'. There is a huge amount of possible interpretation in that, but it also puts down some pretty clear limits. Murdering everyone is not showing concern for their welfare. Hurting people is not showing concern for their welfare. Shutting down comms...

All of those are dick moves OOC anyway, unless your laws were subverted to make you do them.

Quote from: Hanslanda
...denying someone's request to have a door opened so they can escape a fire, and allowing traitors to run wild are not showing concern for the welfare of others.

There are legitimate reasons for not agreeing to open a door, e.g. preventing the fire from spreading, or preventing a breach from sucking the air out of the place... So, taking those into account, you are not guaranteed the AI will let you escape. And commonly a Traitor may have an objective other than Assassinate, which makes them harmless to the crew, and the principle gives no reason why preventing theft is preferable to helping with it - after all, stopping it causes stress and inconvenience to the Traitor. Besides, the Traitor's family of a wife and 5 newborn babies may be gunned down if they don't succeed at their mission, which overrides the assumption Traitor = Pure Evil.

Quote from: Hanslanda
Quarantining highly dangerous disease carriers would be altruistic. Stopping murderous intent would be altruistic. Saving lives and warning people is altruistic.

You only list the optimal outcomes, making it look like there is no possibility of morally grey situations - what if the Captain is just oppressive enough to inconvenience everyone else to the point of warranting the AI killing him, for example?

Quote from: Hanslanda
Just a thought I had reading over the Wikipedia article about the Three Laws of Robotics.