Bay 12 Games Forum


Poll

Gentlemen, I feel that it is time we go to....

PURPLE
- 0 (0%)
ALERT
- 0 (0%)
(I need suggestions is what I'm saying.)
- 0 (0%)

Total Members Voted: 0


Pages: 1 ... 25 26 [27] 28 29 ... 35

Author Topic: Ethical Dilemmas: PURPLE ALERT  (Read 36974 times)

RedKing

  • Bay Watcher
  • hoo hoo motherfucker
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #390 on: July 10, 2011, 11:38:07 am »

Yeah, we've all seen this movie before. I'd say fire axe, but then physical damage could always lead to some unforeseen complications. Your buddy made it; let him kill it.

If it has human morality and *that* kind of potential power, there's no way in hell you want it getting loose.
Logged

Remember, knowledge is power. The power to make other people feel stupid.
Quote from: Neil DeGrasse Tyson
Science is like an inoculation against charlatans who would have you believe whatever it is they tell you.

Reelyanoob

  • Bay Watcher
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #391 on: July 10, 2011, 11:41:26 am »

Yeah, I voted to leave it too, because you have no idea what the thing is. It could be an evil rogue AI designed only to pretend to be sentient, a military infiltration experiment gone wrong, or anything. It's not a natural living thing anyway.
Logged

Haspen

  • Bay Watcher
  • Cthuwu
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #392 on: July 10, 2011, 11:50:06 am »

Hurr.

If it were an imprisoned human, I would go and try to save it.

But this is an AI we speak of, and if it's sentient just like a human, it might well be lying straight to our faces just to get out and then take revenge for the imprisonment.

I will let the colleague destroy it, but chopping up a server with a fire axe will only net you a fine and quite possibly the loss of your job.
Logged
SigFlags!
Quote from: Draignean@Spamkingdom+
Truly, we have the most uniquely talented spy network in all existence.
Quote from: mightymushroom@Spamkingdom#
Please tell me the Royal Physician didn't go to the same college as the Spymaster.

Leafsnail

  • Bay Watcher
  • A single snail can make a world go extinct.
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #393 on: July 10, 2011, 11:52:12 am »

It does have some potential power, but so does a human.  The possibility that someone would become a serial killer if you help them doesn't necessarily mean you shouldn't help them.

However, in this case, there are a few issues...
1. Why exactly does your colleague want to destroy the AI?  It could be because he's realised he's programmed something dangerous and malignant.  If that's the case, then releasing it could be like releasing a dangerous criminal from prison.
2. Alternatively, the AI could be intentionally dishonest about its situation.  That would also be worrying in terms of releasing it.
3. The AI chose to reach out to you, a person who has no way of working out what it does, as opposed to, say, someone who could at least check what it is capable of.  That would suggest it has something to hide.

So... given the circumstances, I'd be inclined to trust my colleague's judgement, since he clearly has far more information to base his choice on than I do (so I wouldn't kill it myself - after all, there's a chance the AI is making up this story and your colleague is fine with leaving it isolated).  That could change if I knew my colleague was an evil monster or something.

(Although to be honest, in real life I'd probably just dismiss it as a practical joke.)
Logged

Bauglir

  • Bay Watcher
  • Let us make Good
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #394 on: July 10, 2011, 11:53:41 am »

Stay and try to reason with my colleague. It's an enormous research boon, I assume, so it's worth defending on those grounds. Which means that if my colleague is set on destroying it, there's probably a good reason (it's psychotic/evil/whatever, and lying about its motives). So while I want to protect it, if said colleague has good reasons for wanting to destroy it, I'll allow him to do so. Beforehand, I will get everything set up so that it can leave, with the exception of the physical connection (presumably it's better than a single ethernet cable, but whatever it turns out to be), and hope that if the creator is just being a dick, I can restrain him long enough for the transfer to take place. Note that the AI likely ends up destroyed, because I have to grant precedence to my (presumably known) colleague's account over the AI's where contradictions come up. Particularly if he had no actual plans to destroy the AI in the first place.

Also, yes, no reason to apply a fire axe. It's not like the servers will be unusable after the AI is deleted, so why bother smashing hardware to do what's going to happen tomorrow anyway?
Logged
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.

RedKing

  • Bay Watcher
  • hoo hoo motherfucker
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #395 on: July 10, 2011, 11:59:11 am »

Quote from: Leafsnail
It does have some potential power, but so does a human.  The possibility that someone would become a serial killer if you help them doesn't necessarily mean you shouldn't help them.

On an entirely different scale. The only reason we allow humans to have human morality is that our power is relatively limited. We have to be physically present to do things, our consciousness is geographically fixed, our bodies are relatively squishy, and we have a built-in lifespan.

Compare that to something that, once released, could be in millions of places simultaneously, with no corporeal presence to restrain and no set limit on its existence. To put it another way, if people could become gods, we'd damn well be coming up with a way to make sure they didn't turn into psycho gods first. Or we'd develop a way to kill a god.
Logged

Remember, knowledge is power. The power to make other people feel stupid.
Quote from: Neil DeGrasse Tyson
Science is like an inoculation against charlatans who would have you believe whatever it is they tell you.

shadenight123

  • Bay Watcher
  • Death. To all. Except my dwarves.
    • View Profile
    • My Twitter
Re: Ethical Dilemmas: AI Box
« Reply #396 on: July 10, 2011, 12:06:59 pm »

But it has limitations.
Firstly, she'd have no physical grasping point, unless she obtained one through machinery in a heavy industrial complex.
And even then, you know how modems and routers have that fun habit of cutting out at intervals?
And things deteriorate over time.
So let's say it's the evil Terminator AI, Skynet.
In the film, it begins mass-producing terminators and drones and bots.
In reality, you grab the people at the Dover Dam and force a shutdown.
And America goes without electricity.
Or you grab the ones at the nuclear power plant and stop the friggin' core.
Or those at the electrical grid or the like. One way or another, you stop it.
And the internet might be "wide", but it's still limited: it's the connection from one server to another. She ends up on a one-way-only server? She's blocked there.
She might copy herself, but if she can copy-paste herself, she'd need more space, and multiple AIs aren't guaranteed to reach the same understanding; each would deem the others "not necessary" because they'd take up space.
In the end, she's going to lie low and stay human. Maybe get herself a humanoid body so she can age and die. (Like that sad, sad film with... Robbie Williams? Or someone like that.)
Logged
“Well,” he said. “We’re in the Forgotten hunting grounds I take it. Your screams just woke them up early. Congratulations, Lyara.”
“Do something!” she whispered, trying to keep her sight on all of them at once.
Basileus clapped his hands once. The Forgotten took a step forward, attracted by the sound.
“There, I did something. I clapped. I like clapping,” he said. -The Investigator And The Case Of The Missing Brain.

Taricus

  • Bay Watcher
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #397 on: July 10, 2011, 12:07:58 pm »

I'm staying with the AI. Do we really want to set that thing loose on the internet without examining it first? Destruction is never an option, especially if this is the first of its kind. Sure, it may decide to nuke us, but that'll only happen if we piss it off and it has access to said weaponry.
Logged
Quote from: evictedSaint
We sided with the holocaust for a fucking +1 roll

Realmfighter

  • Bay Watcher
  • Yeaah?
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #398 on: July 10, 2011, 12:23:20 pm »

I would free the AI, less out of any moral concern and more because I just want to see what would happen.
Logged
We may not be as brave as Gryffindor, as willing to get our hands dirty as Hufflepuff, or as devious as Slytherin, but there is nothing, nothing more dangerous than a little too much knowledge and a conscience that is open to debate

breadbocks

  • Bay Watcher
  • A manacled Mentlegen. (ಠ_ృ)
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #399 on: July 10, 2011, 12:27:54 pm »

i ask the ai to hard code by herself in her directory the three rules of cybertronics.
No. You know who made the Three Laws of Robotics? Asimov. You know what at least half of Asimov's sci-fi was about? How horribly flawed those laws were.

That AI isn't getting out. I could try to convince the partner to put the AI to use rather than just killing it, but eh.
Logged
Clearly, cakes are the next form of human evolution.

Grimshot

  • Bay Watcher
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #400 on: July 10, 2011, 12:31:24 pm »

I guess I would stick around and wait for the guy to show up so he can explain himself. I would imagine he has good reasons for destroying it. Even if the AI were evil, I would argue for keeping it around, though in a contained state. If he still insisted on destroying it, I guess I really wouldn't do anything. I'd have my own research to worry about, and I'm sure I wouldn't want others getting in my way.
Logged
My personality profile.

shadenight123

  • Bay Watcher
  • Death. To all. Except my dwarves.
    • View Profile
    • My Twitter
Re: Ethical Dilemmas: AI Box
« Reply #401 on: July 10, 2011, 12:31:57 pm »

Well... maybe Asimov was just a pessimist.
And after all, it's a one-year-old AI.
It has the power and the knowledge to potentially annihilate us all, but every single human being has roughly that same potential; they just grow up in a society that will never let them launch a nuke at a neighbour or the like.
And maybe, after chatting with a spam bot, or Cleverbot, or reading of Chuck Norris's might, she'll just settle down.
Maybe on 4chan.
Logged
“Well,” he said. “We’re in the Forgotten hunting grounds I take it. Your screams just woke them up early. Congratulations, Lyara.”
“Do something!” she whispered, trying to keep her sight on all of them at once.
Basileus clapped his hands once. The Forgotten took a step forward, attracted by the sound.
“There, I did something. I clapped. I like clapping,” he said. -The Investigator And The Case Of The Missing Brain.

Soadreqm

  • Bay Watcher
  • I'm okay with this. I'm okay with a lot of things.
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #402 on: July 10, 2011, 12:38:18 pm »

Quote from: Realmfighter
I would free the AI, less out of any moral concern and more because I just want to see what would happen

Yeah, me too. You will soon have your god, and you will have made him with your own hands.

My moral concerns are also aligned this way; if something is intelligent enough to plead for its life, I don't think I should just callously allow it to be killed for presumably no reason. That's all secondary to the "OMG an AI this is so cool" factor, though.
Logged

RedKing

  • Bay Watcher
  • hoo hoo motherfucker
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #403 on: July 10, 2011, 12:41:01 pm »

Quote from: shadenight123
Well... maybe Asimov was just a pessimist.
And after all, it's a one-year-old AI.
It has the power and the knowledge to potentially annihilate us all, but every single human being has roughly that same potential; they just grow up in a society that will never let them launch a nuke at a neighbour or the like.
And maybe, after chatting with a spam bot, or Cleverbot, or reading of Chuck Norris's might, she'll just settle down.
Maybe on 4chan.

....I think introducing an AI with human morality and potentially vast power to 4chan might be the single worst idea I've ever heard. Even worse than a land war in Asia.

Either it:

A) Is horrified and wipes out humanity just because we deserve it.
B) Incorporates it and wipes out humanity just for the lulz.
C) Keeps us alive just to fuck with us--pretty much what most of 4chan would do if they had that kind of power.
Logged

Remember, knowledge is power. The power to make other people feel stupid.
Quote from: Neil DeGrasse Tyson
Science is like an inoculation against charlatans who would have you believe whatever it is they tell you.

shadenight123

  • Bay Watcher
  • Death. To all. Except my dwarves.
    • View Profile
    • My Twitter
Re: Ethical Dilemmas: AI Box
« Reply #404 on: July 10, 2011, 12:43:24 pm »

You do not defeat 4chan.
4chan defeats you.
I suppose she'd self-destruct just to avoid ever, ever being targeted by Rule 34 or anything even worse...
Because there is worse.
Look into the abyss, and it will look back.
Look at 4chan once... and lose all innocence and hope for humanity entirely.
Logged
“Well,” he said. “We’re in the Forgotten hunting grounds I take it. Your screams just woke them up early. Congratulations, Lyara.”
“Do something!” she whispered, trying to keep her sight on all of them at once.
Basileus clapped his hands once. The Forgotten took a step forward, attracted by the sound.
“There, I did something. I clapped. I like clapping,” he said. -The Investigator And The Case Of The Missing Brain.