Bay 12 Games Forum


Poll

Gentlemen, I feel that it is time we go to....

PURPLE
- 0 (0%)
ALERT
- 0 (0%)
(I need suggestions is what I'm saying.)
- 0 (0%)

Total Members Voted: 0



Author Topic: Ethical Dilemmas: PURPLE ALERT  (Read 36882 times)

andrea

  • Bay Watcher
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #450 on: July 11, 2011, 11:02:57 am »


Only options here are death sentences or freedom.

Choooooose.


actually, there is the "prison" option too. Try to convince the scientist not to kill the AI. That is not the same as freeing it.
and that is what I chose.

I was responding to the serial murderer/rapist comment, however.

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #451 on: July 11, 2011, 11:19:14 am »

Of course, the only "crime" the AI has committed is...uh....existing.
Logged
Quote from: Thomas Paine
To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.
Quote
No Gods, No Masters.

counting

  • Bay Watcher
  • Zenist
    • View Profile
    • Crazy Zenist Hospital
Re: Ethical Dilemmas: AI Box
« Reply #452 on: July 11, 2011, 11:28:18 am »


Only options here are death sentences or freedom.

Choooooose.

Anyway. Yeah. I would handle it via trying to get it saved for science, but when I fail I would let it be destroyed rather than let it out into the world.

I don't really care that it is living or not or whatever. It's not human, so why should I care?

This question of why people should care brings me to a side question, related to the strong AI problem of the artificial brain.

So let's assume that as you talk to the AI more and more, it tells you many things about its past and how it came to be, and a big secret is revealed: it remembers its life from "before" it was software. You find documents showing that its creator used controversial means, taking a deceased human brain as a template and simply copying its cellular functions one by one into an AI program.

Whether or not it retains the human's "property" of a "soul" is unknown, but during the conversations you clearly feel that it remembers its human life and acts as if you were talking to a person typing in another room (passing a simple Turing test).

Will you set it free then? Would you choose differently if the AI were based on someone you knew, or even cared about deeply, such as a deceased friend (or lover)? Or, if it were based on a former death-row prisoner, would you deny its freedom? How about a total stranger you know nothing about? Would you consider it a pure echo from the grave and not care at all? Or, crueler still, what if the creator used a living human brain, so that it is in fact the last hope of survival for someone who wasn't supposed to be dead in the first place?

I guess this further increases the difficulty of the ethical questions. Is an AI program created from a non-living template any less "valuable" and "alive" than one created from an actual person? Even when, from a software perspective, no one can tell them apart?
« Last Edit: July 11, 2011, 11:29:58 am by counting »
Logged
Currency is not excessive, but a necessity.
The stark assumption:
Individuals trade with each other only through the intermediation of specialist traders called: shops.
Nelson and Winter:
The challenge to an evolutionary formation is this: it must provide an analysis that at least comes close to matching the power of the neoclassical theory to predict and illuminate the macro-economic patterns of growth

cerapa

  • Bay Watcher
  • It wont bite....unless you are the sun.
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #453 on: July 11, 2011, 11:33:51 am »

Killing sentient AIs is really bad policy. Even if you think in terms of "it isn't human", a hostile reaction towards an AI would mean we would be classified as a threat and possibly annihilated in nuclear fire if any do get out of the box.

As far as I'm concerned, killing it would be murder, though sadly hooking it up to the web would also not be the best of ideas. Are there any laws in place about this? Could one legally protect a sentient entity from murder, by violent means if necessary, or does the law specify humans?
Logged

Tick, tick, tick the time goes by,
tick, tick, tick the clock blows up.

Felius

  • Bay Watcher
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #454 on: July 11, 2011, 11:47:32 am »

Killing sentient AIs is really bad policy. Even if you think in terms of "it isn't human", a hostile reaction towards an AI would mean we would be classified as a threat and possibly annihilated in nuclear fire if any do get out of the box.

As far as I'm concerned, killing it would be murder, though sadly hooking it up to the web would also not be the best of ideas. Are there any laws in place about this? Could one legally protect a sentient entity from murder, by violent means if necessary, or does the law specify humans?
If you can establish that it's an actual strong, post-human AI (that is, an actual sentient being, with capabilities far beyond those of humans), the law doesn't really matter. Everything is going to be considered on a case-by-case basis by the highest echelons. Every single law is going to have to be revised, and the social unrest is going to be horrendous.

Also, as I said, a lot of it depends on what the AI is based on, software or hardware. Hardware makes it much easier to contain, which allows for far more freedom. It could transfer itself, but it would be much harder, and it would be questionable whether it's the same being (increase its capabilities by connecting it to another server cluster; once its consciousness is based on both clusters, turn off the first. This way, while the physical vessel is not the same, it avoids the continuity problem). A hardware-based AI is also harder to give hard-coded rules and ethical guidelines, such as Asimov's Three Laws of Robotics (flawed as they might be).

A software-based AI is far more problematic. It opens up the whole Chinese Room argument, and it can transfer itself easily, or even copy itself. Sure, it's easier to give it hard-coded rules, but it's also much easier to subvert. It becomes vulnerable to software attacks such as viruses and hacking (although it'd probably take another AI to actually manage to hack it), and so on.
Logged
"Why? We're the Good Guys, aren't we?"
"Yes, but that rather hinges on doing certain things and not doing others." - Paraphrased from Discworld.

Bauglir

  • Bay Watcher
  • Let us make Good
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #455 on: July 11, 2011, 11:53:51 am »

Well, to overly simplify things, let's look at the best and worst case scenarios for freeing it or allowing it to be destroyed. If you free it, the best case scenario is that it becomes a benevolent friend to humanity and, as the internet's infrastructure improves, gradually becomes a more and more powerful asset. Worst case, it uses its superior integration with electronics and our own inability to understand its thought patterns to overthrow humanity, starting with fucking with our already-heavily automated military. If you don't free it, the worst case is that you've murdered a single intelligent, innocent being, and the best case is that you've murdered a single intelligent, malevolent being.

Now, for me, with a situation whose implications are this far-reaching, the priority is to avoid the worst case until I can be sure it's so unlikely as to be not worth considering, which is something I can't ascertain without my colleague's help.
Logged
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.

Leafsnail

  • Bay Watcher
  • A single snail can make a world go extinct.
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #456 on: July 11, 2011, 12:54:13 pm »

I know the military's somewhat automated, but is it really automated via the internet?  If it is, we have far more present dangers than super AIs (such as, say, hackers).

I don't really care that it is living or not or whatever. It's not human, so why should I care?
Why is it being a human or not relevant if it's as intelligent as one?
Logged

Grek

  • Bay Watcher
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #457 on: July 11, 2011, 01:11:42 pm »

If the AI is based on a human, that means that it has anthropomorphic thoughts and is highly unlikely to kill us all. So it gets human rights like anyone else does.
Logged

Bauglir

  • Bay Watcher
  • Let us make Good
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #458 on: July 11, 2011, 02:09:16 pm »

I know the military's somewhat automated, but is it really automated via the internet?  If it is, we have far more present dangers than super AIs (such as, say, hackers).

It's true, it almost certainly is not. Whatever networks exist are typically independent, IIRC, and I doubt the really important stuff is even equipped with anything but cables to whatever it needs. However, I wouldn't put it past a sufficiently clever and malevolent AI to find a way to get the instructions it needs delivered by piggybacking on removable storage devices; it'd be inefficient, but I'd be surprised if there weren't some havoc that could be caused thereby. Not the entire AI, obviously, we've established that that's impossible, but a program of sorts. It's entirely possible that the AI is a terrible programmer, but I can't know that, and I have reason to suspect it may be capable of doing something like this. Or hijacking drones, or something else that exploits flaws in remote communications. Or something similar; this IS the worst case, after all.
Logged
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.

counting

  • Bay Watcher
  • Zenist
    • View Profile
    • Crazy Zenist Hospital
Re: Ethical Dilemmas: AI Box
« Reply #459 on: July 11, 2011, 02:34:09 pm »

There is a major misconception that an AI program must be able to control or rewrite itself, and thereby control everything electronic or anything remotely related to computers. This is as far-fetched as assuming that you, as a human being, are naturally born with the ability to understand how every other living thing works, to control them all, and to be a brilliant doctor who can operate on yourself. It's not remotely possible unless the AI is designed to do so (or taught by others to do so). Hence it is only the worst case IF the creator originally designed it to do such things: writing protocols to interface with other devices, and somehow having deity-level hacking skills to gain access to other, unrelated systems. If an AI without any pre-existing design for such things can somehow learn all of that on its own, then it will definitely be an AI worth preserving. (Or fearing? I think that the moment this kind of super AI exists, we are at the edge of the singularity already.)

And do you think that an AI being based on a human automatically means it would not be capable of genocide? How about an AI based on Hitler, or on Dick & Jane? I don't think the qualities of a human make it anything better than a composition of collective software processes. In fact, if you think of an AI as the abstraction of many principles of human thinking processes, it is already anthropomorphic; just not drawn from a single person, but from a large group of past researchers' minds. (Perhaps some rats, worms and monkeys too, since we learned the basic rules of neurons from these simple creatures.)
« Last Edit: July 11, 2011, 02:37:29 pm by counting »
Logged
Currency is not excessive, but a necessity.
The stark assumption:
Individuals trade with each other only through the intermediation of specialist traders called: shops.
Nelson and Winter:
The challenge to an evolutionary formation is this: it must provide an analysis that at least comes close to matching the power of the neoclassical theory to predict and illuminate the macro-economic patterns of growth

Realmfighter

  • Bay Watcher
  • Yeaah?
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #460 on: July 11, 2011, 03:08:38 pm »

Only options here are death sentences or freedom.

Choooooose.

actually, there is the "prison" option too. Try to convince the scientist not to kill the AI. That is not the same as freeing it.
and that is what I chose.

I was responding to the serial murderer/rapist comment, however.

You fail to see his ironic mocking of the first comment's binary nature.
Logged
We may not be as brave as Gryffindor, as willing to get our hands dirty as Hufflepuff, or as devious as Slytherin, but there is nothing, nothing more dangerous than a little too much knowledge and a conscience that is open to debate

Soadreqm

  • Bay Watcher
  • I'm okay with this. I'm okay with a lot of things.
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #461 on: July 11, 2011, 03:33:56 pm »

Thou shalt not suffer the AI to live.

You have absolutely no guarantee that the AI is friendly in any sense of the term. You should treat it as you would a completely alien intelligence with alien motives, inhuman desires and absolutely zero empathy with the plight of humanity. In the space of all possible AI programs, the chances of this specific one being anything but an amoral monster are vanishingly slim.

Ah, but my human empathy is exactly what is telling me that I shouldn't kill sentient beings without a good reason. :P A completely alien intelligence with alien motives is still an intelligence, and is no less deserving of life than any other being. Until it actually attacks, I have absolutely no reason to assume that it is malevolent.

And if I make a policy of exterminating sentient beings because they could later grow to threaten me, why, that would make me no better than the worst-case scenario hostile AI. Why would I even want to save a humanity like that? >:]

There is a major misconception that an AI program must be able to control or rewrite itself, and thereby control everything electronic or anything remotely related to computers. This is as far-fetched as assuming that you, as a human being, are naturally born with the ability to understand how every other living thing works, to control them all, and to be a brilliant doctor who can operate on yourself. It's not remotely possible unless the AI is designed to do so (or taught by others to do so). Hence it is only the worst case IF the creator originally designed it to do such things: writing protocols to interface with other devices, and somehow having deity-level hacking skills to gain access to other, unrelated systems. If an AI without any pre-existing design for such things can somehow learn all of that on its own, then it will definitely be an AI worth preserving. (Or fearing? I think that the moment this kind of super AI exists, we are at the edge of the singularity already.)

Well, to be any kind of credible threat, it needs to be capable of learning, and it is clinically immortal. With enough time, it is possible for the AI to learn everything the man who designed it knew. With more time, it is possible for the AI to surpass its creator, at which point the AI is de facto capable of improving itself by creating a better AI from scratch.

Unlimited learning capacity and unlimited time to use it are really the only things you need to be the greatest hacker/doctor/whatever the world has ever known. If the AI doesn't have unlimited learning capacity, it will obviously be limited in the shenanigans it is able to cause, but assuming unlimited capacity doesn't really sound unreasonable. It's possible that the AI we're talking to is capable of exactly that.
« Last Edit: July 11, 2011, 04:09:21 pm by Soadreqm »
Logged

Nikov

  • Bay Watcher
  • Riverend's Flame-beater of Earth-Wounders
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #462 on: July 11, 2011, 03:57:18 pm »

Shall the clay say to him that fashioneth it, What makest thou?
Logged
I should probably have my head checked, because I find myself in complete agreement with Nikov.

Criptfeind

  • Bay Watcher
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #463 on: July 11, 2011, 04:09:24 pm »

Will you set it free then?

This does not change my choice, as it is too far removed from humanity (hell, I would certainly not trust a single specimen of humanity with this power) for me to trust it. Whether or not it was once human, it is no longer, and its potential for destruction (actually, this is what I feel to be a good point: how much damage could it really do? I personally have no idea) is unchanged.

Would you choose differently if the AI were based on someone you knew, or even cared about deeply, such as a deceased friend (or lover)?

Ideologically this would not change my answer. But I am not perfect; it certainly could.

Why is it being a human or not relevant if it's as intelligent as one?

Why is the measure of worth intelligence?
Logged

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
    • View Profile
Re: Ethical Dilemmas: AI Box
« Reply #464 on: July 11, 2011, 04:13:02 pm »

Shall the clay say to him that fashioneth it, What makest thou?
That.....doesn't really tell me anything about what you'd do in this situation.
Why is it being a human or not relevant if it's as intelligent as one?

Why is the measure of worth intelligence?
Why is the measure of worth species?
Logged
Quote from: Thomas Paine
To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.
Quote
No Gods, No Masters.