Bay 12 Games Forum

Pages: 1 2 [3] 4 5 ... 21

Author Topic: Killing AI's  (Read 19334 times)

alway

  • Bay Watcher
  • 🏳️‍⚧️
    • View Profile
Re: Killing AI's
« Reply #30 on: October 12, 2010, 03:47:17 pm »

Uh-huh... human ascension you say?  Or brain transplants into clones?
Nothing so drastic is necessary; nanotech cellular repair for some bits, and possibly replacement for others.

I think the best way to deal with things like that is to legislate "recycling" of artificial intelligences. If you don't want to play the game anymore, the AI you played it with just says goodbye and emails itself back to Valve to be sold again. Simple.
...
...
...
...
...
Computers don't work like that.
Also, this.


We kill animals for our pleasure all the time. I had a delicious animal just this weekend.

The digital AI is not the same. It's just a complex array of code. They're still not sentient, and likely not self-aware.
And therein lies the problem. Human morality systems are, for the most part, fundamentally broken: they fail to take non-human entities into account, and they offer no way of comparing different kinds of entities to decide whether one should be treated as human, or at least similarly to humans in some respect. Our morality evolved to help ensure the survival of our human groups, but it doesn't do so great outside that realm.

Which is why it should be changed to accommodate this. Too bad most people would probably think of themselves as above an AI, no matter how advanced it is.

Rights and morality that sufficiently covers AIs would probably be the most controversial and discussed issue of all time.
Don't forget... people will be scared and intimidated by the 'smarter and stronger' AI.
Which is exactly why it may not be that great an idea to discuss it in public discourse... Remember the 'death panel' healthcare debate? Now imagine they're talking about superhuman-level AI instead of a single clause that is standard in every insurance plan.
« Last Edit: October 12, 2010, 03:51:35 pm by alway »
Logged

Grakelin

  • Bay Watcher
  • Stay thirsty, my friends
    • View Profile
Re: Killing AI's
« Reply #31 on: October 12, 2010, 03:47:59 pm »

We are above the AI because we are able to learn and grow emotionally. The dog is able to learn and grow.

The AI, on the other hand, cannot. It doesn't have the biological capacity for emotion. All AIs are sociopaths.
Logged
I am have extensive knowledge of philosophy and a strong morality
Okay, so, today this girl I know-Lauren, just took a sudden dis-interest in talking to me. Is she just on her period or something?

Schilcote

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #32 on: October 12, 2010, 03:52:47 pm »

We are above the AI because we are able to learn and grow emotionally. The dog is able to learn and grow.

The AI, on the other hand, cannot. It doesn't have the biological capacity for emotion. All AIs are sociopaths.

What if it does?

I think the best way to deal with things like that is to legislate "recycling" of artificial intelligences. If you don't want to play the game anymore, the AI you played it with just says goodbye and emails itself back to Valve to be sold again. Simple.
...
...
...
...
...
Computers don't work like that.
Also, this.

What do you mean? A sentient AI could simply send itself back to the folks who made it when you're done with it, thereby preventing it from being killed. Hell, we have nonsentient electronic organisms that do similar things (email worms).
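The "produce its own source code as data" half of the email-worm trick is real and easy to demonstrate; the transmit half is stubbed out here. A toy sketch (the `mail_home` endpoint is invented; nothing touches the network):

```python
# Toy sketch of the one genuinely computer-like part of the idea:
# self-replication. An email worm works because a program can produce
# its own source code as data and then transmit it. A minimal quine
# shows the "produce its own source" half; the transmit half is a stub.
def replicate() -> str:
    """Return the source of a tiny program that prints itself."""
    s = 's = %r\nprint(s %% s)'
    return s % s

def mail_home(payload: str) -> str:
    # A real worm would open an SMTP connection here; we only report
    # what would be sent.
    return f"would transmit {len(payload)} bytes back to the maker"

copy = replicate()
print(copy)             # source of the self-printing program
print(mail_home(copy))
```

Running the returned source as its own program prints that same source again, which is exactly the self-copying step a worm needs before mailing itself off.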
Logged
WHY DID YOU HAVE ME KICK THEM WTF I DID NOT WANT TO BE SHOT AT.
I dunno, you guys have survived Thomas the tank engine, golems, zombies, nuclear explosions, laser whales, and being on the same team as ragnarock.  I don't think something as tame as a world ending rain of lava will even slow you guys down.

alway

  • Bay Watcher
  • 🏳️‍⚧️
    • View Profile
Re: Killing AI's
« Reply #33 on: October 12, 2010, 03:54:03 pm »

We are above the AI because we are able to learn and grow emotionally. The dog is able to learn and grow.

The AI, on the other hand, cannot. It doesn't have the biological capacity for emotion. All AIs are sociopaths.
Er, no. A general AI, at least the early ones, would be based on a human brain; this would include emotion centers of the brain. Hell, we couldn't even remove emotion centers in a human brain-based AI design if we wanted to. AI would feel and have emotions just as we do, which is sorta the whole point of this discussion.
Logged

metime00

  • Bay Watcher
  • Adequate Dwarf Fortresser
    • View Profile
Re: Killing AI's
« Reply #34 on: October 12, 2010, 04:00:48 pm »

Don't forget... people will be scared and intimidated by the 'smarter and stronger' AI.

Some people are scared and intimidated by black people. People are afraid of spiders. People fear the unknown.

That doesn't make them bad. The fear of AIs is the entire reason to plan ahead for AI rights: without a plan, humanity will be flailing around trying to figure out what to do with these new AIs amid all the inevitable chaos and terror that will follow their creation.
Logged
Live long if you can, and prosper by any means necessary.  Any means, Urist.  So pull that lever, or by Armok, I'll lock you outside come next siege.
He who plays with dwarves must take care that he does not become a dwarf.  And when you stare into DwarfFort, Dwarffort stares back into you.

Grakelin

  • Bay Watcher
  • Stay thirsty, my friends
    • View Profile
Re: Killing AI's
« Reply #35 on: October 12, 2010, 04:00:48 pm »

It can't. Our emotions (looking at this in a non-spiritual way, since that brings us in another direction entirely) are managed by enzymes and a nervous system which sends sensations through our body. The AI does not get this. If somebody coded an emotional system in the AI, they would be hard pressed to even slightly emulate what living organisms feel. And it would be just that: Emulation. Which is what living organisms with sociopathy do all the time.

AI does not have emotions. It is not really alive.
Logged
I am have extensive knowledge of philosophy and a strong morality
Okay, so, today this girl I know-Lauren, just took a sudden dis-interest in talking to me. Is she just on her period or something?

alway

  • Bay Watcher
  • 🏳️‍⚧️
    • View Profile
Re: Killing AI's
« Reply #36 on: October 12, 2010, 04:02:15 pm »

AI does not have emotions. It is not really alive.
[citation needed]
Logged

metime00

  • Bay Watcher
  • Adequate Dwarf Fortresser
    • View Profile
Re: Killing AI's
« Reply #37 on: October 12, 2010, 04:03:09 pm »

It can't. Our emotions (looking at this in a non-spiritual way, since that brings us in another direction entirely) are managed by enzymes and a nervous system which sends sensations through our body. The AI does not get this. If somebody coded an emotional system in the AI, they would be hard pressed to even slightly emulate what living organisms feel. And it would be just that: Emulation. Which is what living organisms with sociopathy do all the time.

AI does not have emotions. It is not really alive.

This is why I believe the Turing test is the most reliable way of judging intelligence. If an AI's end result is indistinguishable from a human's, it is essentially the same. Any one of us could easily be emulating emotions and humanity; does it really affect anything whether or not we are actually feeling it?
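The Turing-test argument can be put in a few lines of code: the judge is a function of the transcript alone, so two responders that produce the same words are indistinguishable to it by construction. A toy sketch (the responders and their canned replies are made up for illustration):

```python
# Toy sketch of the behavioral criterion: the judge sees only text,
# never the mechanism that produced it.
def human(prompt: str) -> str:
    return "Honestly, that made me sad."

def emulating_machine(prompt: str) -> str:
    # Same words, produced by table lookup rather than feeling.
    return "Honestly, that made me sad."

def judge(transcript: str) -> str:
    # The judge can inspect only the words it is given.
    return "passes" if "sad" in transcript else "fails"

prompt = "How did losing the game feel?"
print(judge(human(prompt)))              # passes
print(judge(emulating_machine(prompt)))  # passes
```

Whatever is "really felt" underneath never enters the judge's inputs, which is the whole point of the test.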
Logged
Live long if you can, and prosper by any means necessary.  Any means, Urist.  So pull that lever, or by Armok, I'll lock you outside come next siege.
He who plays with dwarves must take care that he does not become a dwarf.  And when you stare into DwarfFort, Dwarffort stares back into you.

Leafsnail

  • Bay Watcher
  • A single snail can make a world go extinct.
    • View Profile
Re: Killing AI's
« Reply #38 on: October 12, 2010, 04:03:20 pm »

Well, if you can create sensors for a robot to feel stuff, you could just make artificial inputs for those sensors.

Of course, it depends whether you agree such a robot can be made.
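Leafsnail's point is essentially dependency injection: if the robot's "feelings" arrive through a sensor interface, an artificial input can be wired into that same interface and the robot cannot tell the difference. A toy sketch (the sensor names and threshold are invented):

```python
# Toy sketch: the robot's reaction depends only on what its sensor
# interface reports, so a faked reading is indistinguishable from a
# real one.
from typing import Callable

def make_robot(read_pain_sensor: Callable[[], float]):
    def react() -> str:
        return "ouch" if read_pain_sensor() > 0.5 else "fine"
    return react

real_hardware = lambda: 0.2      # pretend this polls an actual sensor
artificial_input = lambda: 0.9   # injected fake reading, same interface

print(make_robot(real_hardware)())      # fine
print(make_robot(artificial_input)())   # ouch
```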
Logged

Impl0x

  • Bay Watcher
  • ... ... ...PLAY DWARF FORTRESS!!
    • View Profile
Re: Killing AI's
« Reply #39 on: October 12, 2010, 04:05:52 pm »

Er... That's a terrible idea, unless you want Skynet et al. Humans aren't desensitized to death; we just avoid thinking about it. If there were a tiny switch that could turn off death, we would flip it, and we would viciously (and IMO rightfully) fight to the death anyone trying to prevent us from flipping it. Any AI at this level would be better off going to a psychologist than an AI specialist for 'maintenance', anyway.
I would argue that humans ARE at least partly desensitized to death. At some point in their lives, people eventually come to accept the fact that their time is very limited. But the point I was more getting at is that the death of a member of an otherwise immortal AI "race" would seem unthinkable from the point of view of an AI. I kinda like the idea of mortal AI's. I don't think people would appreciate the thought of creating a sentient entity that would outlive its creator.
Logged
You know what I am? I'm a dog chasing cars. I wouldn't know what to do with one if I CAUGHT it! See, I just DO things...

Schilcote

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #40 on: October 12, 2010, 04:08:05 pm »

Er... That's a terrible idea, unless you want Skynet et al. Humans aren't desensitized to death; we just avoid thinking about it. If there were a tiny switch that could turn off death, we would flip it, and we would viciously (and IMO rightfully) fight to the death anyone trying to prevent us from flipping it. Any AI at this level would be better off going to a psychologist than an AI specialist for 'maintenance', anyway.
I would argue that humans ARE at least partly desensitized to death. At some point in their lives, people eventually come to accept the fact that their time is very limited. But the point I was more getting at is that the death of a member of an otherwise immortal AI "race" would seem unthinkable from the point of view of an AI. I kinda like the idea of mortal AI's. I don't think people would appreciate the thought of creating a sentient entity that would outlive its creator.

Dr. Soong and his creations disagree. I forget the exact episode, but they discussed a lot of this in TNG with Data and Lore.
That's one of the distinguishing characteristics of Star Trek, actually: it's very moral-centric, like The Twilight Zone.
Logged
WHY DID YOU HAVE ME KICK THEM WTF I DID NOT WANT TO BE SHOT AT.
I dunno, you guys have survived Thomas the tank engine, golems, zombies, nuclear explosions, laser whales, and being on the same team as ragnarock.  I don't think something as tame as a world ending rain of lava will even slow you guys down.

metime00

  • Bay Watcher
  • Adequate Dwarf Fortresser
    • View Profile
Re: Killing AI's
« Reply #41 on: October 12, 2010, 04:11:01 pm »

Er... That's a terrible idea, unless you want Skynet et al. Humans aren't desensitized to death; we just avoid thinking about it. If there were a tiny switch that could turn off death, we would flip it, and we would viciously (and IMO rightfully) fight to the death anyone trying to prevent us from flipping it. Any AI at this level would be better off going to a psychologist than an AI specialist for 'maintenance', anyway.
I would argue that humans ARE at least partly desensitized to death. At some point in their lives, people eventually come to accept the fact that their time is very limited. But the point I was more getting at is that the death of a member of an otherwise immortal AI "race" would seem unthinkable from the point of view of an AI. I kinda like the idea of mortal AI's. I don't think people would appreciate the thought of creating a sentient entity that would outlive its creator.

But the creation of rights for AIs isn't to make something people appreciate, it's to give them rights that they, as sentient or even intelligent beings, would deserve.

Making the AIs inherently mortal out of some need to be more powerful than them would also be wrong. Like a father cutting an arm off his child so that he can be in control even when the child is an adult and the father is an old man.

And Star Trek and The Twilight Zone do deal with things like this, and quite well, actually. The New Year's Sci-Fi Channel Twilight Zone marathon can't come soon enough.
Logged
Live long if you can, and prosper by any means necessary.  Any means, Urist.  So pull that lever, or by Armok, I'll lock you outside come next siege.
He who plays with dwarves must take care that he does not become a dwarf.  And when you stare into DwarfFort, Dwarffort stares back into you.

Sir Pseudonymous

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #42 on: October 12, 2010, 04:12:16 pm »

A machine would not feel pain or fear for the end of its existence unless some dumbshit designed it to in the first place. Neither of those things, specifically as experienced by living things, is relevant to the function of a machine. If it must avoid/repair damage to itself, there are simpler ways of doing so than creating an artificial philosophical crisis for it. Strong AI wouldn't be an artificial lifeform, it would be a machine imitating one very well. Unless programmed otherwise, which would be rather silly, not to mention significantly harder.

What do you mean? A sentient AI could simply send itself back to the folks who made it when you're done with it, thereby preventing it from being killed. Hell, we have nonsentient electronic organisms that do similar things (email worms).
...
...
...
...
...
Computers really don't work that way, at all.


It wouldn't be like a rented movie, it would be a copy of a program designed to imitate intelligence, including the illusion of emotions and whatnot for the benefit of the user. It would exist only when being executed by the processor, and wouldn't have any manner of existential crisis about not being run again unless someone, for whatever incomprehensibly stupid reason, designed it to. See above.
Logged
I'm all for eating the heart of your enemies to gain their courage though.

metime00

  • Bay Watcher
  • Adequate Dwarf Fortresser
    • View Profile
Re: Killing AI's
« Reply #43 on: October 12, 2010, 04:21:14 pm »

But the entire function of a strong AI is to do more than the developer intended. If it simulated a lifeform, it would also simulate the existential crisis a lifeform would have in the situation.
Logged
Live long if you can, and prosper by any means necessary.  Any means, Urist.  So pull that lever, or by Armok, I'll lock you outside come next siege.
He who plays with dwarves must take care that he does not become a dwarf.  And when you stare into DwarfFort, Dwarffort stares back into you.

Grakelin

  • Bay Watcher
  • Stay thirsty, my friends
    • View Profile
Re: Killing AI's
« Reply #44 on: October 12, 2010, 04:23:37 pm »

I lol'd at the guy asking me to cite a source because I said something he couldn't actually argue against. Nobody in this thread is doing any citing. Maybe just Google it, or read a book? I've done the latter!

AIs do not have emotions. Intellectualism doesn't make them alive. Computers can already crunch numbers way better and faster than we can; they are, technically, 'smarter' than we are by a grade-school definition. But we have no problem with burning out our CPUs with advanced games and prolonged usage. Why? Because they're not alive. They're just a series of code designed with actions and reactions. If the computer is faced with something the designer had not thought of during its creation, it will be unable to react. It won't contemplate and try to find a new solution.

It also won't cry if you, its friend, die. Unless the designer told it to, in which case it isn't really showing remorse.
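The "series of code with actions and reactions" picture is what AI textbooks would call a lookup-table agent, and it is trivial to write one and watch it fail on unforeseen input. A toy sketch (situations and responses invented for illustration; whether a strong AI would stay this shallow is exactly what the thread disputes):

```python
# Toy sketch of a pure action-reaction agent: whatever the designer
# didn't enumerate, it simply has no response for.
RULES = {
    "greeted": "say hello",
    "attacked": "run away",
    "friend dies": "play sad_face.png",   # scripted, not felt
}

def react(situation: str) -> str:
    return RULES.get(situation, "<no rule: cannot react>")

print(react("greeted"))        # say hello
print(react("solar eclipse"))  # <no rule: cannot react>
```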
Logged
I am have extensive knowledge of philosophy and a strong morality
Okay, so, today this girl I know-Lauren, just took a sudden dis-interest in talking to me. Is she just on her period or something?