Bay 12 Games Forum


Author Topic: Killing AI's  (Read 19528 times)

metime00

  • Bay Watcher
  • Adequate Dwarf Fortresser
    • View Profile
Re: Killing AI's
« Reply #46 on: October 12, 2010, 04:24:57 pm »

Quote
I lol'd on the guy asking me to cite because I said something that he couldn't actually argue against. Nobody in this thread is doing any citing. Maybe just google, or read a book? I've done the latter!

AIs do not have emotions. Intellectualism doesn't make them alive. The computers can already crunch numbers way better and faster than we can. They are, technically, 'smarter' than we are, by a grade school definition. But we have no problem with burning out our CPUs with advanced games and prolonged usage. Why? Because they're not alive. They're just a series of code designed with actions and reactions. If the computer is faced with something that the designer had not thought of during its creation, it will be unable to react. It won't contemplate and try to find a new solution.

Nobody is saying that the current machines are self aware or anything more than machines. At this point in time there are no moral ramifications to doing anything to computers or computer programs.

Quote
It also won't cry if you, as its friend, dies. Unless the designer told it to. In which case, it isn't really showing remorse.

Why do you show remorse? Is it something you decided to do, or did the base functions of your brain tell you to? Remorse and emotions are built into everybody's brain. The "designer" of human beings and of the other animals that show remorse told them to, whether you think that designer is of theological origin or otherwise. Human beings are just as pre-programmed as our theoretical AIs would be.

EDIT: Dang, guy below me ninja'd what I was going to say. High five. :D
« Last Edit: October 12, 2010, 04:30:52 pm by metime00 »
Logged
Live long if you can, and prosper by any means necessary.  Any means, Urist.  So pull that lever, or by Armok, I'll lock you outside come next siege.
He who plays with dwarves must take care that he does not become a dwarf.  And when you stare into DwarfFort, Dwarffort stares back into you.

nbonaparte

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #47 on: October 12, 2010, 04:28:01 pm »

Quote
I lol'd on the guy asking me to cite because I said something that he couldn't actually argue against. Nobody in this thread is doing any citing. Maybe just google, or read a book? I've done the latter!

AIs do not have emotions. Intellectualism doesn't make them alive. The computers can already crunch numbers way better and faster than we can. They are, technically, 'smarter' than we are, by a grade school definition. But we have no problem with burning out our CPUs with advanced games and prolonged usage. Why? Because they're not alive. They're just a series of code designed with actions and reactions. If the computer is faced with something that the designer had not thought of during its creation, it will be unable to react. It won't contemplate and try to find a new solution.

It also won't cry if you, as its friend, dies. Unless the designer told it to. In which case, it isn't really showing remorse.
The way a CPU is designed is fundamentally different from the way a brain is. The code for an AI is similar to our DNA: it's a framework, and we learn within that framework. An AI must learn too, or at least the first one must (you can make copies of a trained AI that are just as intelligent as the original). Learning is the difference. The way your mind works at a higher level is not governed by DNA, just as the way an AI's mind works would not be governed by the programmer. Its emotions would be just as sincere as yours.
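
As a toy illustration of that framework/learning split (nothing below is from the thread; the code, names, and numbers are all made up), one fixed piece of code, a bare-bones perceptron, ends up doing completely different things depending only on the examples it is trained on:

Code: [Select]
# Toy perceptron: the *code* (the framework) never changes; the *weights*
# (what was learned) come entirely from the training examples it sees.
def train(examples, passes=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(passes):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Same code, different "experiences": one copy learns OR, another learns AND.
or_data  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train(or_data))   # weights shaped by OR examples
print(train(and_data))  # different weights, identical framework

The same "genome" ends up behaving as an OR gate or an AND gate purely because of what it was exposed to.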
Logged
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

Leafsnail

  • Bay Watcher
  • A single snail can make a world go extinct.
    • View Profile
Re: Killing AI's
« Reply #48 on: October 12, 2010, 04:28:15 pm »

Depends - at some point, it may be possible to have a self-improving AI.
Logged

Criptfeind

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #49 on: October 12, 2010, 04:32:07 pm »

Huh. I read the first two pages, got bored and switched to the last page.

Have you guys gotten to the point yet where the people talking about real computers are arguing with the people talking about lol sci-fi computers?
Logged

Grakelin

  • Bay Watcher
  • Stay thirsty, my friends
    • View Profile
Re: Killing AI's
« Reply #50 on: October 12, 2010, 04:32:57 pm »

How do you propose a designer 'writes emotion' into the code of the AI?
Logged
I am have extensive knowledge of philosophy and a strong morality
Okay, so, today this girl I know-Lauren, just took a sudden dis-interest in talking to me. Is she just on her period or something?

nbonaparte

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #51 on: October 12, 2010, 04:34:02 pm »

By simulating a human brain (and associated hormones, just for you).
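
A crude sketch of what "simulating the neurons and the hormones" could mean at the smallest possible scale (purely illustrative; the model and every constant in it are assumptions, not something anyone in the thread specified): a single leaky integrate-and-fire neuron whose excitability is scaled by a global "hormone" level.

Code: [Select]
# Minimal leaky integrate-and-fire neuron with a hormone-like gain term.
# All constants are arbitrary; the point is only that "chemistry" can be
# one more number inside the simulation.
def simulate(inputs, hormone=1.0, leak=0.9, threshold=1.0):
    v = 0.0
    spikes = []
    for i in inputs:
        v = leak * v + hormone * i   # hormone scales how strongly input drives the cell
        if v >= threshold:
            spikes.append(1)
            v = 0.0                  # reset after a spike
        else:
            spikes.append(0)
    return spikes

stimulus = [0.3] * 20
print(sum(simulate(stimulus, hormone=1.0)))  # baseline firing rate
print(sum(simulate(stimulus, hormone=2.0)))  # same stimulus, "stressed" chemistry fires more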
Logged
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

Criptfeind

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #52 on: October 12, 2010, 04:35:30 pm »

Wait. Why the hell would we ever give computers emotions?
Logged

alway

  • Bay Watcher
  • 🏳️‍⚧️
    • View Profile
Re: Killing AI's
« Reply #53 on: October 12, 2010, 04:36:17 pm »

In order to show AI can not be 'alive' or have emotions, you must first show that such things are not the result of purely deterministic properties. Because if they are indeed the result of, as you say, neurons and chemicals, they can indeed be simulated and as such can do anything a human mind can do.
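
That determinism point can be made concrete with a trivial, admittedly non-neural example (illustrative only): a purely deterministic update rule, however erratic its output looks, reproduces exactly the same trajectory every time it is run from the same starting state, which is what makes simulating it possible in principle.

Code: [Select]
# A deterministic update rule (the chaotic logistic map). Its behavior looks
# messy, but rerunning it from the same starting state reproduces it exactly.
def trajectory(x0, steps=50, r=3.9):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

run1 = trajectory(0.2)
run2 = trajectory(0.2)
print(run1 == run2)   # True: same rules + same starting state -> identical outcome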
Logged

nbonaparte

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #54 on: October 12, 2010, 04:36:33 pm »

Quote
Wait. Why the hell would we ever give computers emotions?
It doesn't have to work like Terminator and I, Robot. Benevolent AI is much more likely.
Logged
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

metime00

  • Bay Watcher
  • Adequate Dwarf Fortresser
    • View Profile
Re: Killing AI's
« Reply #55 on: October 12, 2010, 04:37:23 pm »

Quote
Huh. I read the first two pages, got bored and switched to the last page.

Have you guys gotten to the point yet where the people talking about real computers are arguing with the people talking about lol sci-fi computers?

Sadly, it's already devolved into this. More specifically, Grakelin on the side of real computers.

Quote
How do you propose a designer 'writes emotion' into the code of the AI?

How we would create a strong AI is not the point of the discussion; the point is to figure out how we should treat them once/if they exist.
Logged
Live long if you can, and prosper by any means necessary.  Any means, Urist.  So pull that lever, or by Armok, I'll lock you outside come next siege.
He who plays with dwarves must take care that he does not become a dwarf.  And when you stare into DwarfFort, Dwarffort stares back into you.

nbonaparte

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #56 on: October 12, 2010, 04:38:49 pm »

You're underestimating the speed at which technology accelerates. What we're talking about will become possible within a few decades.
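
For a rough sense of scale (the two-year doubling period below is a Moore's-law-style assumption, not something claimed in the post), steady doubling compounds dramatically over a few decades:

Code: [Select]
# If some capability doubles every 2 years (an assumption, not a law of nature),
# this is the multiplier after a few decades.
for years in (10, 20, 30, 40):
    print(years, "years ->", 2 ** (years // 2), "x")
# 10 years -> 32 x, 20 -> 1024 x, 30 -> 32768 x, 40 -> 1048576 x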
Logged
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

Criptfeind

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #57 on: October 12, 2010, 04:40:09 pm »

Quote
Wait. Why the hell would we ever give computers emotions?
It doesn't have to work like Terminator and I, Robot. Benevolent AI is much more likely.
Yeah, but why? We can make a mommy AI if we want without giving it emotions. Who gives a crap; only a disturbed person would give a computer emotions. That only leads to wants, and then to the possibility that we cannot provide for those wants.

Quote
Sadly, it's already devolved into this. More specifically, Grakelin on the side of real computers.

Oh well good on him then. I agree with Grakelin until further notice.
Logged

metime00

  • Bay Watcher
  • Adequate Dwarf Fortresser
    • View Profile
Re: Killing AI's
« Reply #58 on: October 12, 2010, 04:41:32 pm »

Quote
Sadly, it's already devolved into this. More specifically, Grakelin on the side of real computers.

Oh well good on him then. I agree with Grakelin until further notice.

Except the discussion is about "lol sci-fi computers", not modern-day computers.
Logged
Live long if you can, and prosper by any means necessary.  Any means, Urist.  So pull that lever, or by Armok, I'll lock you outside come next siege.
He who plays with dwarves must take care that he does not become a dwarf.  And when you stare into DwarfFort, Dwarffort stares back into you.

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
    • View Profile
Re: Killing AI's
« Reply #59 on: October 12, 2010, 04:43:27 pm »

We have to remember that when it comes to Strong AI, we will almost certainly be dealing with Blue and Orange Morality at some point, either at first or later on. That will depend on whether the first method for making a Strong AI is of the "whatever works best" type or the "made in our image" type. Both seem inevitable, but I can't really say which would come first.

Quote
Er... That's a terrible idea, unless you want SkyNet et al. Humans aren't desensitized to death, we just avoid thinking about it. If there was a tiny switch that could turn off death, we would flip it, and we would viciously (and IMO rightfully) fight anyone to the death that was trying to prevent us from flipping it. Any AI at this level will be better off going to a psychologist than an AI specialist for 'maintenance', anyway.
Quote
I would argue that humans ARE at least partly desensitized to death. At some point in their lives, people eventually come to accept the fact that their time is very limited. But the point I was more getting at is that the death of a member of an otherwise immortal AI "race" would seem unthinkable from the point of view of an AI. I kinda like the idea of mortal AIs. I don't think people would appreciate the thought of creating a sentient entity that would outlive its creator.
I'd like to remind you of Freud's still-debated "Death Instinct", also known as Thanatos. It suggests that we are not desensitized to death, but rather that deep in our subconscious everyone has a desire to die. He used it to explain things such as drug use, and why some people who are afraid of heights aren't afraid that they'll fall. They're afraid that they'll jump.
I'm not really certain if we'd accept immortality or not, given that.

Warning - while you were typing 9 new replies have been posted. You may wish to review your post  >:(
Logged
Quote from: Thomas Paine
To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.
Quote
No Gods, No Masters.