Bay 12 Games Forum


Author Topic: Killing AI's  (Read 18837 times)

Grimlocke

  • Bay Watcher
  • *kobold noises*
    • View Profile
Re: Killing AI's
« Reply #240 on: October 13, 2010, 01:36:44 am »

I'm not really sure how you would classify an AI as dead. I guess you could say it's when you shut down the AI permanently, which would be closest to clinical death. But in that case, killing off AI-controlled computer characters wouldn't really kill the actual AI, as the game can simply spawn more characters and thus continue to function.

And while I don't see any reason to assume an AI would have an actual consciousness, it might be rather... distasteful to kill the AI over and over just for laughs.

Also, calling an AI 'human' is just inaccurate. A human isn't just a human brain, or anything that behaves like one. AIs will always be fundamentally different from humans, if only for the lack of an actual human body.

Lastly, I must wonder if this perfectly simulated human mind wouldn't just go into a horrible existential crisis and wind up killing itself. It must be quite frustrating to be a human mind stuck in some guy's desktop computer.
Logged
I make Grimlocke's History & Realism Mods. It's got poleaxes, sturdy joints and bloomeries. Now compatible with DF Revised!

Sir Pseudonymous

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #241 on: October 13, 2010, 01:49:01 am »

Presumably, a strong AI made for a game would fill much the same role a human GM would: controlling objects and circumstances as a human does, just in a far more integrated, dedicated, and presumably competent manner. So killing something in-game would no more touch the AI than killing the imaginary foes drawn up by a human GM touches said human.
Logged
I'm all for eating the heart of your enemies to gain their courage though.

Virex

  • Bay Watcher
  • Subjects interest attracted. Annalyses pending...
    • View Profile
Re: Killing AI's
« Reply #242 on: October 13, 2010, 04:03:16 am »

Quote from: Grimlocke
I'm not really sure how you would classify an AI as dead. I guess you could say it's when you shut down the AI permanently, which would be closest to clinical death. But in that case, killing off AI-controlled computer characters wouldn't really kill the actual AI, as the game can simply spawn more characters and thus continue to function.
Permanently erasing the program would be closer to death. Not running it would be like a permanent comatose state.
Logged

Grimlocke

  • Bay Watcher
  • *kobold noises*
    • View Profile
Re: Killing AI's
« Reply #243 on: October 13, 2010, 05:13:08 am »

I'm pretty sure a comatose human's brain isn't completely idle, but point taken; I guess brain death wouldn't occur without any actual damage to the brain.
Logged
I make Grimlocke's History & Realism Mods. It's got poleaxes, sturdy joints and bloomeries. Now compatible with DF Revised!

ECrownofFire

  • Bay Watcher
  • Resident Dragoness
    • View Profile
    • ECrownofFire
Re: Killing AI's
« Reply #244 on: October 13, 2010, 05:40:08 am »

I think the thing that everyone has to realize is that the AI in your game is NOT sapient, self-aware, or anything, really. It's a programmed set of instructions. A Strong AI is an AI that can actually do something the original programming never planned for: it can learn. The AI in your game can learn, sure, but only within that specific area, and when you shut it down, those lessons are erased. It can't even properly be called an AI; it has no intelligence, just a preprogrammed set of actions and reactions. The AI we should be talking about is a Strong AI, also known as "artificial general intelligence". The key word here is "general". An AI in a game is built specifically for that game, and doesn't come close to a Strong AI, no matter how good it is.
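The "preprogrammed set of actions and reactions" point can be sketched as a toy reflex agent. This is purely illustrative (no real game's code, and the state and action names are made up): the whole "AI" is a fixed lookup table, so nothing is learned and nothing survives a shutdown.

```python
# Toy sketch only: a game "AI" in this sense is a fixed mapping from
# observed states to scripted reactions. There is no learning step, and
# no state persists across runs -- erase the table and the "AI" is gone.
REACTIONS = {
    "player_visible": "attack",
    "low_health": "flee",
    "idle": "patrol",
}

def game_ai(state):
    """Look up the preprogrammed reaction; unknown states fall back to a default."""
    return REACTIONS.get(state, "patrol")

print(game_ai("player_visible"))  # attack
print(game_ai("underwater"))      # patrol
```

However clever the table gets, it stays specific to this one game, which is the gap between it and a general intelligence.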

There isn't much a Strong AI can do that a regular old AI can't. Almost any purpose you can think of, even a very vague one, you can program a narrow AI for. A Strong AI is actually virtually useless for getting tasks done; the only reason I can think of to create one is philosophy and things related to it. Otherwise, we have zero reason to build one in the first place. Unless we wanted to "upload" your brain or something, of course, but that's not definitely possible, and even then it's just the empty shell of an AI, built as a replacement for your brain.
Logged

Soadreqm

  • Bay Watcher
  • I'm okay with this. I'm okay with a lot of things.
    • View Profile
Re: Killing AI's
« Reply #245 on: October 13, 2010, 07:05:37 am »

As for AI dying, I don't see why it should be any different from a human dying. It's dead when it's gone. Permanently shut down. The only difference is that humans are more difficult to turn off temporarily.

Killing characters in a video game has nothing to do with this, of course. You're not killing anything; you're just making objects act out their death scripts. A walking, talking video game NPC is no more alive than an NPC ragdoll being hurled through the air by the physics engine, or a mailbox. Some objects are just scripted to act in a certain way when you perform certain actions on them, such as a Combine soldier falling down when you shoot it enough, or a terrain piece moving when you flip a switch.

It seems to me that the first benefit of an AI capable of learning is that you basically have an immortal human mind with an infinite capacity for knowledge. Hanging around for an infinitely long time constantly studying something sounds like it'd result in an entity that is pretty good at doing it. Even assuming that it doesn't just alter itself to become better at learning.
Logged

Eagleon

  • Bay Watcher
    • View Profile
    • Soundcloud
Re: Killing AI's
« Reply #246 on: October 13, 2010, 07:39:29 am »

Quote from: ECrownofFire
A Strong AI is actually virtually useless for getting tasks done; the only reason I can think of to create one is philosophy and things related to it. Otherwise, we have zero reason to build one in the first place.
Quote from: Soadreqm
It seems to me that the first benefit of an AI capable of learning is that you basically have an immortal human mind with an infinite capacity for knowledge. Hanging around for an infinitely long time constantly studying something sounds like it'd result in an entity that is pretty good at doing it. Even assuming that it doesn't just alter itself to become better at learning.
This. Particularly the last part. A Strong AI would be creative and at the same time much more powerful than a human mind. It couldn't match specialized processes, no, but like humans, it could create them. Add decreased resource usage and increased flexibility: it's very hard to make add-ons for the human body, because failure can result in painful death. This is especially interesting in terms of altered sensory awareness and interconnectivity with other AIs.
Logged
Agora: open-source, next-gen online discussions with formal outcomes!
Music, Ballpoint
Support 100% Emigration, Everyone Walking Around Confused Forever 2044

Vactor

  • Bay Watcher
  • ^^ DF 1.0 ^^
    • View Profile
Re: Killing AI's
« Reply #247 on: October 13, 2010, 08:43:42 am »

Another good point made earlier that kinda got lost is that this kind of AI isn't reliant on better hardware, unless you're trying to emulate a human mind in real time. The bigger issue is the programming, and understanding what it is that creates self-awareness. We have the technology right now to run that amount of processing; it would just take longer (i.e., one second's worth of sentient thought calculated over the course of an hour). I find this incredibly interesting as it ties into the idea of emulating an entire universe: if you do, it seems to me that you actually create a universe, and if one were to emulate our universe, you would create our universe, even if you're doing so much slower than real time.

As far as how I think this will actually pan out in law, I have a feeling a human-centric viewpoint will win out in legislatures, among both religious and scientific factions, as both hold an "only humans are self-aware" sentiment. Strong AIs will not gain the same rights as humans, just as animals that could possibly be self-aware are not granted the same rights. This is interesting because I still think the religious viewpoint calls for the protection of sentient AI: it is the equivalent of an artificial soul. If you want God to treat you a certain way, it would behoove you to act in the manner you wish others to act toward you.

And Nikov, I'm more than willing to discuss this with you. Ad absurdum is a legitimate form of argument in my book; if it made your definition of human seem foolish, that isn't because I'm not discussing in good faith.

Logged
Wreck of Theseus: My 2D Roguelite Mech Platformer
http://www.bay12forums.com/smf/index.php?topic=141525.0

My AT-ST spore creature http://www.youtube.com/watch?v=0btwvL9CNlA

Shades

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #248 on: October 13, 2010, 09:40:27 am »

Quote from: Vactor
Another good point made earlier that kinda got lost is that this kind of AI isn't reliant on better hardware, unless you're trying to emulate a human mind in real time. The bigger issue is the programming, and understanding what it is that creates self-awareness. We have the technology right now to run that amount of processing; it would just take longer (i.e., one second's worth of sentient thought calculated over the course of an hour). I find this incredibly interesting as it ties into the idea of emulating an entire universe: if you do, it seems to me that you actually create a universe, and if one were to emulate our universe, you would create our universe, even if you're doing so much slower than real time.

I'm not sure this is true; of course, it depends on what you count as hardware and what as programming, since both dictate the logic of the software running. But although raw speed isn't an issue, the way information is passed is, and it's possible the current hardware we use is not capable of this. Then again, that might just be lack of understanding, as you point out.

We do know that the human brain isn't simply a massively parallel set of processors, and also that the whole neural system and the chemical reactions around it, not just what we think of as the brain, affect the thought process significantly.
Logged
Its like playing god with sentient legos. - They Got Leader
[Dwarf Fortress] plays like a dizzyingly complex hybrid of Dungeon Keeper and The Sims, if all your little people were manic-depressive alcoholics. - tv tropes
You don't use science to show that you're right, you use science to become right. - xkcd

Duke 2.0

  • Bay Watcher
  • [CONQUISTADOR:BIRD]
    • View Profile
Re: Killing AI's
« Reply #249 on: October 13, 2010, 09:49:22 am »

This. Particularly the last part. A Strong AI would be creative and at the same time much more powerful than a human mind. It couldn't match specialized processes, no, but like humans, it can create them. Also decreased resource usage, and increased flexibility - it's very hard to make add-ons for the human body because it can result in painful death. This is especially interesting in terms of altered sensory awareness and interconnectivity with other AIs.
This is a bit of a stretch. It just feels like people predicting household robot servants fueling a society of human laziness after somebody makes the first warehouse-sized computer. Sure, we've completed a component that could theoretically make it work, but there are dozens of things that need to be invented before our theory of how this is supposed to work can even be tested. We would need to cast off our current understanding of how AI works, which means the possibilities are who the hell knows.
Logged
Buck up friendo, we're all on the level here.
I would bet money Andrew has edited things retroactively, except I can't prove anything because it was edited retroactively.
MIERDO MILLAS DE VIBORAS FURIOSAS PARA ESTRANGULARTE MUERTO

Shinziril

  • Bay Watcher
  • !!SCIENCE!!
    • View Profile
Re: Killing AI's
« Reply #250 on: October 13, 2010, 09:56:11 am »

I'm still amazed nobody has linked to Creating Friendly AI yet.

Oh wait, I just did. Go on, have a look; it's quite good, if a bit long.
Logged
Quote from: lolghurt
Quote from: Urist McTaverish
why is Dwarven science always on fire?
Because normal science is boring

Nikov

  • Bay Watcher
  • Riverend's Flame-beater of Earth-Wounders
    • View Profile
Re: Killing AI's
« Reply #251 on: October 13, 2010, 10:13:40 am »

Quote from: Vactor
And Nikov, I'm more than willing to discuss this with you. Ad absurdum is a legitimate form of argument in my book; if it made your definition of human seem foolish, that isn't because I'm not discussing in good faith.

No Vactor. Claiming I'm victimizing you and calling you a Nazi is not a legitimate form of argument, and never even touched on my definition of human. I should never have spoken to you in the first place. I honestly mistook you for Vector, who as I recall is a deep-minded math student and not a raging lunatic.
Logged
I should probably have my head checked, because I find myself in complete agreement with Nikov.

Vactor

  • Bay Watcher
  • ^^ DF 1.0 ^^
    • View Profile
Re: Killing AI's
« Reply #252 on: October 13, 2010, 11:16:45 am »

Quote from: Vactor
And Nikov, I'm more than willing to discuss this with you. Ad absurdum is a legitimate form of argument in my book; if it made your definition of human seem foolish, that isn't because I'm not discussing in good faith.

Quote from: Nikov
No Vactor. Claiming I'm victimizing you and calling you a Nazi is not a legitimate form of argument, and never even touched on my definition of human. I should never have spoken to you in the first place. I honestly mistook you for Vector, who as I recall is a deep-minded math student and not a raging lunatic.

Perhaps I misunderstood what you were getting at in your 'Godwins' post, but as I understood your argument, it was that by respecting humans because they are sentient, one opens a Pandora's box of eugenics and the killing of the disabled or those deemed unfit (something that went on under the Nazis).

Your argument seemed to be an attempt to cast those who don't found their morality on religious principle as easily bereft of morality. I found it silly for you to suggest that because of this I would somehow arrive at different moral conclusions than you about the treatment of other people, and I didn't think your questions were anything but rhetorical, meant to build your argument.

If you're still unclear why I see your definition of human as insufficient after the corpse analogy, imagine a person were to have their arm cut off. By your definition that arm is human and should have all the same rights it had when it was part of the person. It isn't sentient and has no sense of self, but it fulfills all of your qualifications. I'm sure we all agree that the arm is not a human, nor is it any longer part of a human (unless it is reattached immediately).

The raging lunatic part is confusing too.
Logged
Wreck of Theseus: My 2D Roguelite Mech Platformer
http://www.bay12forums.com/smf/index.php?topic=141525.0

My AT-ST spore creature http://www.youtube.com/watch?v=0btwvL9CNlA

Soadreqm

  • Bay Watcher
  • I'm okay with this. I'm okay with a lot of things.
    • View Profile
Re: Killing AI's
« Reply #253 on: October 13, 2010, 11:45:55 am »

Quote from: Duke 2.0
This is a bit of a stretch. It just feels like people predicting household robot servants fueling a society of human laziness after somebody makes the first warehouse-sized computer. Sure, we've completed a component that could theoretically make it work, but there are dozens of things that need to be invented before our theory of how this is supposed to work can even be tested. We would need to cast off our current understanding of how AI works, which means the possibilities are who the hell knows.

Well, if making proper AI is flat-out impossible, that solves the related ethical problems quite elegantly. :P I don't see why it would be, though. Humans are capable of learning and handling abstract concepts with fairly crude brains; why would that be impossible to duplicate? The robotic household servants could still happen, by the way. We just need the AI.

Quote from: Nikov
No, I don't value other humans because of their sentience. I value them because we are of the same clay, made by the same Creator, etc.

Since you apparently didn't abandon this thread, would you care to elaborate on this? If we assume a creator, isn't EVERYTHING of the same clay? From humans to cats to oceans? What separates people from the background noise? I think intelligence would be an obvious candidate; humans are special because they can think. And in this case, any sentient artificial intelligences would essentially be human, deserving of all rights other humans have.
Logged

Duke 2.0

  • Bay Watcher
  • [CONQUISTADOR:BIRD]
    • View Profile
Re: Killing AI's
« Reply #254 on: October 13, 2010, 11:57:28 am »

I just challenge the idea that an artificial brain would be inherently better than a human one simply because it has unlimited potential to learn. We technically do as well, but many of the limiting factors built into the brain's design are part of why it is so powerful. One could probably make a brain better than the human one at specific things (like complex, quick calculations), but any advancement would probably just be mixing and matching advantages and disadvantages, leaving it at around the same overall 'level' as the human brain, though perhaps in very different fields and aspects.

I'm not gonna touch on the morality bit, though. I'll just keep things simple, say that AI is not the same as humanity, and hope AI doesn't advance fast enough to make me a crotchety old racist.
Logged
Buck up friendo, we're all on the level here.
I would bet money Andrew has edited things retroactively, except I can't prove anything because it was edited retroactively.
MIERDO MILLAS DE VIBORAS FURIOSAS PARA ESTRANGULARTE MUERTO