Bay 12 Games Forum


Author Topic: Killing AI's  (Read 18840 times)

Bauglir

  • Bay Watcher
  • Let us make Good
Re: Killing AI's
« Reply #270 on: October 14, 2010, 10:35:41 pm »

-snip-
« Last Edit: June 09, 2015, 09:03:14 pm by Bauglir »
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky. “I am training a randomly wired neural net to play Tic-Tac-Toe,” Sussman replied. “Why is the net wired randomly?”, asked Minsky. “I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes. “Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.

Eagleon

  • Bay Watcher
Re: Killing AI's
« Reply #271 on: October 14, 2010, 10:57:17 pm »

Quote
Now, even ignoring the fact that populating a society with sapient androids is so improbable as to be impossible (though the notion of sapient machines is not), for reasons to be argued below, a sapient android would still be a machine crafted by humans, and thus its mind would be entirely customizable. It wouldn't be an animal with the impulses of an animal, nor would it need to have the same idea of "rational behavior" as a human. As silly as the Three Laws are from a design standpoint, any such set of arbitrary rules could be implanted into its mind.
Are you sure about this? If you look at neural networks, for instance, behavior is extremely emergent: the result of a comparatively small ruleset multiplied millions or billions of times to build the brain. How do you bias a general-purpose AI toward a specific set of rules without an overseer intelligence checking its every move? If you're using an expert/deductive AI hybrid (pretty much the only feasible type of AI useful in a research setting), how do you figure out everything it could possibly do, or generalize rules that force it to deduce its own conclusions, and be sure it won't mess up and decide on something horrifying?

I think this is where sci-fi authors have exaggerated our ability to control general machine intelligence. We can bias them towards being focused, or friendly, or less violent, since these things in humans are glandular or fairly basic. But we can't put restrictions around their thoughts and actions without stunting them completely.
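
To make the emergence point concrete, here's a toy Python sketch (purely illustrative; the sizes, the weights, and the three-layer depth are all arbitrary). Every unit applies the identical trivial rule, yet the input-output behavior lives in the weights as a whole, so there is no single place to bolt on a rule like "never harm a human":

Code:
import numpy as np

rng = np.random.default_rng(0)

def unit_rule(x, w):
    # the one rule every "neuron" follows: weighted sum, then squash
    return np.tanh(x @ w)

# the same rule stacked three times with random weights
weights = [rng.normal(size=(8, 8)) for _ in range(3)]

x = rng.normal(size=8)
for w in weights:
    x = unit_rule(x, w)

print(x)  # try predicting how this changes if you tweak one weight

Edit a single weight and the output shifts in ways you can only discover by running it, which is the whole problem with legislating behavior into a trained network.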
Agora: open-source, next-gen online discussions with formal outcomes!
Music, Ballpoint
Support 100% Emigration, Everyone Walking Around Confused Forever 2044

nbonaparte

  • Bay Watcher
Re: Killing AI's
« Reply #272 on: October 15, 2010, 12:42:01 am »

Yeah, that's kind of a problem. It's not an elegant solution, but a mass-produced expert system working as a sort of "ethical module" could, at a high level, disallow behaviors deemed negative. The problem there is getting it to understand what's good and what's bad. I suppose you could copy a neural network that's gotten it right.
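
A crude sketch of the shape that could take (Python; the action names and the filter are invented for illustration, and the classifier is exactly the hard part being discussed):

Code:
# Toy "ethical module": a veto layer between an agent and its actuators.
FORBIDDEN = {"harm_human", "self_replicate", "disable_oversight"}

def ethics_filter(proposed_action):
    """Return True if the proposed action may proceed."""
    return proposed_action not in FORBIDDEN

def act(agent_decision):
    if ethics_filter(agent_decision):
        print("executing:", agent_decision)
    else:
        print("vetoed:", agent_decision)

act("fetch_coffee")  # executing: fetch_coffee
act("harm_human")    # vetoed: harm_human

Of course, a lookup table like this only works if you can enumerate the bad behaviors in advance, which is the understanding problem all over again.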

And neural networks are the only known way of creating intelligence, but not the only possible way. Any Turing-complete mechanism could do it. That said, the serial processing of a CPU wouldn't work that well; it would run into the same sort of bottlenecking issues DF runs into.
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

Muz

  • Bay Watcher
Re: Killing AI's
« Reply #273 on: October 15, 2010, 10:20:35 am »

Ah, I'm doing the kind of stuff that'll enable someone to do this. Well, the simple answer is that we're extremely far away from AI that could kill people. AI can identify lines and stuff, but nowhere near the level of the human brain. And even then, it's easy to confuse them.

There's a myth perpetuated by school teachers and motivational speakers that the brain is an ultra-powerful supercomputer. It's not. It's a pattern recognition database, optimized for exactly that. It's why people memorize their multiplication tables. It's why humans take a very long time to calculate 30032+17430, whereas a simple computer does it in microseconds. It's also why robots still have a lot of trouble with facial recognition, when a person can instantly recognize a face he hasn't seen in 5 years.
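
The arithmetic half of that claim is easy to demonstrate (trivial Python; the measured time is mostly timer overhead, since the addition itself takes on the order of a nanosecond):

Code:
import time

t0 = time.perf_counter()
result = 30032 + 17430
t1 = time.perf_counter()

print(result)                           # 47462
print((t1 - t0) * 1e6, "microseconds")  # far below human reaction time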

Neural networks are great, but right now the Internet is the closest thing we have to a functioning neural network at scale, and it's not quite there yet. There are a lot of brilliant brain scientists who think the Internet could be the first human invention to gain consciousness. Seeing how big the Internet is, it's nearly impossible to squeeze it into one killer robot. As it is, we're lucky to build something with the intelligence of a cow.

That said, if you did have a super AI more advanced than this, it'd be past the technological singularity... which is to say that you can't extrapolate from what we know now to guess what it'd be capable of. Let's wait until the Internet gains consciousness as a super AI, then figure out what happens.
Disclaimer: Any sarcasm in my posts will not be mentioned as that would ruin the purpose. It is assumed that the reader is intelligent enough to tell the difference between what is sarcasm and what is not.

nbonaparte

  • Bay Watcher
Re: Killing AI's
« Reply #274 on: October 15, 2010, 10:48:47 am »

Quote
That said, if you did have a super AI more advanced than this, it'd be past the technological singularity... which is to say that you can't extrapolate from what we know now to guess what it'd be capable of. Let's wait until the Internet gains consciousness as a super AI, then figure out what happens.
A little nitpick: a super AI would be the trigger for the singularity, an intelligence capable of self-improvement. Not quite after the singularity, but right on its edge.
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

GlyphGryph

  • Bay Watcher
Re: Killing AI's
« Reply #275 on: October 15, 2010, 12:43:37 pm »

First, as to the original topic, the AI it would be wrong to kill is the AI that doesn't want to die. How smart it is doesn't have much to do with it.

Edit: Heh, reading the friendly AI thing
Quote
The Sysop Scenario also makes it clear that individual volition is one of the strongest forces in Friendliness; individual volition may even be the only part of Friendliness that matters - death wouldn't be intrinsically wrong; it would be wrong only insofar as some individual doesn't want to die.
Says pretty much the same thing about AIs killing people. :P

I agree most with Pseudonymous

People seem to be under the impression that these emergent intelligences will have the same biological drives we do, namely a desire to avoid death and continue existing. But these AIs won't have emerged from a background shaped by biological evolution. Their desires will not be the same as our own. I don't think we'll have the fine control over their development that SP seems to think. There will be things that are unforeseen. But I find it hard to see why they would value self-preservation unless we designed them to, environmentally or specifically.

Quote
Emotions are kind of a result of having a strong AI. You can't really have a "human brain" without emotions. Without a conscience? Sure, but that would be really bad (and completely stupid) to put in an AI.
Why does a Strong AI have to be anything like the human brain? And who's to say any "emotions" they have would be anything like the ones humans experience?
Quote
Well, the simple answer is that we're extremely far away from AI that could kill people.
What are you talking about? We have AI that can kill people right now. We could make AI that could kill them even better if that's the only thing we wanted it to do! I mean, the bulk of crappy AIs are built around "virtually" killing people; I can't imagine making some real-world examples would be much more difficult, especially if you were OK with them killing indiscriminately.
« Last Edit: October 15, 2010, 01:01:11 pm by GlyphGryph »

Eagleon

  • Bay Watcher
Re: Killing AI's
« Reply #276 on: October 15, 2010, 06:58:20 pm »

Quote
Why does a Strong AI have to be anything like the human brain?
It's the best example we have of the kind of AI we want: adaptive, creative, brilliant, with the edge of silicon to push it past humanity. People have tried to simplify and alter intelligence models to fit these applications. I think the closest to the goal is evolutionary programming, which might pan out into something interesting. So far it's fairly limited, because defining the rules for its progression is incredibly demanding.
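For anyone who hasn't seen it, the core loop of evolutionary programming is tiny; the difficulty really is all in the fitness rules. A deliberately minimal Python sketch, with a stand-in numeric goal where a real system would need a meaningful fitness function:

Code:
import random

TARGET = 3.14159  # stand-in goal; defining real fitness rules is the hard part

def fitness(x):
    return -abs(x - TARGET)  # closer to TARGET is fitter

population = [random.uniform(0, 10) for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                               # selection
    children = [x + random.gauss(0, 0.1) for x in survivors]  # mutation
    population = survivors + children

print(max(population, key=fitness))  # ends up near TARGET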
Quote
And who's to say any "emotions" they have would be anything like the ones humans experience?
This to me is extremely interesting, though scary. How do you predict the effects of an entirely new emotion or motive on the psyche? Or the absence of one? Sci-fi authors kind of cheat and simplify things when this happens, but emotional and personality development is incredibly complex and chaotic. At best we can predict their effect on the development of a learning intelligence. We know from humans that someone without sight from birth can still learn to perceive spatially by touch, or even develop a kind of 'vision' from a prosthesis on the tongue, to give one example (though maybe that only works for people who have had vision before, I'm forgetting). It seems reasonable that adding senses will not limit development in any one area. The effect of boredom and fatigue on learning, long considered purely detrimental, is now being examined for a possible positive role in development. What else are we missing here that might be crucial? I think the safest bet is to aim, initially, for as much as is reasonable, including a physical body (remote-operated, obviously).
Quote
Ah, I'm doing the kind of stuff that'll enable someone to do this. Well, the simple answer is that we're extremely far away from AI that could kill people. AI can identify lines and stuff, but nowhere near the level of the human brain. And even then, it's easy to confuse them.
I'm extremely curious as to what sort of stuff you're working on, though obviously you don't have to tell if you can't :) I'm working on parts of this on my own, however egotistical that is, just to give myself further insight. It's a demanding hobby, and I have no illusions that I'll beat professional researchers to the prize (especially because I believe it requires such a broad application of knowledge and engineering, plus ridiculous hardware), but I feel like I've made some leaps of insight of my own that may be of use to others. I just don't know anyone in the field well enough to put the ideas forward to them.
Agora: open-source, next-gen online discussions with formal outcomes!
Music, Ballpoint
Support 100% Emigration, Everyone Walking Around Confused Forever 2044

Muz

  • Bay Watcher
Re: Killing AI's
« Reply #277 on: October 16, 2010, 03:34:22 am »

Quote
That said, if you did have a super AI more advanced than this, it'd be past the technological singularity... which is to say that you can't extrapolate from what we know now to guess what it'd be capable of. Let's wait until the Internet gains consciousness as a super AI, then figure out what happens.
A little nitpick: a super AI would be the trigger for the singularity, an intelligence capable of self-improvement. Not quite after the singularity, but right on its edge.

Trigger or not, whatever happens next happens after the singularity. That is, if you can actually create a brilliant super AI, you won't know what it's capable of or how it'd act, and only the experts capable of building one would be able to make a decent guess.


Quote
Ah, I'm doing the kind of stuff that'll enable someone to do this. Well, the simple answer is that we're extremely far away from AI that could kill people. AI can identify lines and stuff, but nowhere near the level of the human brain. And even then, it's easy to confuse them.
I'm extremely curious as to what sort of stuff you're working on, though obviously you don't have to tell if you can't :) I'm working on parts of this on my own, however egotistical that is, just to give myself further insight. It's a demanding hobby, and I have no illusions that I'll beat professional researchers to the prize (especially because I believe it requires such a broad application of knowledge and engineering, plus ridiculous hardware), but I feel like I've made some leaps of insight of my own that may be of use to others. I just don't know anyone in the field well enough to put the ideas forward to them.

Lol, I'm not building a killer AI on my own. It's not like in sci-fi, where one mad scientist creates an army of killer robots from scratch... there's a lot people don't know, and it takes an army of very skilled, very experienced researchers to even get there.

I do things like signal processing, which is what any kind of robot uses to recognize faces and objects. I also do control systems and electronics, so I can make a good guess at where robotics technology is these days, how fast they can point a weapon, etc., and it's quite good as long as they can easily recognize the right target. My thesis was based on simulating emotions. My mom's a psychologist (very good with the neural side of things), which helps me connect current technology with actual biological intelligence. And I work on a killer robot game, so it's a fun hobby :P
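
On the "identify lines and stuff" point from earlier: low-level machine vision really is just arithmetic over pixel neighborhoods. A minimal Python/NumPy sketch of a Sobel edge detector (deliberately naive and slow; real pipelines stack many stages like this long before anything resembling recognition happens):

Code:
import numpy as np

def correlate2d(img, kernel):
    """Naive valid-mode 2D correlation, for illustration only."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def sobel_edges(img):
    """Gradient magnitude via Sobel kernels; large where intensity changes."""
    kx = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    ky = kx.T
    return np.hypot(correlate2d(img, kx), correlate2d(img, ky))

# synthetic 8x8 image with a vertical edge down the middle
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(sobel_edges(img))  # nonzero only in the columns straddling the edge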

I did attend a lecture by the CTO of Raytheon a few days ago. Raytheon is a world leader in defense equipment (they even had a pretty awesome exoskeleton thing), and they're still far from solving killer robots. It was very insightful on this sort of thing, though.

And one thing about having a super AI: it doesn't mean you can even get a killer AI. At best, you'll get a giant brain in a jar. This giant brain might be able to design good robots, but it's still very difficult. If you put it in turrets and combat robots, at best it's still like a very good marksman with very poor vision. The worst I could imagine is letting it control a fighter plane or an ICBM, or some other weapon that doesn't have to be accurate to be destructive. Even then, it's questionable. People don't spend millions training fighter pilots and risking human lives for no reason... the human mind is still far superior to the best military-grade AI, because it can make split-second decisions easily and reacts much better to unexpected changes.


If you want a short answer... I'll give you this. With pre-singularity technology, I'd say that robots will, at best, be able to specialize in a single type of attack. You might get super-snipers or something. But even then, once a human realizes how they work, they'd be able to create a specialized stealth technique that renders current killer robot models obsolete.
Disclaimer: Any sarcasm in my posts will not be mentioned as that would ruin the purpose. It is assumed that the reader is intelligent enough to tell the difference between what is sarcasm and what is not.

alway

  • Bay Watcher
  • 🏳️‍⚧️
Re: Killing AI's
« Reply #278 on: October 16, 2010, 01:54:14 pm »

Quote
though obviously you don't have to tell if you can't :)

Quote
Lol, I'm not building a killer AI on my own.
Uh huh, suuuure you aren't. *wink*

Eagleon

  • Bay Watcher
Re: Killing AI's
« Reply #279 on: October 16, 2010, 03:21:20 pm »

Quote
My thesis was based on simulating emotions. My mom's a psychologist (very good with the neural side of things), which helps me connect current technology with actual biological intelligence.
Very relevant to my interests! Is it published anywhere, or did they lock it up in some librarian's trophy cabinet, as happens so often? This is something I've worked on for a good two years of independent study. Too many AI researchers ignore existing work in emotional intelligence and development, never mind the role of a social environment.

Hehe, I missed the part where the conversation turned to killer deathbots :-[ I get excited and miss important details like that. Rest assured, I have no hopes of creating an army of robotic minions. At best, what I'm building will be a slightly creepier Furby that can't move past visual range of my server.
Agora: open-source, next-gen online discussions with formal outcomes!
Music, Ballpoint
Support 100% Emigration, Everyone Walking Around Confused Forever 2044

nbonaparte

  • Bay Watcher
Re: Killing AI's
« Reply #280 on: October 16, 2010, 04:02:50 pm »

Quote
At best, what I'm building will be a slightly creepier Furby that can't move past visual range of my server.
So basically this?
Spoiler (image omitted)
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

Sergius

  • Bay Watcher
Re: Killing AI's
« Reply #281 on: October 16, 2010, 07:23:30 pm »

Or this?

Spoiler (image omitted)

Nikov

  • Bay Watcher
  • Riverend's Flame-beater of Earth-Wounders
Re: Killing AI's
« Reply #282 on: October 16, 2010, 07:23:42 pm »

Oddly, I might have more difficulty 'killing' a sex-bot AI than a traditional AI, regardless of whether or not I actually made use of 'her'.
I should probably have my head checked, because I find myself in complete agreement with Nikov.

nbonaparte

  • Bay Watcher
Re: Killing AI's
« Reply #283 on: October 16, 2010, 07:35:02 pm »

And working off that, I can tell you what is actually going to happen. As soon as a computer becomes capable of intelligent speech, it will be accepted. It's sort of like cute animals. Just as people (even DF players ;D) are reluctant to kill a kitten, people will be reluctant to kill an AI that seems intelligent. Even if it's just a vast macro library, if it seems intelligent, people will accept it as intelligent.
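
The "vast macro library" idea is basically ELIZA, which was already fooling people in the 1960s with nothing but pattern substitution. A toy Python sketch of the trick (three rules here; a "vast" library is the same thing scaled way up):

Code:
import re

RULES = [
    (r"\bi am (.*)", "Why do you say you are {0}?"),
    (r"\bi want (.*)", "What would it mean to you to get {0}?"),
    (r"\bbecause (.*)", "Is that the real reason?"),
]

def respond(line):
    line = line.rstrip(".!?")
    for pattern, template in RULES:
        m = re.search(pattern, line, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Tell me more."

print(respond("I am afraid of being switched off."))
# -> Why do you say you are afraid of being switched off?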
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
Re: Killing AI's
« Reply #284 on: October 16, 2010, 07:40:11 pm »

Quote
Oddly, I might have more difficulty 'killing' a sex-bot AI than a traditional AI, regardless of whether or not I actually made use of 'her'.
Wouldn't your wife have long since murdered you in your sleep by the time killing a sex-bot became an issue?
Quote from: Thomas Paine
To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.
Quote
No Gods, No Masters.