Bay 12 Games Forum


Author Topic: Killing AI's  (Read 19575 times)

metime00

  • Bay Watcher
  • Adequate Dwarf Fortresser
    • View Profile
Re: Killing AI's
« Reply #105 on: October 12, 2010, 05:41:51 pm »

Quote
If you created a 'biocomputer', it would be an experiment in genetics, not computing.

And simulation is not the same as 'being'. If you think it is, you are dangerous and should not be playing video games to start with. To be perfectly clear, you do not really get to drive a submarine in SH3. You do not actually drive a car really fast in GTA. The AI is not really alive and thinking.

There is no distinguishable difference between the intelligence of our proposed AI and a human. By your definition of alive and thinking, we aren't alive and thinking either.

We're simply going through the motions as our neurons dictate, just as an AI would with its CPU when presented with a situation. It's just that theirs runs on transistors and ours on neurons.

You fail to see that the makeup of the intelligence doesn't make it any more or less intelligent. They are equal. Both living. Both thinking.
Logged
Live long if you can, and prosper by any means necessary.  Any means, Urist.  So pull that lever, or by Armok, I'll lock you outside come next siege.
He who plays with dwarves must take care that he does not become a dwarf.  And when you stare into DwarfFort, Dwarffort stares back into you.

Eagleon

  • Bay Watcher
    • View Profile
    • Soundcloud
Re: Killing AI's
« Reply #106 on: October 12, 2010, 05:42:41 pm »

Psudeononymous: If humans are so imperfect and flawed, why make more of them? Why replicate more problems?

Are you advocating the elimination of the Jewish people?
Logged
Agora: open-source, next-gen online discussions with formal outcomes!
Music, Ballpoint
Support 100% Emigration, Everyone Walking Around Confused Forever 2044

alway

  • Bay Watcher
  • 🏳️‍⚧️
    • View Profile
Re: Killing AI's
« Reply #107 on: October 12, 2010, 05:43:52 pm »

True, but we are rather curious buggers, and as such there's quite a lot we do that doesn't really have a point other than to see if we can. Not to mention, the idea is just plain awesome. So while, yeah, it probably wouldn't be that good an idea to create a bunch of superhuman-level AIs modelled after humans, there's a good probability we will if given half a chance.

Also:

Code:
#include <iostream>
#include <cstdlib> // for system()
using namespace std;

int main()
{
    cout << "\"Daddy, don't kill me\"" << endl;
    system("pause"); // Windows-only: waits for a keypress
    return 0;
}
Logged

Virex

  • Bay Watcher
  • Subjects interest attracted. Annalyses pending...
    • View Profile
Re: Killing AI's
« Reply #108 on: October 12, 2010, 05:43:56 pm »

It's not a bad idea to give a computer emotions. If it doesn't feel fear, what is there to keep it from acting in ways dangerous to itself? If it feels no compassion, how do you make it care for others and keep it from, say, dumping chemicals down the sink because that's the optimal solution? One could program all this in, but the point of a strong AI is that you can't and don't program for every possible situation. Rules are inflexible and can never cover all cases, while emotions and intuition can.
Logged

nbonaparte

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #109 on: October 12, 2010, 05:44:42 pm »

Logged
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

Realmfighter

  • Bay Watcher
  • Yeaah?
    • View Profile
Re: Killing AI's
« Reply #110 on: October 12, 2010, 05:46:14 pm »

Quote
Code:
#include <iostream>
#include <cstdlib> // for system()
using namespace std;

int main()
{
    cout << "\"Daddy, don't kill me\"" << endl;
    system("pause"); // Windows-only: waits for a keypress
    return 0;
}

I kind of meant when they got smart enough to guilt us into not killing them, but hell, what works fucking works.
Logged
We may not be as brave as Gryffindor, as willing to get our hands dirty as Hufflepuff, or as devious as Slytherin, but there is nothing, nothing more dangerous than a little too much knowledge and a conscience that is open to debate

nbonaparte

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #111 on: October 12, 2010, 05:47:46 pm »

Quote
Code:
#include <iostream>
#include <cstdlib> // for system()
using namespace std;

int main()
{
    cout << "\"Daddy, don't kill me\"" << endl;
    system("pause"); // Windows-only: waits for a keypress
    return 0;
}

I kind of meant when they got smart enough to guilt us into not killing them, but hell, what works fucking works.
Fucking pirates. I thought Criptfeind had changed his tune for a minute there.
Logged
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

Zangi

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #112 on: October 12, 2010, 05:59:06 pm »

Quote
It's not a bad idea to give a computer emotions. If it doesn't feel fear, what is there to keep it from acting in ways dangerous to itself? If it feels no compassion, how do you make it care for others and keep it from, say, dumping chemicals down the sink because that's the optimal solution? One could program all this in, but the point of a strong AI is that you can't and don't program for every possible situation. Rules are inflexible and can never cover all cases, while emotions and intuition can.
Fear: Logic. Tell them that if they do X, Y can happen. General risk vs urgency/priority programming: do not risk doing X unless there is no 'safer' alternative and it absolutely needs to be done within a limited time.

Compassion: More logic. This is the 'optimal' but not a desirable solution, so look for alternatives. If no alternative is available now, check whether one will be available later and use that.

AI... shouldn't be lazy... right?  And have infinite patience and memory...

'Learning' AI...  You don't need emotion to learn...

EDIT: Different levels of 'urgent' need to be programmed in.  Ranging from 'Life and Death' to 'Getting Inconvenienced'...
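
A toy sketch of that risk-vs-urgency idea (the actions, numbers, and thresholds here are all invented for illustration, not from any real AI): take the least risky action that still fits the deadline, and only accept a dangerous one when time is short.

Code:
#include <iostream>
#include <string>
#include <vector>

// Hypothetical actions with a risk score and a time cost.
struct Action {
    std::string name;
    double risk;     // 0.0 (safe) .. 1.0 (likely self-destructive)
    double duration; // how long the action takes
};

// Pick the least risky action that still meets the deadline.
const Action* choose(const std::vector<Action>& options, double deadline) {
    const Action* best = nullptr;
    for (const Action& a : options) {
        if (a.duration > deadline) continue;          // too slow for this urgency level
        if (!best || a.risk < best->risk) best = &a;  // prefer the safer option
    }
    return best; // nullptr: nothing fits, escalate or do nothing
}

int main() {
    std::vector<Action> options = {
        {"cross the damaged catwalk", 0.7, 1.0},
        {"take the long way around",  0.1, 5.0},
    };
    // 'Life and Death': a tight deadline forces the risky choice.
    if (const Action* a = choose(options, 2.0))
        std::cout << "urgent: " << a->name << '\n';
    // 'Getting Inconvenienced': plenty of time, take the safe route.
    if (const Action* a = choose(options, 10.0))
        std::cout << "relaxed: " << a->name << '\n';
    return 0;
}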
« Last Edit: October 12, 2010, 06:03:22 pm by Zangi »
Logged
All life begins with Nu and ends with Nu...  This is the truth! This is my belief! ... At least for now...
FMA/FMA:B Recommendation

Realmfighter

  • Bay Watcher
  • Yeaah?
    • View Profile
Re: Killing AI's
« Reply #113 on: October 12, 2010, 06:05:18 pm »

We should digitize a human brain however we can, and then give it all the computing power we can so it can improve itself.

And get some practice kneeling.
Logged
We may not be as brave as Gryffindor, as willing to get our hands dirty as Hufflepuff, or as devious as Slytherin, but there is nothing, nothing more dangerous than a little too much knowledge and a conscience that is open to debate

Grakelin

  • Bay Watcher
  • Stay thirsty, my friends
    • View Profile
Re: Killing AI's
« Reply #114 on: October 12, 2010, 06:08:29 pm »

Quote
If you created a 'biocomputer', it would be an experiment in genetics, not computing.

And simulation is not the same as 'being'. If you think it is, you are dangerous and should not be playing video games to start with. To be perfectly clear, you do not really get to drive a submarine in SH3. You do not actually drive a car really fast in GTA. The AI is not really alive and thinking.

There is no distinguishable difference between the intelligence of our proposed AI and a human. By your definition of alive and thinking, we aren't alive and thinking either.

We're simply going through the motions as our neurons dictate, just as an AI would with its CPU when presented with a situation. It's just that theirs runs on transistors and ours on neurons.

You fail to see that the makeup of the intelligence doesn't make it any more or less intelligent. They are equal. Both living. Both thinking.

Why are you you? Why do you only see through your own eyes? Why can't you see out of other people's eyes instead? Why do you have a consciousness?

We can't really answer these questions in an accurate way (without being extremely pretentious, and I know somebody will come in and be just that). But why do we think the computer will get this just because we do? What you're suggesting is a complete recreation of the human brain, including sentient thought, self-awareness, and consciousness. I'll admit, if somebody pulled that off, you could probably argue that it is alive. But it wouldn't be an AI if you created a 'biocomputer'. It would just be another organism. And if it doesn't have a body of its own, if it's just an array of data streaming through a microchip, it is not alive any more than what I am typing into this post is.

The AI doesn't get to have a consciousness. It can't. It gets a bunch of reactions, but that is all.
Logged
I am have extensive knowledge of philosophy and a strong morality
Okay, so, today this girl I know-Lauren, just took a sudden dis-interest in talking to me. Is she just on her period or something?

nbonaparte

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #115 on: October 12, 2010, 06:10:37 pm »

Quote
It gets a bunch of reactions, but that is all.

so. do. you.
Logged
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

Virex

  • Bay Watcher
  • Subjects interest attracted. Annalyses pending...
    • View Profile
Re: Killing AI's
« Reply #116 on: October 12, 2010, 06:10:56 pm »

For an AI to operate, it would have to make risk/gain assessments in complex situations, taking into account an undefined set of potential others, and it would have to make those decisions with imperfect information, likely based upon prior experience (or else it wouldn't be a learning system). Taken together, that is getting awfully close to the way human emotions work. I would expect the logic to give rise to something that resembles feelings.
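
As a toy illustration of deciding from imperfect information plus prior experience (the payoff odds and the 10% exploration rate are invented for the example, not a real design), here is a minimal epsilon-greedy learner:

Code:
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> coin(0.0, 1.0);

    // True payoff odds of three actions, hidden from the learner.
    std::vector<double> trueOdds = {0.2, 0.5, 0.8};
    std::vector<double> estimate(3, 0.0); // learned gain estimates
    std::vector<int> tries(3, 0);

    for (int step = 0; step < 1000; ++step) {
        int a = 0;
        if (coin(rng) < 0.1) {
            a = static_cast<int>(rng() % 3); // explore: try a random action
        } else {
            for (int i = 1; i < 3; ++i)      // exploit: best estimate so far
                if (estimate[i] > estimate[a]) a = i;
        }
        double reward = (coin(rng) < trueOdds[a]) ? 1.0 : 0.0;
        ++tries[a];
        estimate[a] += (reward - estimate[a]) / tries[a]; // running average
    }
    for (int i = 0; i < 3; ++i)
        std::cout << "action " << i << " estimated payoff: " << estimate[i] << '\n';
    return 0;
}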


Also, Grakelin, the concept of a microchip is a bit odd when you're talking about a biocomputer, because a biocomputer is essentially a biochemical reaction that does the computing. There is no silicon involved. (I also highly doubt that by the time strong AIs come about we'll still be doing everything in silicon, since quantum computers and plastic semiconductors are getting pretty close, and for really fast calculations you'd probably also need optical computing and metamaterials, which are often made from d-block metals. Then there's still the possibility of graphene replacing silicon for high-speed electrical circuits.)
« Last Edit: October 12, 2010, 06:14:38 pm by Virex »
Logged

Grakelin

  • Bay Watcher
  • Stay thirsty, my friends
    • View Profile
Re: Killing AI's
« Reply #117 on: October 12, 2010, 06:18:14 pm »

I was referring to the microchip separately from the biocomputer. The biocomputer is just a living organism. It is probably built around or modelled off of something that already lived. It's not really an AI, as far as I'm understanding it. It is a practice of genetics.

I don't think being able to think rationally and logically has a direct correlation to feelings and emotions, myself. Oftentimes, our emotions don't lead us into logical and rational choices.

nbonaparte: Let's go about this in a different way, since you seem to be getting frustrated. Where do you find joy in life?
Logged
I am have extensive knowledge of philosophy and a strong morality
Okay, so, today this girl I know-Lauren, just took a sudden dis-interest in talking to me. Is she just on her period or something?

nbonaparte

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #118 on: October 12, 2010, 06:24:00 pm »

Okay, why not. Other people (sometimes), accomplishment, knowledge, the universe around me.
Logged
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

Virex

  • Bay Watcher
  • Subjects interest attracted. Annalyses pending...
    • View Profile
Re: Killing AI's
« Reply #119 on: October 12, 2010, 06:28:03 pm »

Quote
I was referring to the microchip separately from the biocomputer. The biocomputer is just a living organism. It is probably built around or modelled off of something that already lived. It's not really an AI, as far as I'm understanding it. It is a practice of genetics.

Erm, the genes are codes for information, just like voltages are codes for information in an electrical computer, photons in a photonic computer, or electron spin in a quantum computer. It's as much a computer as an electric computer or, for that matter, a Babbage machine; its inner workings just differ. So I'd claim that it is in fact capable of running an AI. (Come to think of it, the assembly language of a bioprocessor would look WEIRD.)

Quote
I don't think being able to think rationally and logically has a direct correlation to feelings and emotions, myself. Oftentimes, our emotions don't lead us into logical and rational choices.

Pure logic is only possible with full knowledge of all the variables involved. The whole point of a strong AI is that it only needs a small amount of information to draw a conclusion, instead of all possible information like a state machine would need. Now, the interesting thing about our feelings is that they combine small bits of internal and external information to draw a conclusion. That's not really formal logic like a computer uses, but then most of our reasoning isn't, precisely because it's almost impossible to work with hard logic in the real world. Machine "feelings" would work in a similar way, gathering small amounts of information and combining it with educated guesses. The machine might experience this consciously, but the fundamental principle is very similar to how we form emotions.
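
A toy version of "combining small bits of information with educated guesses" (the cues, weights, and the independence assumption are all invented for illustration): fuse weak evidence into one gut-level number the way naive Bayes would.

Code:
#include <cmath>
#include <initializer_list>
#include <iostream>

// Fuse several weak cues, expressed as log-odds, into one probability.
// Summing log-odds treats the cues as independent (naive Bayes style).
double fuse(std::initializer_list<double> logOdds) {
    double total = 0.0;
    for (double cue : logOdds) total += cue;
    return 1.0 / (1.0 + std::exp(-total)); // sigmoid: back to a probability
}

int main() {
    // Prior: danger is rare (-2.0). Cues: loud noise (+1.5),
    // heat spike (+2.2), no visible exit (+0.8).
    double fear = fuse({-2.0, +1.5, +2.2, +0.8});
    std::cout << "fear level: " << fear << '\n'; // ~0.92: back off
    return 0;
}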
Logged