Bay 12 Games Forum

Pages: 1 ... 5 6 [7] 8 9 ... 21

Author Topic: Killing AI's  (Read 19551 times)

metime00

  • Bay Watcher
  • Adequate Dwarf Fortresser
    • View Profile
Re: Killing AI's
« Reply #90 on: October 12, 2010, 05:23:52 pm »

I claim victory by forfeit. Even if it is a hollow victory.

Anyway. It seems immoral to add emotions to AIs. That is my stance.

But if for some reason you do then they should be treated like animals.

Edit: Non human animals.

That's terrible. If we have AIs with emotions and human-level intelligence, treating them like a dog simply because they aren't exactly like humans is a travesty.

Reactions like this are what would cause a Skynet type situation to occur.
Skynet is just 1 super AI controlling a lot of dangerous toys.

It's actually more Geth-like, technically, if they didn't have the hive mind, except being treated like slaves/pets/property instead of being the target of genocide.

The majority of humanity can never treat Robots like equals.  Most will treat their slaves/pets/property well, though.  If the Robot doesn't make assertions of being more than that...

Which is why we should dictate some set of laws dealing with the treatment of sentient/sapient AIs that everyone must follow. Like international human rights.

Actually, yeah, if I was 100% sure that something was a sociopath (as I would be with a man-made computer), I WOULD kill it. Especially if it was in a position to destroy me.

Eagle: Yeah, some humans ARE emotionless. They're called sociopaths or psychopaths (largely interchangeable terms by current vocabulary). But the machine still isn't alive. It is deeply capable of being malicious, it is capable of causing us harm, it is capable of defending itself and meeting its needs at any cost - but it is still not a sentient being, capable of being alive.

nbonaparte: Cool, but you're missing the point that we're arguing that the AI will never be alive enough to care about killing it, because it can never really be a sentient being.

And why can't an AI ever, ever be a sentient being?

Treat it like its intelligence deserves.

If it is as smart as a dog, treat it like one, and if it is as smart as an ant, treat it like one. If it is as smart as a human, treat it like something that is as smart as a human.
This I can work with. The computer you're posting from probably has processing power in the same order of magnitude as a mouse. If it was simulating a mouse brain, I would have no problem turning it off. That brings up the question, though, how do you treat an intelligence far greater than your own?

Very, very well.
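The "same order of magnitude as a mouse" claim above can be sanity-checked with a back-of-envelope calculation. Every constant below is an assumption (commonly cited rough figures, not measurements), and the conclusion swings by orders of magnitude if you change them:

```python
# All figures are rough assumptions for illustration, not measurements.
NEURONS = 7.1e7            # ~71 million neurons in a mouse brain (commonly cited)
SYNAPSES_PER_NEURON = 1e3  # assumed average connectivity
AVG_FIRING_HZ = 10         # assumed mean firing rate

# Counting one "operation" per synaptic event is itself a big simplification.
synaptic_events_per_sec = NEURONS * SYNAPSES_PER_NEURON * AVG_FIRING_HZ

DESKTOP_FLOPS = 5e10       # ~50 GFLOPS, a generous 2010-era desktop CPU (assumed)

print(f"mouse brain: ~{synaptic_events_per_sec:.1e} synaptic events/s")
print(f"desktop:     ~{DESKTOP_FLOPS:.1e} FLOPS "
      f"({DESKTOP_FLOPS / synaptic_events_per_sec:.2f}x the brain figure)")
```

Under these particular assumptions the desktop comes out roughly an order of magnitude short, but whether that counts as "the same order of magnitude" depends entirely on the constants chosen; published estimates of brain-equivalent compute vary by several orders of magnitude.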
« Last Edit: October 12, 2010, 05:25:30 pm by metime00 »
Logged
Live long if you can, and prosper by any means necessary.  Any means, Urist.  So pull that lever, or by Armok, I'll lock you outside come next siege.
He who plays with dwarves must take care that he does not become a dwarf.  And when you stare into DwarfFort, Dwarffort stares back into you.

alway

  • Bay Watcher
  • 🏳️‍⚧️
    • View Profile
Re: Killing AI's
« Reply #91 on: October 12, 2010, 05:24:57 pm »

I will eat my boots if this morphs entirely into the PETA thread. Right here on webcam.
Helps that I don't have boots.

Ironically, googling "PETA artificial intelligence" comes up with pages where PETA suggests replacing animals with AI, lol...

Actually, yeah, if I was 100% sure that something was a sociopath (as I would be with a man-made computer), I WOULD kill it. Especially if it was in a position to destroy me.

Eagle: Yeah, some humans ARE emotionless. They're called sociopaths or psychopaths (largely interchangeable terms by current vocabulary). But the machine still isn't alive. It is deeply capable of being malicious, it is capable of causing us harm, it is capable of defending itself and meeting its needs at any cost - but it is still not a sentient being, capable of being alive.

nbonaparte: Cool, but you're missing the point that we're arguing that the AI will never be alive enough to care about killing it, because it can never really be a sentient being.
Can you prove it requires anything more than deterministic events to be alive and human? Because if not, you can indeed simulate a human accurately down to the subatomic level given enough computing power, thus creating a human-level AI. Beyond that, it is merely squabbling over what unnecessary bits can be removed, simplified, or even upgraded, while still keeping a human-level AI.

Anyway, if I had three doors, labeled one, two, and three. One of them has a car behind it and the others have nothing. You pick a door, and then I open another door that has nothing behind it.

Do you switch to the other door or stay with your original?
Switch. 2/3 chance in other, 1/3 chance in original.
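The switch/stay odds quoted above (the Monty Hall problem) are easy to verify empirically; a minimal simulation, with illustrative door numbering 0-2 rather than the 1-3 in the post:

```python
import random

def monty_hall(trials=100_000):
    """Simulate the three-door game; return win rates for staying vs. switching."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # player's initial choice
        # Host opens a door that is neither the player's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the one remaining closed door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
print(f"stay ≈ {stay:.2f}, switch ≈ {switch:.2f}")  # roughly 0.33 vs 0.67
```

The intuition matches the simulation: your first pick is right 1/3 of the time, and the host's reveal funnels the remaining 2/3 of the probability onto the single door you could switch to.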
« Last Edit: October 12, 2010, 05:28:29 pm by alway »
Logged

Zangi

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #92 on: October 12, 2010, 05:29:16 pm »

The majority of humanity can never treat Robots like equals.  Most will treat their slaves/pets/property well, though.  If the Robot doesn't make assertions of being more than that...
Wait a few generations.
After the equal rights robot revolution? The kill all humans robot revolution?  Or the leave us alone robot revolution?

It'll be hard to dislodge such sentiments.  They are not human, unlike humanity's previous forays into the slave trade... hell, still happening now in some places...

@metime00
Laws from the beginning won't change that sentiment for a long time either.  But, I can say it'll be in the right direction... if it can pass...  Many people will still be prejudiced to the very idea.
Logged
All life begins with Nu and ends with Nu...  This is the truth! This is my belief! ... At least for now...
FMA/FMA:B Recommendation

Virex

  • Bay Watcher
  • Subjects interest attracted. Annalyses pending...
    • View Profile
Re: Killing AI's
« Reply #93 on: October 12, 2010, 05:29:41 pm »

Actually, yeah, if I was 100% sure that something was a sociopath (as I would be with a man-made computer), I WOULD kill it. Especially if it was in a position to destroy me.

Eagle: Yeah, some humans ARE emotionless. They're called sociopaths or psychopaths (largely interchangeable terms by current vocabulary). But the machine still isn't alive. It is deeply capable of being malicious, it is capable of causing us harm, it is capable of defending itself and meeting its needs at any cost - but it is still not a sentient being, capable of being alive.

nbonaparte: Cool, but you're missing the point that we're arguing that the AI will never be alive enough to care about killing it, because it can never really be a sentient being.
You're assuming silicon computers. What if one instead created a sort of AI out of living matter, like a very advanced biocomputer maintained by its own internal organs/specifically designed bacteria/other bio-mechanisms? Wouldn't that technically be a living being?
Logged

metime00

  • Bay Watcher
  • Adequate Dwarf Fortresser
    • View Profile
Re: Killing AI's
« Reply #94 on: October 12, 2010, 05:31:50 pm »

The majority of humanity can never treat Robots like equals.  Most will treat their slaves/pets/property well, though.  If the Robot doesn't make assertions of being more than that...
Wait a few generations.
After the equal rights robot revolution? The kill all humans robot revolution?  Or the leave us alone robot revolution?

It'll be hard to dislodge such sentiments.  They are not human, unlike humanity's previous forays into the slave trade... hell, still happening now in some places...

@metime00
Laws from the beginning won't change that sentiment for a long time either.  But, I can say it'll be in the right direction... if it can pass...  Many people will still be prejudiced to the very idea.

But if we make no effort because it will still be bad, that would be even worse. At least show the AIs that someone cares. And most strong AIs would be made in more developed, stable countries anyway.
« Last Edit: October 12, 2010, 05:48:33 pm by metime00 »
Logged
Live long if you can, and prosper by any means necessary.  Any means, Urist.  So pull that lever, or by Armok, I'll lock you outside come next siege.
He who plays with dwarves must take care that he does not become a dwarf.  And when you stare into DwarfFort, Dwarffort stares back into you.

nbonaparte

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #95 on: October 12, 2010, 05:33:05 pm »

And frankly, I don't think greater-than-human AIs are about to be contained by measly human prejudices.
Logged
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

Grakelin

  • Bay Watcher
  • Stay thirsty, my friends
    • View Profile
Re: Killing AI's
« Reply #96 on: October 12, 2010, 05:33:41 pm »

If you created a 'biocomputer', it would be an experiment in genetics, not computing.

And simulation is not the same as 'being'. If you think it is, you are dangerous and should not be playing video games to start with. To make sure you're perfectly clear, you do not really get to drive a submarine in SH3. You do not actually drive a car really fast in GTA. The AI is not really alive and thinking.
Logged
I am have extensive knowledge of philosophy and a strong morality
Okay, so, today this girl I know-Lauren, just took a sudden dis-interest in talking to me. Is she just on her period or something?

Eagleon

  • Bay Watcher
    • View Profile
    • Soundcloud
Re: Killing AI's
« Reply #97 on: October 12, 2010, 05:34:08 pm »

Anyway, if I had three doors, labeled one, two, and three. One of them has a car behind it and the others have nothing. You pick a door, and then I open another door that has nothing behind it.

Do you switch to the other door or stay with your original?
I'm part of PETA! People Eating Tasty Animals!
(why am I helping this along?)

Grakelin: There's one thing I'd like to clear up, and then I'm done. Sociopaths are not emotionless. That's not even a proper psychologidooder. By definition, sociopathy is a mental illness characterized by malformed social response - not understanding that other people are human, for instance. Lack of empathy is a different beast entirely, part of the social bond everyone is supposed to form and maintain, but which may fail in a number of interesting and dangerous ways. It doesn't mean they don't understand humor or want food or warmth, or don't feel guilt (no really, my father is a 'sociopath' as you term it - he's cried about hitting a deer), greed, etc. An entirely emotionless human (or one with no senses, or one that's grown up in a closet, for that matter) would degrade into a vegetable. With no motivation or emotion, there is no use for maintaining skills and responses, and no thought.
Logged
Agora: open-source, next-gen online discussions with formal outcomes!
Music, Ballpoint
Support 100% Emigration, Everyone Walking Around Confused Forever 2044

alway

  • Bay Watcher
  • 🏳️‍⚧️
    • View Profile
Re: Killing AI's
« Reply #98 on: October 12, 2010, 05:36:27 pm »

It'll be hard to dislodge such sentiments.  They are not human, unlike humanity's previous forays into the slave trade... hell, still happening now in some places...

@metime00
Laws from the beginning won't change that sentiment for a long time either.  But, I can say it'll be in the right direction... if it can pass...  Many people will still be prejudiced to the very idea.
Yeah... It would likely take centuries, at the very least, to integrate non-human species into the civilization of humanity to the point where they would be accepted as equals by a majority. But it would happen eventually, assuming of course neither side went nuts and tried to kill the other off.
Logged

Sir Pseudonymous

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #99 on: October 12, 2010, 05:36:50 pm »

This boils down to the people talking about realistic strong AI, and those talking about the ethical implications of "well like, what if we PERFECTLY REPLICATED A HUMAN, BUT AS, LIKE, A MACHINE, BUT WE LIKE MADE IT ACT LIKE IT WAS ORGANIC FOR SHIGGLES MAN???". Even the toned down "computer simulating the physical actions of a human brain" is rather silly. Such things would have no use outside of a lab. There is no practical point to creating artificial humans; any given application would be better served by a specialized system.


Further, even a strong AI would only care if it were abused/mistreated/killed if you fucking made it care in the first place, which would be an incomprehensibly stupid thing to do for a myriad of reasons, not least of which is there's no fucking point to doing so.

Also note that while, as opposed to what Grakelin is saying, one could eventually create a sapient artificial being, it would still be entirely pointless to make it as unstable and primitive as a human. (Primitive in this case referring to the fact that humans adapted to a much more primitive setting than they have created for themselves, and face a myriad of problems because of it; replicating those problems would be pointless to the point of absurdity.) It would also not be on such a scale that any ethical ramifications could result, at least none that are being discussed. There's no point to having superintelligent androids running around interacting with humans, and doing so would fall under the aforementioned "replicating of current problems humans face".
Logged
I'm all for eating the heart of your enemies to gain their courage though.

Realmfighter

  • Bay Watcher
  • Yeaah?
    • View Profile
Re: Killing AI's
« Reply #100 on: October 12, 2010, 05:38:44 pm »

It becomes morally wrong to kill an AI the moment it can go "Daddy, don't kill me" (just, you know, less creepy and more heartbreaking) and the guy going to kill it stops.
Logged
We may not be as brave as Gryffindor, as willing to get our hands dirty as Hufflepuff, or as devious as Slytherin, but there is nothing, nothing more dangerous than a little too much knowledge and a conscience that is open to debate

nbonaparte

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #101 on: October 12, 2010, 05:39:42 pm »

This boils down to the people talking about realistic strong AI, and those talking about the ethical implications of "well like, what if we PERFECTLY REPLICATED A HUMAN, BUT AS, LIKE, A MACHINE, BUT WE LIKE MADE IT ACT LIKE IT WAS ORGANIC FOR SHIGGLES MAN???". Even the toned down "computer simulating the physical actions of a human brain" is rather silly. Such things would have no use outside of a lab. There is no practical point to creating artificial humans; any given application would be better served by a specialized system.


Further, even a strong AI would only care if it were abused/mistreated/killed if you fucking made it care in the first place, which would be an incomprehensibly stupid thing to do for a myriad of reasons, not least of which is there's no fucking point to doing so.

Also note that while, as opposed to what Grakelin is saying, one could eventually create a sapient artificial being, it would still be entirely pointless to make it as unstable and primitive as a human. (Primitive in this case referring to the fact that humans adapted to a much more primitive setting than they have created for themselves, and face a myriad of problems because of it; replicating those problems would be pointless to the point of absurdity.) It would also not be on such a scale that any ethical ramifications could result, at least none that are being discussed. There's no point to having superintelligent androids running around interacting with humans, and doing so would fall under the aforementioned "replicating of current problems humans face".

Step 1: scan the brains of a bunch of engineers and computer scientists and the like. Step 2: create a virtual environment for them. Step 3: run the simulations at many times real time. Have them work out an AI that isn't based on the human brain.
Logged
A service to the forum: clowns=demons, cotton candy=adamantine, clown car=adamantine tube, circus=hell, circus tent=demonic fortress.

Virex

  • Bay Watcher
  • Subjects interest attracted. Annalyses pending...
    • View Profile
Re: Killing AI's
« Reply #102 on: October 12, 2010, 05:39:59 pm »

If you created a 'biocomputer', it would be an experiment in genetics, not computing.
It's an experiment in data manipulation, which is computing. It's a different medium, but it's still computing, just like chemical computers (http://en.wikipedia.org/wiki/Unconventional_computing), analog computers, or quantum computers. Hell, one could even make a computer out of neurons. At the core, all of these technologies are about data manipulation and thus about computing.
Logged

smjjames

  • Bay Watcher
    • View Profile
Re: Killing AI's
« Reply #103 on: October 12, 2010, 05:40:38 pm »

It becomes morally wrong to kill an AI the moment it can go "Daddy, don't kill me" (just, you know, less creepy and more heartbreaking) and the guy going to kill it stops.

And then the AI goes on to kill said person.
Logged

Realmfighter

  • Bay Watcher
  • Yeaah?
    • View Profile
Re: Killing AI's
« Reply #104 on: October 12, 2010, 05:41:30 pm »

Dammit, I said less creepy.
Logged
We may not be as brave as Gryffindor, as willing to get our hands dirty as Hufflepuff, or as devious as Slytherin, but there is nothing, nothing more dangerous than a little too much knowledge and a conscience that is open to debate