Bay 12 Games Forum

Poll

Would you ever consent to give a free-thinking AI civil rights (or an equivalent)?

Of course, all sentient beings deserve this.
Sure, so long as they do not slight me.
I'm rather undecided.
No, robots are machines.
Some people already enjoy too many rights as it is.
A limited set of rights should be granted.
Another option leaning towards AI rights.
Another option leaning against AI rights.

Pages: 1 ... 5 6 [7] 8 9 ... 12

Author Topic: Would AI qualify for civil rights?  (Read 14409 times)

Techhead

  • Bay Watcher
  • Former Minister of Technological Heads
Re: Would AI qualify for civil rights?
« Reply #90 on: September 10, 2012, 06:19:10 pm »

And AI rights? I'm for giving animals civil rights =P

Hey, plants are intelligent too, you know. (I'm not kidding)
Hmm, you may have a point there.

We all create our own truths.  I say a sentient robot is a servant and not a being; somebody else says sentience or the wisdom to ask for rights makes them deserving.  What if I said they asked for rights because they are selfish, like a child?  The 'gimme complex'?  Would you still be so eager to give them their rights?
If someone took your post and replaced "sentient robot" with "negro", it would not sound out of place in the 19th century South.
Logged
Engineering Dwarves' unfortunate demises since '08
WHAT?  WE DEMAND OUR FREE THINGS NOW DESPITE THE HARDSHIPS IT MAY CAUSE IN YOUR LIFE
It's like you're all trying to outdo each other in sheer useless pedantry.

Flare

  • Bay Watcher
Re: Would AI qualify for civil rights?
« Reply #91 on: September 10, 2012, 08:27:19 pm »

Apologies for the length here, I tend to get carried away when I start typing...

As to the computer right now emulating it entirely, I would disagree. The mind is several levels more complex than what we have in front of us at this point.

I never said we could do it right now.  This is all theoretical anyway; if it takes a computer from 100 years in the future, that's what it takes, but it's still possible.  :)

I don't actually think that would be possible as far as we understand computers right now: what their limitations are and what a computer categorically is. To say that in the future a computer will be able to produce consciousness is to posit that, despite what we know, this will be possible simply because technology advances. I think what this position ignores is that knowledge and technology don't always advance as we envision they will. Not everything becomes easier the more we find out about it; in fact, I would hazard a guess that the majority of what we discover makes our predictions of how a vein of technology unfolds moot.

Quote
To the point of whether we would recognize it, I think you're missing the point. If such an event did take place, would you concede that these billions of people phoning each other up in this way constitute a consciousness? The issue isn't whether the people at that moment recognize it as consciousness; rather, it's a problem posed to you, as to whether you would recognize it as consciousness, the author's intention being that this is ridiculous if you DO say that this method will result in a consciousness coming into being. On top of this, there's also the implication that if you do recognize this as something that grants consciousness, then shouldn't your computer also be subject to the same view? If not full human rights, perhaps the same rights as a dog or maybe just livestock.

Quote
Note that I used the word output here instead of consciousness.  Whether this mechanism...

I've read your response, and nitpicks aside, I think our disagreement comes down to the issue of what consciousness is. The way you've described it doesn't seem to match up with mine, and since the majority of your post is built upon this, I think we should clear this up first so we can understand each other.

For me, consciousness is something really, really strange. It's the concept of existing, or perceiving oneself to exist, in the most fundamental way. When I see something with my eyes, I'm not just computing it; I actually see stuff with my eyes, and I have this sensation of experience that I know the electrical system or plumbing system in my house doesn't have. When I read a book, I'm not simply computing the words on the page; I understand what is meant. I'm not merely associating words, although this happens too. I guess it's a little hard to describe, but I'm quite sure that, as a person who is also conscious, you would know what I'm talking about too. If you don't, I suggest you start panicking :P.

Now you've talked about halving and reducing consciousness, and I have to admit I don't have any idea what you're talking about. One can reduce a human being's faculty for putting together experience, but consciousness seems to be a trait you either have or you don't. A middle ground seems incomprehensible. A person might have fragmented experiences and be unable to make sense of them due to the lack of faculties, but this person can still be 100% conscious.

Back on consciousness: I know it has something to do with the physical structure between my ears, but I'm not sure this alone explains consciousness, much less a bunch of transistors going off and on. In my view, one transistor going off and on due to some mechanism that senses external inputs does not bring about consciousness. On the back of this, I think we're likely to agree that a lone transistor going on and off depending on the level of brightness a sensor picks up does not constitute consciousness. So if one transistor doesn't bring about consciousness, neither should two, ten thousand, or several trillion of them. There's no fundamental change; there's nothing added to the equation other than raw numbers. The base traits that the lone transistor has should be all there is to it, except that there's more of it, and unless we concede that this lone transistor going on and off does have consciousness, I think we're in a bit of a pickle when we say computers will have the ability to become conscious in the future.

Just to be clear, however, I'm not saying we cannot create consciousness aside from biological reproduction. I just think that computers as they are, and as they are envisioned to develop, won't create consciousness. We might create a piece of machinery that is indistinguishable from a human being, but it certainly won't be conscious, so far as our understanding of what computing is goes. There might be some other piece of machinery yet to be developed that does hold consciousness, however; it just doesn't seem to be a computer or a series of transistors going off and on.
Logged

Eagle_eye

  • Bay Watcher
Re: Would AI qualify for civil rights?
« Reply #92 on: September 10, 2012, 09:16:16 pm »

The same argument applies to neurons. A single neuron isn't conscious. Ten neurons aren't conscious. Where does it suddenly become conscious? That's the issue I have with the idea that the structure of the brain creates consciousness; there's no clear dividing point, no fundamental law we know of that says consciousness exists above this threshold.
Logged

EveryZig

  • Bay Watcher
  • Adequate Liar
Re: Would AI qualify for civil rights?
« Reply #93 on: September 10, 2012, 10:28:48 pm »

The same argument applies to neurons. A single neuron isn't conscious. Ten neurons aren't conscious. Where does it suddenly become conscious? That's the issue I have with the idea that the structure of the brain creates consciousness; there's no clear dividing point, no fundamental law we know of that says consciousness exists above this threshold.
What is a human but a miserable pile of networked Chinese Boxes called 'cells'?

The question of boundaries occurs even outside of AI, when you ask when someone stops being a person. You have conscious people, beating-heart bodies with absolutely no neural activity, and cases in between. How braindead is braindead?
Logged
Soaplent green is goblins!

ECrownofFire

  • Bay Watcher
  • Resident Dragoness
Re: Would AI qualify for civil rights?
« Reply #94 on: September 11, 2012, 12:44:01 am »

The question of boundaries occurs even outside of AI, when you ask when someone stops being a person. You have conscious people, beating-heart bodies with absolutely no neural activity, and cases in between. How braindead is braindead?

This is the problem of solipsism. It is impossible to tell if you are conscious, or you're just "acting" like it (you would be a philosophical zombie). In effect, it is (as far as current knowledge goes) impossible to distinguish between having a consciousness and not. I have a consciousness because "cogito ergo sum", but I have no idea if you do.

In fact, I might even be a philosophical zombie that's lying to you about myself having consciousness. Of course, the same applies to you...
Logged

Telgin

  • Bay Watcher
  • Professional Programmer
Re: Would AI qualify for civil rights?
« Reply #95 on: September 11, 2012, 08:58:14 am »

I don't actually think that would be possible as far as we understand computers right now: what their limitations are and what a computer categorically is. To say that in the future a computer will be able to produce consciousness is to posit that, despite what we know, this will be possible simply because technology advances. I think what this position ignores is that knowledge and technology don't always advance as we envision they will. Not everything becomes easier the more we find out about it; in fact, I would hazard a guess that the majority of what we discover makes our predictions of how a vein of technology unfolds moot.

Think of it like this: any physical system can be modeled and simulated by a computer, given a sufficient understanding and enough processing power.  That goes all the way down to quantum mechanics, which is probably well below the level of simulation necessary for consciousness.  If nothing else, surely it's possible to simulate the neurons in a human brain, and thus the brain and all phenomena associated with it, right?  If not, why not?
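
To make that concrete, here is a minimal sketch of what simulating a single neuron can look like: a standard leaky integrate-and-fire model stepped forward with forward Euler. The parameter values and the constant input current are illustrative placeholders, not biological measurements, and a brain-scale simulation would need billions of these plus their synapses.

Code: [Select]
# Minimal leaky integrate-and-fire neuron: a toy illustration of stepping
# neural dynamics forward numerically. Parameter values are placeholders.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_thresh=-0.050, resistance=1e7):
    """Integrate membrane voltage; record a spike whenever it crosses threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Forward-Euler step of dV/dt = (-(V - V_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + resistance * i_in) / tau
        if v >= v_thresh:
            spike_times.append(step * dt)  # spike time in seconds
            v = v_reset                    # reset after firing
    return spike_times

# One simulated second of a constant 2 nA input current
print(simulate_lif([2e-9] * 10000))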

Quote
To the point of whether we would recognize it, I think you're missing the point. If such an event did take place, would you concede that these billions of people phoning each other up in this way constitute a consciousness? The issue isn't whether the people at that moment recognize it as consciousness; rather, it's a problem posed to you, as to whether you would recognize it as consciousness, the author's intention being that this is ridiculous if you DO say that this method will result in a consciousness coming into being. On top of this, there's also the implication that if you do recognize this as something that grants consciousness, then shouldn't your computer also be subject to the same view? If not full human rights, perhaps the same rights as a dog or maybe just livestock.

I was wondering why you posed the question like that; this interpretation makes more sense.  The bottom line here is that if consciousness can be created by a computer system, then the medium in which it's created makes no difference.  As mentioned before, you could play World of Warcraft this way too if you wanted to, but it would be just as ridiculous.  This seems pretty ridiculous because of the speed of the communication and how discontinuous any consciousness would have to be, but I'll get to that in a moment.

The entire purpose of the argument of course is that you can't create consciousness in a computer system, but I don't think it's a strong enough argument for that.  We really need to find some way to test for consciousness before any such argument can really be made (which infuriatingly seems impossible to devise), since all this does is boil down to computing in a different medium.

As to whether or not my computer deserves rights... well, I don't think so.  Even if it were conscious, that alone doesn't imply that it deserves civil rights.  I think it's plausible to conceive of a conscious computer system that has no individuality and no ability to suffer.  If the system doesn't suffer negative emotion and nothing unique is lost when it is destroyed, there's probably no reason to protect it with civil rights.  This would be a pretty silly thing to create, but should in theory be possible.  If my computer is conscious on any level, I believe it would be like that: no reason to protect it, since it can be completely replaced without loss (theoretically) and didn't suffer in its destruction.

Quote
I've read your response, and nitpicks aside, I think our disagreement comes down to the issue of what consciousness is. The way you've described it doesn't seem to match up with mine, and since the majority of your post is built upon this, I think we should clear this up first so we can understand each other.

For me, consciousness is something really, really strange. It's the concept of existing, or perceiving oneself to exist, in the most fundamental way. When I see something with my eyes, I'm not just computing it; I actually see stuff with my eyes, and I have this sensation of experience that I know the electrical system or plumbing system in my house doesn't have. When I read a book, I'm not simply computing the words on the page; I understand what is meant. I'm not merely associating words, although this happens too. I guess it's a little hard to describe, but I'm quite sure that, as a person who is also conscious, you would know what I'm talking about too. If you don't, I suggest you start panicking :P.

Actually, I don't think we really disagree at all.  I just sort of got sidetracked on tangents when I started relating how consciousness should be possible to create from a computing system.  When you start decomposing it, it gets a lot more muddy...

As a conscious being, yeah, I can relate to what you're talking about.  Similarly, I have a very hard time defining what it is and explaining how you might go about recreating such a thing with a computer.  I still don't see why it shouldn't be possible though.

To me there is a definite separation between the computing and conscious parts of a human mind, to the point where I think that in a computer system the consciousness itself is probably a separate layer or program.  In effect it's not so different from any other infinitely looping program in that it deals with inputs and produces outputs.  How you create subjective experience here is the hard part.  How do you actually get an entity to reside behind the cameras and auditory sensors of a robot?  I'm not sure, and nobody else is.  Our current software development strategies and systems are probably insufficient, but that shouldn't stop us in theory.
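
A minimal sketch of that infinitely looping input/output shape, with hypothetical stand-ins for the sensors, the decision layer, and the actuators; it makes no claim to subjective experience, only to the outer structure such a program might have.

Code: [Select]
# A toy sense-decide-act loop. The sensor and actuator functions are
# hypothetical stand-ins; the point is only the shape: read inputs,
# update internal state, produce outputs, forever.
import random
import time

def read_sensors():
    # Stand-in for cameras and microphones: random numbers here.
    return {"light": random.random(), "sound": random.random()}

def decide(state, inputs):
    # Stand-in for whatever separate "consciousness layer" would sit here.
    state["history"].append(inputs)
    return "turn_toward_light" if inputs["light"] > 0.5 else "idle"

def act(action):
    print("action:", action)

state = {"history": []}
for _ in range(5):              # endless in a real agent; five ticks for the demo
    act(decide(state, read_sensors()))
    time.sleep(0.1)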

Quote
Now you've talked about halving and reducing consciousness, and I have to admit I don't have any idea what you're talking about. One can reduce a human being's faculty for putting together experience, but consciousness seems to be a trait you either have or you don't. A middle ground seems incomprehensible. A person might have fragmented experiences and be unable to make sense of them due to the lack of faculties, but this person can still be 100% conscious.

You just misunderstood what I was trying to say (I probably could have worded it better).  I was talking about halving the computational speed, not the actual "level" of consciousness.  The point I was trying to make is that as you slow down the speed of computation, it looks less and less like consciousness.  Getting back to the telephone scenario, it goes so slow that perceiving any consciousness there would be pretty tough, and it is in turn likely pretty different than we'd expect simply because its experiences would be so much slower.  In a way, I often wonder if consciousness is an illusion of sorts brought on by the apparent continuous nature of our perception.  That's hardly the whole puzzle, but maybe a small part of it.
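
A toy illustration of the speed point: the same deterministic update rule yields exactly the same sequence of states whether a step takes microseconds or, as in the telephone scenario, days; only the wall-clock pacing changes. The step function here is a made-up stand-in, not a model of anything.

Code: [Select]
# Slowing the computation changes the tempo, not the trajectory.
import time

def step(state):
    # Arbitrary deterministic update rule standing in for "one tick of thought".
    return (state * 1103515245 + 12345) % (2**31)

def run(initial_state, ticks, seconds_per_tick):
    state = initial_state
    history = []
    for _ in range(ticks):
        state = step(state)
        history.append(state)
        time.sleep(seconds_per_tick)   # pacing changes; the states do not
    return history

fast = run(42, 5, 0.0)
slow = run(42, 5, 0.01)    # imagine this delay were one telephone call per day
print(fast == slow)        # True: identical state sequence, different tempo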

Quote
Back on consciousness: I know it has something to do with the physical structure between my ears, but I'm not sure this alone explains consciousness, much less a bunch of transistors going off and on. In my view, one transistor going off and on due to some mechanism that senses external inputs does not bring about consciousness. On the back of this, I think we're likely to agree that a lone transistor going on and off depending on the level of brightness a sensor picks up does not constitute consciousness. So if one transistor doesn't bring about consciousness, neither should two, ten thousand, or several trillion of them. There's no fundamental change; there's nothing added to the equation other than raw numbers. The base traits that the lone transistor has should be all there is to it, except that there's more of it, and unless we concede that this lone transistor going on and off does have consciousness, I think we're in a bit of a pickle when we say computers will have the ability to become conscious in the future.

I think Eagle_eye covered this pretty well.  Adding more transistors doesn't fundamentally change a processor, but when you get enough of them it can certainly do more stuff.  :)  There is a minimum number needed to create a binary adder, for example, and once you have enough, the processor can add, provided it's built correctly.
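
For example, a full adder can be wired from nothing but NAND gates; each gate only flips between on and off, yet enough of them in the right arrangement gain a genuinely new capability. A sketch using plain functions in place of gates:

Code: [Select]
# A full adder built purely from NAND gates, then chained into a
# 4-bit ripple-carry adder. Raw on/off elements, composed, can add.

def nand(a, b):
    return 0 if (a and b) else 1

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a, b, carry_in):
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry_out = nand(nand(a, b), nand(s1, carry_in))  # (a AND b) OR (s1 AND cin)
    return total, carry_out

def add_4bit(x, y):
    carry, result = 0, 0
    for i in range(4):                       # ripple the carry through 4 stages
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add_4bit(5, 6))  # 11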

Quote
Just to be clear, however, I'm not saying we cannot create consciousness aside from biological reproduction. I just think that computers as they are, and as they are envisioned to develop, won't create consciousness. We might create a piece of machinery that is indistinguishable from a human being, but it certainly won't be conscious, so far as our understanding of what computing is goes. There might be some other piece of machinery yet to be developed that does hold consciousness, however; it just doesn't seem to be a computer or a series of transistors going off and on.

To me this all comes back to the start of my post, where I said that anything can be simulated by a computer, even by modern standards (provided the right software and enough processing power).  The biggest fundamental difference between a transistor and a neuron is the fact that transistors operate in clear timesteps (usually, anyway) and only in on and off states.  Given enough of them in the right configuration, you can approximate continuous functions and ranges, which effectively becomes a neuron, so even that doesn't seem to be a stumbling block in theory.
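
A quick sketch of that approximation point: each unit below is strictly on/off, yet the fraction of units that switch on tracks a smooth sigmoid curve. The thresholds and unit count are made up for illustration; this is not a model of a real neuron.

Code: [Select]
# Approximating a continuous function with purely binary threshold units.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binary_approximation(x, n_units=1000):
    # Unit i turns on when x exceeds its fixed threshold; thresholds are
    # spaced so that the count of "on" units approaches the sigmoid curve.
    thresholds = (math.log(p / (1.0 - p))
                  for p in ((i + 0.5) / n_units for i in range(n_units)))
    on_units = sum(1 for t in thresholds if x >= t)
    return on_units / n_units

for x in (-2.0, 0.0, 2.0):
    print(x, round(sigmoid(x), 3), round(binary_approximation(x), 3))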
Logged
Through pain, I find wisdom.

pisskop

  • Bay Watcher
  • Too old and stubborn to get a new avatar
Re: Would AI qualify for civil rights?
« Reply #96 on: September 11, 2012, 09:11:31 am »

We all create our own truths.  I say a sentient robot is a servant and not a being; somebody else says sentience or the wisdom to ask for rights makes them deserving.  What if I said they asked for rights because they are selfish, like a child?  The 'gimme complex'?  Would you still be so eager to give them their rights?

Does anyone ever do anything not selfish?
As in, something that benefits them in no way and also makes them feel the worst possible out of their choices? I don't really see it much, if ever. When people talk about being selfish, they normally mean sacrificing one's image among others, as well as one's self-image, in order to gain material property.

Fair enough.  Yet one can want simply because another has.  In fact, that is a driving force behind history (envy).

And AI rights? I'm for giving animals civil rights =P

Hey, plants are intelligent too, you know. (I'm not kidding)
Hmm, you may have a point there.

We all create our own truths.  I say a sentient robot is a servant and not a being; somebody else says sentience or the wisdom to ask for rights makes them deserving.  What if I said they asked for rights because they are selfish, like a child?  The 'gimme complex'?  Would you still be so eager to give them their rights?
If someone took your post and replaced "sentient robot" with "negro", it would not sound out of place in the 19th century South.

I should not even dignify this.  This is an attempt to supersede logic by pointing out points you disagree with.  These are vaguely connected topics, blacks and robots.  Rights are the only connecting point here.  If you have a proper retort then out with it; otherwise this is simply the same slander that political ads sling.
Logged
Pisskop's Reblancing Mod - A C:DDA Mod to make life a little (lot) more brutal!
drealmerz7 - pk was supreme pick for traitor too I think, and because of how it all is and pk is he is just feeding into the trollfucking so well.
PKs DF Mod!

kaijyuu

  • Bay Watcher
  • Hrm...
Re: Would AI qualify for civil rights?
« Reply #97 on: September 11, 2012, 09:23:02 am »

Quote
As to whether or not my computer deserves rights... well, I don't think so.  Even if it were conscious, that alone doesn't imply that it deserves civil rights.  I think it's plausible to conceive of a conscious computer system that has no individuality and no ability to suffer.  If the system doesn't suffer negative emotion and nothing unique is lost when it is destroyed, there's probably no reason to protect it with civil rights.  This would be a pretty silly thing to create, but should in theory be possible.  If my computer is conscious on any level, I believe it would be like that: no reason to protect it, since it can be completely replaced without loss (theoretically) and didn't suffer in its destruction.
Heh, this reminds me of that cow from Hitchhiker's Guide. The one that was genetically engineered to want to be eaten, as eating things that didn't was considered inhumane.

If we can control what a consciousness thinks is "pain" or "pleasure," then we can manipulate it into wanting what we want. Serving us can become its pleasure, literally. Making us happy would make it happy. It would be the ultimate in brainwashed slavery.

Is that okay? I dunno. I'd prefer nothing be made sophisticated enough to be considered alive, myself. My toaster should remain free of moral consideration because it stays just a toaster, not because I can rationalize away its sentience.

Quote
I should not even dignify this.  This is an attempt to supersede logic by pointing out points you disagree with.  These are vaguely connected topics, blacks and robots.  Rights are the only connecting point here.  If you have a proper retort then out with it; otherwise this is simply the same slander that political ads sling.
His argument is absolutely valid, and ignoring it would be quite ridiculous. The comparison is legitimate.

And I should point out, rhetoric is something you should be very aware of. Don't make the same arguments made by those you disagree with, with just the target switched around. It's a telltale sign of your logic being bullshit, as it's the very definition of a double standard.
« Last Edit: September 11, 2012, 09:27:31 am by kaijyuu »
Logged
Quote from: Chesterton
For, in order that men should resist injustice, something more is necessary than that they should think injustice unpleasant. They must think injustice absurd; above all, they must think it startling. They must retain the violence of a virgin astonishment. When the pessimist looks at any infamy, it is to him, after all, only a repetition of the infamy of existence. But the optimist sees injustice as something discordant and unexpected, and it stings him into action.

Telgin

  • Bay Watcher
  • Professional Programmer
Re: Would AI qualify for civil rights?
« Reply #98 on: September 11, 2012, 09:56:07 am »

Is that okay? I dunno. I'd prefer nothing be made sophisticated enough to be considered alive, myself. My toaster should remain free of moral consideration because it stays just a toaster, not because I can rationalize away its sentience.

I've had mixed feelings on this as of late.  I still feel that we should aspire to create artificial consciousness, including human-level cognition, but maybe only as a curiosity.  I think if we do end up creating conscious robots with human-like intellect, we probably shouldn't create them as dedicated slaves like that, but more as equals.  Good luck getting that sort of thing to be universally accepted though, and there really is no great reason for it when, as you say, it's easy enough to create them to like servitude.

I agree about the toaster, though.  I think that artificial consciousness should probably remain limited to robots whose purpose is to interact with humans on a social level and who would benefit considerably from such a thing.  Toasters should just make toast.

And perhaps we shouldn't create general purpose conscious robots.  Maybe it's not worth the risk and trouble.
Logged
Through pain, I find wisdom.

Lagslayer

  • Bay Watcher
  • stand-up philosopher
Re: Would AI qualify for civil rights?
« Reply #99 on: September 11, 2012, 10:03:06 am »

There are far too many variables involved for me to clearly express my opinion. I put my answer as "undecided", but I may be tempted to put it as "other option leaning against machines", because, as a human, I feel humans are more important, so it serves as a tiebreaker.

Starver

  • Bay Watcher
Re: Would AI qualify for civil rights?
« Reply #100 on: September 11, 2012, 11:04:04 am »

I've used the phrase before, and I think it bears repeating: "Simulated Intelligence".  It's probably a more accurate description, in the first place, to consider a lot of 'AI' examples as 'SI', and even while I'm not wishing to downgrade true 'machine intelligences'[1] to merely electronic/whatever 'thinking machines', something that is recognisably merely doing 'Chinese Box' work is most definitely not worthy of rights[2].

Even under this restriction, there are still a lot of different interpretations that individuals may care to make (or purposefully invoke for mischief) to equate an SI with a sentient 'slave', of course.  I would probably have a whole different viewpoint of an Intelligence that was a descendant of self-replicating Intelligences, even if I adjudged the original progenitor to be our creation.  Very Old Testament biblical, though[3].

I see far fewer problems in assuming (where prejudices are removed) that creatures (from other human races, in the 19thC example, through to dolphins and dogs and even plants) are as worthy of 'our' rights as we are, given that we have a common ancestor.  Otherwise we come up against the paradox of assuming our 'humanity' (for the sakes of the rights we wish to convey) goes back as far as a given ancestor (or as far up as a given ancestor of the alternate line) but does not apply to their own parents (or offspring, on the way back up again).  In a "how many grains of sand make a dune" kind of way, there's no real answer, once you start looking at the addition and removal of individual grains.


What might be really tricky is dealing with known biological intelligences transplanted/copied into artificial housings[5], and with the recognition of intelligences that are life-forms using a totally different 'biological' construct, not from our own Tree Of Life (e.g. an actual silicon-based life, or a plasma-based one, where the corporeal presence and even the time-scales of interaction may be very different).  In the one case we may have to recognise them as true individuals, separate from the (pre-existing, and perhaps now non-existent) original to whom we had granted rights, as opposed to a... I keep coming back to this word... 'toy'.  In the latter, the difficulty is establishing that they are equivalent.  And establishing our equivalence to them[6]...


Oh yeah, and the most recent poster reminds me that I've not voted yet, and probably will not, for similar reasons to those for which they plumped for "Undecided"; only I remain even more aloof from giving an actual answer (even if it is currently undecided, in all essence).


[1] And leaves the definition and the creation thereof as a still open question

[2] Protections from mis-use, yes, which are more the rights of the 'owners'.

[3]
Spoiler: In fact... (click to show/hide)
And I really don't need to try to push that analogy out any further in that direction, but I'll stop now for the sake of not derailing us into an actual Religion argument... Just saying.

[4] Might not be their fault...

[5] Or even "Partials", from a series of books whose name/author I forget, but which are a "dedicated subset of id/ego/super-ego" that a person creates to perform various intellectual chores.

[6] Is it a Brian Aldiss short story where humans, (space-)shipwrecked on a generally benign alien world without all the accoutrements of civilisation, happen to be picked up by another alien race who assume they are native creatures of no extraordinary intelligence and cage them in a zoo?  Until they exhibit a certain trait (which I won't spoil) that makes their keepers realise that they're actually equals?  And how long was that question?
Logged

Zrk2

  • Bay Watcher
  • Emperor of the Damned
Re: Would AI qualify for civil rights?
« Reply #101 on: September 11, 2012, 11:18:42 am »

Quote
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
Unless we know exactly what differentiates a being with consciousness from one without, the Turing test is as good as it gets.

Also, if there is a difference, but the difference cannot be perceived or measured, does the difference really matter?

Of course.  Every difference means something.  The butterfly effect, the real butterfly effect, is always in play.

But the difference has no effect on anything. Nobody can tell that it isn't self-aware; maybe it can't even tell that it isn't self-aware (if that makes sense). Somebody would feel equally guilty about destroying it while it bargained with them to stop; people would react the same. The point of the difference is that the difference is completely imperceptible. It has no effect on anything besides its own truth.

We can't get empirical evidence, but we can logically deduce the impossibility of a machine being sentient.
Logged
He's just keeping up with the Cardassians.

kaijyuu

  • Bay Watcher
  • Hrm...
Re: Would AI qualify for civil rights?
« Reply #102 on: September 11, 2012, 11:20:49 am »

We can't get empirical evidence, but we can logically deduce the impossibility of a machine being sentient.
I'm not sure how you came to that conclusion, as there exist machines that are sentient: Us.
Logged
Quote from: Chesterton
For, in order that men should resist injustice, something more is necessary than that they should think injustice unpleasant. They must think injustice absurd; above all, they must think it startling. They must retain the violence of a virgin astonishment. When the pessimist looks at any infamy, it is to him, after all, only a repetition of the infamy of existence. But the optimist sees injustice as something discordant and unexpected, and it stings him into action.

Graknorke

  • Bay Watcher
  • A bomb's a bad choice for close-range combat.
Re: Would AI qualify for civil rights?
« Reply #103 on: September 11, 2012, 11:22:24 am »

Quote
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
Unless we know exactly what differentiates a being with consciousness from one without, the Turing test is as good as it gets.

Also, if there is a difference, but the difference cannot be perceived or measured, does the difference really matter?

Of course.  Every difference means something.  The butterfly effect, the real butterfly effect, is always in play.

But the difference has no effect on anything. Nobody can tell that it isn't self-aware; maybe it can't even tell that it isn't self-aware (if that makes sense). Somebody would feel equally guilty about destroying it while it bargained with them to stop; people would react the same. The point of the difference is that the difference is completely imperceptible. It has no effect on anything besides its own truth.

We can't get empirical evidence, but we can logically deduce the impossibility of a machine being sentient.

But the point of the entire thought experiment was that its sentience could have no effect on how it behaves, so would there even be a reason to mark a difference? If its effect cannot be measured, does it matter?
Logged
Cultural status:
Depleted          ☐
Enriched          ☑

ECrownofFire

  • Bay Watcher
  • Resident Dragoness
Re: Would AI qualify for civil rights?
« Reply #104 on: September 11, 2012, 01:23:10 pm »

We can't get empirical evidence, but we can logically deduce the impossibility of a machine being sentient.
I'm not sure how you came to that conclusion, as there exist machines that are sentient: Us.
Solipsism, man. You can't prove to me that you're sentient and not just acting like it (a philosophical zombie).
Logged
Pages: 1 ... 5 6 [7] 8 9 ... 12