Bay 12 Games Forum


Poll

Would you ever consent to give a free-thinking AI civil rights (or an equivalent)?

Of course, all sentient beings deserve this.
Sure, so long as they do not slight me.
I'm rather undecided.
No, robots are machines.
Some people already enjoy too many rights as it is.
A limited set of rights should be granted.
Another option leaning towards AI rights.
Another option leaning against AI rights.


Author Topic: Would AI qualify for civil rights?  (Read 14201 times)

Graknorke

  • Bay Watcher
  • A bomb's a bad choice for close-range combat.
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #120 on: September 12, 2012, 07:24:35 am »

That's not really the best analogy. Unless you're capable of changing it at an atomic level to create the various chemicals that would be needed to have a living animal again, you're never going to be able to make a live animal from a dead plant.

But the brain really does work on the same level as a computer: it has nerve cells whose output is purely on an on-or-off basis.
Logged
Cultural status:
Depleted          ☐
Enriched          ☑

pisskop

  • Bay Watcher
  • Too old and stubborn to get a new avatar
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #121 on: September 12, 2012, 09:11:30 am »

That's not really the best analogy. Unless you're capable of changing it at an atomic level to create the various chemicals that would be needed to have a living animal again, you're never going to be able to make a live animal from a dead plant.

But the brain really does work on the same level as a computer: it has nerve cells whose output is purely on an on-or-off basis.

Nerve cells are never [just] on or off.  In fact, more and more research indicates that nerve cells are not the only cells capable of transmitting the data within your mind.

And though rather simplistic, I found it quite easy to understand, and thus it remains a valid model for use...

As to the other debate...

edit:

Logged
Pisskop's Reblancing Mod - A C:DDA Mod to make life a little (lot) more brutal!
drealmerz7 - pk was supreme pick for traitor too I think, and because of how it all is and pk is he is just feeding into the trollfucking so well.
PKs DF Mod!

Telgin

  • Bay Watcher
  • Professional Programmer
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #122 on: September 12, 2012, 09:15:40 am »

I think there are some steps missing in your deduction. We know that the brain has something to do with consciousness, but I don't think we can confidently say what it is about the brain that causes consciousness. I don't think the argument holds that wherever there are brains there is consciousness, therefore one must necessarily be caused by the other, or at least that the physical structure of the brain itself is the cause and origin of consciousness.

I look at it this way: if it's not the structure of your brain that causes consciousness, it must be magic and thus we have no hopes of repeating it.  I don't believe in magic though, so I say it's down to the structure of the brain (which is nothing but a computer in its own sense).

Quote
In fact I think this point sums up the whole discussion pretty well. We don't know what causes consciousness, therefore I don't think we should say computers will definitely gain consciousness at some point in their development, though of course I take a stronger position than this in saying that they never will. We can simulate a machine as a seamless human being living in a human society, but whatever the means used to do so, it does not follow that said means will avail itself to every facet of human existence, in this particular instance consciousness. It is possible to make a human-like object from nothing but metal, string, and a power box. This does not mean, however, that even if the central ball of string that handles inputs and outputs works on the principles of binary, the ball of string in its head will contain consciousness. Likewise, no matter how big you make this ball of string, I (maybe not you) would be understandably hard-pressed to claim that consciousness would arise from this clump of knots.

Saying anything definitively about whether we can create consciousness is of course taking a leap of faith since we only have one example to work from.  The fundamental part of my entire argument is that consciousness has to arise from the flow of information in some form, whether this is high level information of the form 2 + 2 = 4 or implied information such as the transfer of energy between molecules in a neuron's mitochondria.  If this doesn't cause consciousness, or if recreating this information flow doesn't cause consciousness, then the only other explanation is magic, and I don't believe in magic.

Heck, I'm hard pressed to believe that a brain can create consciousness when I really start thinking about it.  Nevertheless, it does do this, and if it does do this as a pure function of information transfer, then in theory it should be possible to create it with any computing medium.

Quote
In theoretical mathematics, there is a very important distinction made when you create a symbol for something: the symbol or name that represents the object you're describing, and the thing you're actually describing. All in all, I think you're begging the question; the issue I'm criticizing is computers being conscious. To say that this is entirely possible because we're just going to model it is to assume that it is already possible before demonstrating it. How do you confidently say you can simulate it when you have no idea how it arises? For someone to simulate consciousness in a model of a brain, that person must have already known enough of how the brain works to simulate consciousness. In fact, it must be assumed possible in the hypothetical world this person lives in. QED, you're begging the question.

My entire argument hinges on the idea that while we don't understand well enough right now how a brain works to recreate one, it should be possible to do so once we do understand this.  The entire point of saying that we can simulate a brain is to break down the seemingly impossible task of understanding every last nuance of how a brain works in its entirety at any given moment.  Instead, all you need to do is model the system in which it works (that is, how the neurons connect and share information), fill that in with a real scenario (a brain map from a human), then let it run.  Its behavior should theoretically be indistinguishable from the original human.
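To make the "model the system, then let it run" idea concrete, here is a minimal sketch of the simplest standard neuron model, a leaky integrate-and-fire unit. The function name and parameter values are my own for illustration, not drawn from any real brain map:

```python
# A minimal sketch of the "model the system, then let it run" idea:
# a leaky integrate-and-fire unit, the simplest standard neuron model.
# Parameter values are illustrative, not taken from any real brain map.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate input over time; fire (emit 1) when the membrane
    potential crosses the threshold, then reset."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # the neuron fires
            potential = 0.0    # and resets
        else:
            spikes.append(0)
    return spikes

# A constant weak input makes the neuron charge up and fire periodically.
print(simulate_lif([0.3] * 10))
```

A real simulation would of course use far richer models and a measured connectome, but the shape of the computation is the same: state, update rule, run.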

If this doesn't create consciousness, it only leaves two possibilities: that consciousness can only be created by neurons (or presumably other organic matter), or it does indeed have a "magical" component somehow.

If it turns out that consciousness only arises from neurons, then this would have to be due to their internal structure for one reason or another, which we could in turn simulate.  We can continue going down the chain until we're simulating quarks, and if we don't produce consciousness at that point then consciousness must only arise from something magical.

Quote
I don't think you understand the implications of what consciousness endows to a being. Given a system that isn't conscious and is programmed to ask for rights, against one that is conscious and is programmed to ask for rights: the latter will have really meant it. It will have meant it in the same manner as you and I asking for rights, regardless of whether it is intellectually capable of anything else.

I understand this, but I maintain that consciousness on its own is not the end-all be-all of whether such a thing deserves rights.  It's conceivable that you could build a conscious system that had no emotion, for example.  You can't be mean to it, since it doesn't feel emotion.  If it also does not learn anything, then it is not an irreplaceable individual.  Destroying it means nothing aside from the loss of property.

Quote
Could you explain what it is in theory that supports your claim? I don't actually see anything in computing theory that would suggest consciousness is possible to produce out of a series of transistors (and neither have I read any compelling explanation accurately depicting one originating out of a clump of neurons, for that matter).

That's pretty much the basis of my argument: we see no reason why consciousness should arise out of neurons either, so with a bit of a stretch of the imagination it seems that it should be possible to do with other systems.  Neurons and transistors can compute the same types of functions, so it seems logical that they can both do the same things if arranged properly.
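As a toy illustration of that functional equivalence, a single threshold unit (a crude neuron abstraction) can implement NAND, which is a universal gate, so any Boolean function a transistor circuit computes can be built by composing such units. The function names, weights, and bias below are hand-picked for the example, not learned or taken from anywhere:

```python
# Toy illustration of functional equivalence: a single threshold unit
# (a crude neuron abstraction) computing NAND. NAND is universal, so any
# Boolean function a transistor circuit computes can be composed from it.
# The weights and bias are hand-picked for the example, not learned.

def nand_unit(a, b, weights=(-2, -2), bias=3):
    """Weighted sum followed by a hard threshold, like an idealized neuron."""
    return 1 if weights[0] * a + weights[1] * b + bias > 0 else 0

def xor_from_nands(a, b):
    """XOR built purely out of NAND units, showing composition."""
    n1 = nand_unit(a, b)
    n2 = nand_unit(a, n1)
    n3 = nand_unit(b, n1)
    return nand_unit(n2, n3)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand_unit(a, b), "XOR:", xor_from_nands(a, b))
```

Whether composing enough of these yields consciousness is exactly the point under dispute; the sketch only shows the narrower claim that the two substrates can compute the same functions.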

Quote
I would disagree here on the basis that you don't seem to be talking about consciousness any more when you talk about computational speed. Mathematical computation, to the best of my knowledge, is just dealing with inputs and outputs. You may theoretically slow down everything in a brain to see what this person will do in the next few seconds based on the position of the neurons and the chemicals in them, but I don't think you can actually see consciousness, i.e., the person experiencing doing these things.
On top of this, I don't actually know how speeding stuff up would generate any appreciable difference. To be frank, you're saying that for consciousness to come about, we just have to speed up several trillion transistors fast enough. I don't think I need to point out just how much more of an explanation is needed to make this work, particularly how speeding something up gets you something entirely new.

No, that wasn't really the point I was trying to drive home.  I was speaking all about why it seems ridiculous, and the computing speed is to me at least part of it.  The relation of consciousness to computing speed isn't critical, but I do think it's important to consider.  It may be a necessary (or near necessary) condition, but it is clearly not sufficient.

In this case I believe the speed of thought, as it were, is going to be important to have a consciousness anything like ours.  If it runs considerably slower, then the way it perceives the world and reacts to it is likely to be a bit alien to us.  Slowing the inputs down to an equivalent speed of course removes this distinction entirely.

Quote
You could make your transistors more complex, but I don't really think this is a particularly good argument that computers can gain consciousness. Claiming that when we make it more complex it will accomplish something of another logical order seems to be lacking a lot of explanation in the middle about how this complexity makes X possible. Why yes, as it gets more complex, it will get new parts and perform new functions. But how do you know that one of these future functions is the one that's being doubted?

Complexity is again a necessary but not sufficient condition.  It's the same with any modern non-trivial program.  Adding small pieces together can rapidly assemble something that is much more than the sum of its parts.

I guess I don't see how transistors and neurons are fundamentally different here.  Alone they're completely useless, but when you put them together they can do amazing things.  The way they do computation is different, but that's a matter of transforming their functions into the other form.

Quote
Suppose I made a notch in my door. As I put more notches in it, it will become more complex. It will gain new parts and new features that the previous door didn't have. Now suppose I said that because the door gains new parts and features as I keep adding notches to it, it will have the ability to turn into the real-life Jimi Hendrix if I add enough notches. This is not a good argument. It is an argument that the door and the computer will likely develop new advancements in the future, but I don't think it actually leads to something that doors and computers have never exhibited any inclination toward in their history of development.

A door, no matter how many notches you add to it, can't become the real life Jimi Hendrix.  It's not a sufficiently complicated system to do something like that.  For that matter, you can't do that with metal either.  Jimi Hendrix isn't made of metal.  A robot Jimi Hendrix is not Jimi Hendrix.

That's also not the point.  The point is that you can make an equivalent Jimi Hendrix robot (equivalence here being whatever you want, but I'm going with equivalence being his mind).


Quote
Nerve cells are never [just] on or off.  In fact, more and more research indicates that nerve cells are not the only cells capable of transmitting the data within your mind.

You can convert digital values to analog signals, so that in theory shouldn't matter.
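For what it's worth, the digital-to-analog point can be sketched in a few lines: a continuous value can be quantized to one of finitely many digital levels and reconstructed to within half a quantization step. The helper name and the 8-bit resolution are arbitrary choices for the illustration:

```python
# Rough sketch of the digital-to-analog point: a continuous value can be
# quantized to one of 2**bits digital levels and reconstructed to within
# half a quantization step. The 8-bit resolution here is arbitrary.
import math

def quantize(x, bits=8, lo=-1.0, hi=1.0):
    """Map a continuous value in [lo, hi] to the nearest digital level."""
    step = (hi - lo) / (2 ** bits - 1)
    code = round((x - lo) / step)      # the digital representation
    return code, lo + code * step      # the code and its analog value

# Sample a sine wave; reconstruction error stays below one half-step.
step = 2.0 / (2 ** 8 - 1)
for i in range(5):
    x = math.sin(i)
    code, approx = quantize(x)
    assert abs(x - approx) <= step / 2 + 1e-12
```

Add more bits and the error bound shrinks, which is why graded (analog-looking) neural signals aren't by themselves an obstacle for a digital system.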
Logged
Through pain, I find wisdom.

pisskop

  • Bay Watcher
  • Too old and stubborn to get a new avatar
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #123 on: September 12, 2012, 09:21:37 am »

Yes, you absolutely are obligated to provide for those people. No one has any more right to safety and comfort than another person. As for denying the natives a right to a better life, that is exactly what early immigrants to the Americas did. Should we simply return the land to the few survivors and ship off?

This is too much.  This is no better than somebody comparing me to a slave driver.  This is referring to a time before the advent of true science.  We did not know what we did was wrong on a sociological scale, because there was no sociology.  But that's simply for starters.
The implications of my answering this question fully will drag in much more fundamental questions than human rights.  Answering this will expose my view on the competency of humans, both as individuals and as a whole.  It calls into question your views on what is morally right, and how valid the application or possession of force is.

I would ask you to please state your point of view in a logical manner.  Do not take for granted that humanity is entitled to such a thing as rights.  I would also ask you to justify this:

It's not even remotely the same culture as medieval islamic societies. The only thing the societies have in common is Islam.

If you think that it's acceptable to deny basic rights to half of your population on the basis of the structure of one chromosome, something is very, very wrong with your thinking. Moral relativism is far more dangerous than allowing cultural diffusion; it can be used to justify anything. For a more extreme example, in Nazi Germany, murdering Jews was acceptable and encouraged. That was the dominant culture. Does that make it morally acceptable?

Do you have a list of proofs or a source I could look into?  I have no clue how accurate this is, because I have not studied it myself yet.  I may now, however.  [edit]The medieval Islamic culture bit.[/edit]

I have presented my own argument as logically, fairly, and evenly as I could on the spot.  I feel as though you are doing nothing but trying to badger me into submission, and that does not make for a productive debate.  I want, and thrive on, debate.  We all could.  We learn new things and more accurate means to define ourselves from a philosophical standpoint.  But all debate must be logical and present the appropriate facts, or we have nothing but squabbling.

Read: Women in Saudi Arabia are too inexperienced in equal rights to get rights.

This is nothing short of an attack.  I will not answer this, save to say that if this is all you have gleaned from my writing, then proper formal education has a long way to go.

edit:  I meant to add that to my prior post.


p-edit:
Ignoring that America drains bone-dry the essential resources and workforce of many 3rd world countries, there's virtually no chance whatsoever for a conscious, communicating AI that has not been raised by human contact to emerge. Normal level intelligence in the sense of creativity, complex problem-solving, and intuition is impossible without some sort of social and worldly connection. The alternative is to teach everything they have to know before they're turned on, which is exceedingly impractical. A hard-AI will most likely require comfort and physical contact, eyes to see with, some form of hearing, and a contrasting motivation like hunger to create some internal drive to learn.

Given that, they'll have a cultural identity of their own from their interactions with their caretakers. So they may indeed think of themselves as American, or Mexican, or whatever. It's puzzling that you place so much importance on cultural identity being the foundation for a person's rights in America, when the founding principles of our country demand the contribution of other ideas and cultures to avoid reverting to imperialism. You can never be sure of the motives of another human being, let alone a robot; your argument here precludes giving rights to people with mental illnesses such as schizophrenia and anti-social behavior, even though they may still be productive members of society.

Well stated.  I merely feel overwhelmed by the influx of new immigrants, many of whom are illegal and place no value on learning the country's history.  They claim no true ties to the country, and the second that shtf they would gladly tell ICE that they are illegal.  Essentially, I feel they are being given a key to the candy factory, and they could easily eat all the chocolate, walk on out, and hand the key over.

I feel I must say that I do have different opinions than some Americans.  I want a strong family unit.  I want rigid social standards.  I want discipline, and I want to see individuals facing responsibility with pride.
« Last Edit: September 12, 2012, 09:33:20 am by pisskop »
Logged
Pisskop's Reblancing Mod - A C:DDA Mod to make life a little (lot) more brutal!
drealmerz7 - pk was supreme pick for traitor too I think, and because of how it all is and pk is he is just feeding into the trollfucking so well.
PKs DF Mod!

pisskop

  • Bay Watcher
  • Too old and stubborn to get a new avatar
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #124 on: September 12, 2012, 09:33:48 am »

...dp...
Logged
Pisskop's Reblancing Mod - A C:DDA Mod to make life a little (lot) more brutal!
drealmerz7 - pk was supreme pick for traitor too I think, and because of how it all is and pk is he is just feeding into the trollfucking so well.
PKs DF Mod!

EveryZig

  • Bay Watcher
  • Adequate Liar
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #125 on: September 12, 2012, 11:23:24 am »

Well stated.  I merely feel overwhelmed by the influx of new immigrants, many of whom are illegal and place no value on learning the country's history.  They claim no true ties to the country, and the second that shtf they would gladly tell ICE that they are illegal.  Essentially, I feel they are being given a key to the candy factory, and they could easily eat all the chocolate, walk on out, and hand the key over.
What, morally, is the difference between an immigrant and a native? Their presence is against the law, but that alone does not make a thing immoral, as there can be unjust laws. They might not place value on learning our history, but a large portion of our own citizens has an astonishing ignorance of the same history, and in some cases would not pass an entrance test. They care about making a living for their family far more than about which country they do it in, but this trait is again shared by many citizens. They do not pay taxes like citizens, but they have no opportunity to do so without their livelihood being destroyed. The only difference I can see that applies to all cases is that they were born in a (sometimes very slightly) different location, which seems to me as irrelevant to morality as height or hair color.
Logged
Soaplent green is goblins!

Eagle_eye

  • Bay Watcher
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #126 on: September 12, 2012, 04:54:42 pm »

Yes, you absolutely are obligated to provide for those people. No one has any more right to safety and comfort than another person. As for denying the natives a right to a better life, that is exactly what early immigrants to the Americas did. Should we simply return the land to the few survivors and ship off?

This is too much.  This is no better than somebody comparing me to a slave driver.  This is referring to a time before the advent of true science.  We did not know what we did was wrong on a sociological scale, because there was no sociology.  But that's simply for starters.
The implications of my answering this question fully will drag in much more fundamental questions than human rights.  Answering this will expose my view on the competency of humans, both as individuals and as a whole.  It calls into question your views on what is morally right, and how valid the application or possession of force is.

I would ask you to please state your point of view in a logical manner.  Do not take for granted that humanity is entitled to such a thing as rights.  I would also ask you to justify this:

It's not even remotely the same culture as medieval islamic societies. The only thing the societies have in common is Islam.

If you think that it's acceptable to deny basic rights to half of your population on the basis of the structure of one chromosome, something is very, very wrong with your thinking. Moral relativism is far more dangerous than allowing cultural diffusion; it can be used to justify anything. For a more extreme example, in Nazi Germany, murdering Jews was acceptable and encouraged. That was the dominant culture. Does that make it morally acceptable?

Do you have a list of proofs or a source I could look into?  I have no clue how accurate this is, because I have not studied it myself yet.  I may now, however.  [edit]The medieval Islamic culture bit.[/edit]

I have presented my own argument as logically, fairly, and evenly as I could on the spot.  I feel as though you are doing nothing but trying to badger me into submission, and that does not make for a productive debate.  I want, and thrive on, debate.  We all could.  We learn new things and more accurate means to define ourselves from a philosophical standpoint.  But all debate must be logical and present the appropriate facts, or we have nothing but squabbling.


I believe that whatever creates the most happiness for the greatest number of conscious entities for the longest period of time is best. I'm aware that we can't objectively measure happiness, but I believe that that is simply a limitation of neuroscience, not something that is fundamentally impossible, and that in the meantime, we can certainly determine that some things are bad, and some things are good. Hunger is a bad experience. Pain is a bad experience. We may not be able to determine with certainty which actions will produce the most happiness, but we can get it fairly close. If you don't think happiness in others is inherently good, then you can justify it selfishly as well: If everyone behaved that way, you personally would be almost guaranteed to have a comfortable life.

I don't have any specific evidence to cite for the medieval Islamic thing at the moment, as it's been a long time since I learned about that period, but I will look.
Logged

Techhead

  • Bay Watcher
  • Former Minister of Technological Heads
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #127 on: September 12, 2012, 10:11:23 pm »

I think that we are starting to drift in the topic of conversation. The question posed in the OP asks:
Would an intelligent, freethinking AI, with or without mobile capacity, qualify for indiscriminate equal rights?
Since the premise of the question includes an "intelligent, freethinking AI", arguing over whether an AI could possibly be freethinking or intelligent distracts from the matter at hand.
Logged
Engineering Dwarves' unfortunate demises since '08
WHAT?  WE DEMAND OUR FREE THINGS NOW DESPITE THE HARDSHIPS IT MAY CAUSE IN YOUR LIFE
It's like you're all trying to outdo each other in sheer useless pedantry.

Eagle_eye

  • Bay Watcher
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #128 on: September 12, 2012, 10:19:48 pm »

There's nothing wrong with a thread getting derailed, if the conversation remains interesting.
Logged

Techhead

  • Bay Watcher
  • Former Minister of Technological Heads
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #129 on: September 12, 2012, 10:25:37 pm »

I wouldn't mind as much if the atmosphere was a little more civil. I didn't say anything when the first-contact thread turned into dolphin genocide, for instance. It's just that the derail is starting to become more of an argument than a conversation.
Logged
Engineering Dwarves' unfortunate demises since '08
WHAT?  WE DEMAND OUR FREE THINGS NOW DESPITE THE HARDSHIPS IT MAY CAUSE IN YOUR LIFE
It's like you're all trying to outdo each other in sheer useless pedantry.

Flare

  • Bay Watcher
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #130 on: September 13, 2012, 01:09:50 am »

I think there are some steps missing in your deduction. We know that the brain has something to do with consciousness, but I don't think we can confidently say what it is about the brain that causes consciousness. I don't think the argument holds that wherever there are brains there is consciousness, therefore one must necessarily be caused by the other, or at least that the physical structure of the brain itself is the cause and origin of consciousness.

I look at it this way: if it's not the structure of your brain that causes consciousness, it must be magic and thus we have no hopes of repeating it.  I don't believe in magic though, so I say it's down to the structure of the brain (which is nothing but a computer in its own sense).

Maybe I haven't been as clear as I should have been. I have conceded that the brain has something to do with consciousness, but to say that we only have to replicate the neural interconnections of the brain is, I think, jumping the gun given what we know about consciousness and our brain structure. The interconnectivity of the neurons might be entirely subsidiary to how consciousness works. Reducing consciousness to one part of the structure where it's housed doesn't seem to me to be a particularly good idea, especially when there are other explanations that compete against each other.

Regarding the claim that it is either the very structure of the brain that causes consciousness or else magic, I think you're posing a false dichotomy. I don't think there is sufficient understanding of how neurons and computational systems work for us to equate them to each other at the level of human brain function. At the level where information is passed around and saved, sure, but much beyond this I think we're simply going on hypothesis. Other than these options, I think there is a more justified one: we don't know what causes consciousness, or even what it is.

Quote
My entire argument hinges on the idea that while we don't understand well enough right now how a brain works to recreate one, it should be possible to do so once we do understand this.  The entire point of saying that we can simulate a brain is to break down the seemingly impossible task of understanding every last nuance of how a brain works in its entirety at any given moment.  Instead, all you need to do is model the system in which it works (that is, how the neurons connect and share information), fill that in with a real scenario (a brain map from a human), then let it run.  Its behavior should theoretically be indistinguishable from the original human.

If this doesn't create consciousness, it only leaves two possibilities: that consciousness can only be created by neurons (or presumably other organic matter), or it does indeed have a "magical" component somehow.

If it turns out that consciousness only arises from neurons, then this would have to be due to their internal structure for one reason or another, which we could in turn simulate.  We can continue going down the chain until we're simulating quarks, and if we don't produce consciousness at that point then consciousness must only arise from something magical.

I don't disagree that we can simulate every single neuron in a brain; I am skeptical that there would be any consciousness at all.
I still think this argument begs the question; to beg the question is to assume the conclusion before proving it. In your example, this hypothetical world where we're simulating every single neuron in the brain, it's already possible to tell whether or not there is consciousness in that computer model (rather than, say, merely the conditions in which consciousness would arise). This means that consciousness is already a computational thing even before we detect consciousness. Whether or not we detect consciousness at that point would be irrelevant, because it's already stated fact in that hypothetical world.

If, for the sake of argument, it is possible that the simulation fails to produce consciousness, the answer is not automatically magic. A simulation can only show the limits of what the programmer understands of the world. If we allowed someone from the 1200s to make a simulation of how something in the world works, the simulation they produced would be starkly different from the ones we make.
We just don't know enough about consciousness to say definitively that in this simulation consciousness would be produced, rather than us merely recognizing the conditions in which consciousness would arise.

If we were to go back to the Chinese room thought experiment, it would go like this: a cursed immortal inside a time dilation room, where inside time goes by much more quickly than outside time, is being handed a huge stack of information to compute. None of it does he understand, because it is in symbols he doesn't understand, but he can nevertheless provide output because of a basic instruction manual about how to deal with the symbols. The pieces of paper this person is churning out will appear to the people outside the box as if the computational system inside the room is simulating consciousness (they're accountants; they can read large piles of paperwork quite quickly). Nowhere in the room does consciousness arise. Replace the person with a system of levers and chains and you get the same result. Replacing him with a computer would be no different. The paper that comes out of the room is not conscious, the thing inside the room need not be conscious, and neither is the instruction manual. The three of these put together do not bring into existence a being that is conscious.
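The room can be caricatured in a few lines of code: a system that maps input symbols to replies by rote rule-following, with nothing in the loop that understands them. The symbols and the rule book here are invented purely for the illustration:

```python
# A toy caricature of the room: replies are produced by rote rule-following.
# The symbols and the rule book are invented for the illustration; nothing
# in the loop understands them.

RULE_BOOK = {
    "你好": "你好吗",   # rule: when handed this symbol, emit that one
    "谢谢": "不客气",
}

def room(symbols):
    """Look each symbol up in the manual; no understanding required."""
    return [RULE_BOOK.get(s, "???") for s in symbols]

print(room(["你好", "谢谢"]))
```

A real conversation would need a vastly larger rule book, but the point of the thought experiment is that size alone doesn't add understanding anywhere in the pipeline.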

Quote
In this case I believe the speed of thought, as it were, is going to be important to have a consciousness anything like ours.  If it runs considerably slower, then the way it perceives the world and reacts to it is likely to be a bit alien to us.  Slowing the inputs down to an equivalent speed of course removes this distinction entirely.

Do you equate the mind to consciousness? You can slow a mind down, but I'm not sure you can slow down consciousness. You might be able to slow down the realization that the entity is experiencing things, but the experiencing of things seems to be instantaneous, from my understanding of it. In any case, I don't know if we can confidently say that speed has anything to do with it coming into existence, if that is what you're saying at all (it seems I'm quite bad at reading comprehension).

Quote
You could make your transistors more complex, but I don't really think this is a particularly good argument that computers can gain consciousness. Claiming that when we make it more complex it will accomplish something of another logical order seems to be lacking a lot of explanation in the middle about how this complexity makes X possible. Why yes, as it gets more complex, it will get new parts and perform new functions. But how do you know that one of these future functions is the one that's being doubted?

Complexity is again a necessary but not sufficient condition.  It's the same with any modern non-trivial program.  Adding small pieces together can rapidly assemble something that is much more than the sum of its parts.

I guess I don't see how transistors and neurons are fundamentally different here.  Alone they're completely useless, but when you put them together they can do amazing things.  The way they do computation is different, but that's a matter of transforming their functions into the other form.

I think the issue is more this: we don't know what makes X; in fact, we know very, very little about it. Yet the claim is that if we make process Y complex enough, it will reproduce X. That argument only works if we have some understanding of what X is, never mind what produces it, and in this specific circumstance we have very little of either.

Quote
A door, no matter how many notches you add to it, can't become the real life Jimi Hendrix.  It's not a sufficiently complicated system to do something like that.  For that matter, you can't do that with metal either.  Jimi Hendrix isn't made of metal.  A robot Jimi Hendrix is not Jimi Hendrix.

That's also not the point.  The point is that you can make an equivalent Jimi Hendrix robot (equivalence here being whatever you want, but I'm going with equivalence being his mind).

You also missed the point; in fact, you highlighted mine. Just because you can make something increasingly complex does not mean it will achieve X in the future. For an argument from continuing complexity to establish that some phenomenon will eventually emerge, the argument must already explain how it will be achieved, if not physically, then at least theoretically, step by step.
« Last Edit: September 13, 2012, 01:14:47 am by Flare »
Logged

Graknorke

  • Bay Watcher
  • A bomb's a bad choice for close-range combat.
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #131 on: September 13, 2012, 01:17:39 am »

So what is consciousness if it isn't to do with the structure and it isn't soul magic? I genuinely cannot think of any alternatives as to how thought works.
Logged
Cultural status:
Depleted          ☐
Enriched          ☑

EveryZig

  • Bay Watcher
  • Adequate Liar
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #132 on: September 13, 2012, 01:49:54 am »

Regarding the claim that it is either the very structure of the brain that causes consciousness or else soul magic, I think you're posing a false dichotomy. I don't think there is sufficient understanding of how neurons and computational systems work for us to equate them with each other at the level of human brain function. At the level where information is passed around and saved, sure, but much beyond that I think we're simply going on hypothesis. Besides these options, I think there is a more justified one: we don't know what causes consciousness, or even what it is.
I don't think it is that false of a dichotomy when you make your options slightly less specific. I would state them as:
A) Consciousness occurs solely through (a very complex series of) normal physical processes, such as physical brain structure, chemistry, electricity, etc. As with all normal physical interactions at a larger-than-quantum scale, these processes can (in theory) be measured, understood, and simulated.
B) Consciousness does not occur solely through normal physical processes, making use of souls, magic, or some other force unknown to modern science. In this case, it might not be entirely measurable and would therefore not be simulatable. (Note that even the influence of mysterious-but-evidenced things such as dark matter would fall into this category, since the means by which these interact significantly with the physical portions of the brain would be unknown to science.)

As forces unknown to physics are highly implausible without serious evidence backing them up (and so far there has been none), I do not think that B is a viable option.


Also, about the Chinese Room, there is a very important and quite testable distinction between the standard Chinese Room metaphor and an actual brain. That distinction is that the standard Chinese Room cannot learn, instead reacting to every situation from a large but static list. On the other hand, brains and brain-simulations worth considering constantly rewrite small parts of themselves in order to learn and adapt to their experience.
The amount and quality of analysis needed in order to learn and adapt from events and ideas in a coherent way is extremely close to understanding, if not the same as it.
(And yes, programs can rewrite parts of themselves. Very primitive versions of this already exist in things such as evolutionary algorithms.)
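As a minimal sketch of that last point, here is a toy evolutionary (hill-climbing) loop in Python. The target string, alphabet, and generation count are arbitrary choices for illustration, not any particular existing system; real evolutionary algorithms are far more elaborate, but the core idea of mutate-evaluate-keep is the same.

```python
import random

def evolve(target: str, generations: int = 8000, seed: int = 0) -> str:
    """Hill-climb toward `target` by mutating one character at a time,
    keeping any mutation that does not lower the match score."""
    rng = random.Random(seed)                 # fixed seed for reproducibility
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    candidate = [rng.choice(alphabet) for _ in target]
    score = sum(c == t for c, t in zip(candidate, target))
    for _ in range(generations):
        mutant = candidate[:]
        i = rng.randrange(len(target))        # mutate one random position
        mutant[i] = rng.choice(alphabet)
        mutant_score = sum(c == t for c, t in zip(mutant, target))
        if mutant_score >= score:             # keep mutations that don't hurt
            candidate, score = mutant, mutant_score
    return "".join(candidate)

print(evolve("dwarf"))
```

No human edits the candidate after the loop starts; its behaviour (how well it matches the target) changes purely through mutation and selection, which is the primitive kind of self-adaptation being referred to.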
« Last Edit: September 13, 2012, 01:52:58 am by EveryZig »
Logged
Soaplent green is goblins!

Mlamlah

  • Bay Watcher
  • The Androgynous Nerd
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #133 on: September 13, 2012, 04:04:14 am »

I would say that their culture has been tried over thousands of years. While the Europeans languished in the aftermath of Western Rome, they made advancements that allowed the advancement and preservation of knowledge. Where democracy was perverted and failed, they succeeded in providing a viable alternative that only showed true flaws (from an impersonal perspective) once the industrial revolution rolled around. Even then, some may argue that had they developed the first mechanizations, they would be stationing troops on our doorstep to keep us from 'revolting and terrorizing'.

Their methods are different, and although we do not approve of the moral implications, to call the system wrong and unthinkable is as closed-minded as one may claim me to be.

Besides, on a lighter note, if men like me were not around, with whom would you argue? :D

Calling something you morally disagree with bad is not necessarily closed-minded, especially in the case of something like women's rights. An all-encompassing inequality of rights based on gender is illogical and provides no benefit to a society, unless creating a hard-wired societal underclass could be considered beneficial. It serves no point or purpose beyond the purely cultural. It is detrimental to a society, if you consider that it greatly harms 50% of the people in it while being of very little tangible benefit to the other 50%. It is not closed-minded to say that something illogical and senseless is outdated.

To clarify, being closed-minded is to deny something without giving it thought, without opening your mind to it.
« Last Edit: September 13, 2012, 04:06:29 am by Mlamlah »
Logged

Lagslayer

  • Bay Watcher
  • stand-up philosopher
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #134 on: September 13, 2012, 07:05:49 am »

Spoiler (click to show/hide)
I'm going to play devil's advocate for a bit.

People don't all think the same way. People don't always think things through. If presented with a piece of information, people may follow through on it without understanding the complications that come with it until it is too late. They may also use that information for what are, from your point of view, counterproductive purposes, because wanting a certain outcome is independent of intelligence. What you know is not directly correlated with what you will use that knowledge for.

Let's use an example. You are playing a multiplayer game. One guy on your team is supposed to swing around the back and flank the enemy while the rest of you attack from the front. This player is not much of a team player, preferring to rack up kills instead of focusing on the objective. One of the enemy players is running around carelessly on the other side of the map. You notice this; now what do you do? You can keep it to yourself and the plan proceeds normally, or you can tell the team and risk your teammate running off for an easy kill, possibly ruining your entire attack strategy. The majority of your opponents escape, and they possibly kill the rest of you off because that one guy wasn't there helping.

By controlling outside influence, people can be more easily directed. When and where this is acceptable is still debatable.
Pages: 1 ... 7 8 [9] 10 11 12