Bay 12 Games Forum

Poll

Would you ever consent to give a free-thinking AI civil rights (or an equivalent)?

Of course, all sentient beings deserve this.
Sure, so long as they do not slight me.
I'm rather undecided.
No, robots are machines.
Some people already enjoy too many rights as it is.
A limited set of rights should be granted.
Another option leaning towards AI rights.
Another option leaning against AI rights.


Author Topic: Would AI qualify for civil rights?  (Read 14346 times)

Starver

  • Bay Watcher
Re: Would AI qualify for civil rights?
« Reply #135 on: September 13, 2012, 07:43:09 am »

Quote
Also, about the Chinese Room, there is a very important and quite testable distinction between the standard Chinese Room metaphor and an actual brain: the standard Chinese Room cannot learn, instead reacting to every situation from a large but static list. Brains, and any brain-simulations worth considering, on the other hand, constantly rewrite small parts of themselves in order to learn and adapt from their experience.
The amount and quality of analysis needed in order to learn and adapt from events and ideas in a coherent way is extremely close to understanding, if not the same as it.
(And yes, programs can rewrite parts of themselves. Very primitive versions of this already exist in things such as evolutionary algorithms.)
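
(To make that last parenthetical concrete: a minimal hill-climbing loop of the "(1+1)" evolutionary flavour, in Python. The bit-string genome, the fitness function, and the loop bounds are all invented for illustration; real evolutionary algorithms add populations, crossover, and so on.)

Code: [Select]
# A toy evolutionary loop: a candidate "program" (here just a bit-string
# genome) is repeatedly mutated, and variants that score at least as well
# replace the current one.
import random

GENOME_LEN = 32

def fitness(genome):
    return sum(genome)  # count of 1-bits; maximal when all bits are 1

genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]
for generation in range(10000):
    mutant = genome[:]
    mutant[random.randrange(GENOME_LEN)] ^= 1   # flip one random bit
    if fitness(mutant) >= fitness(genome):
        genome = mutant                         # keep the improvement
    if fitness(genome) == GENOME_LEN:
        break

print(f"solved in {generation} generations: {genome}")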

Learning need not be "rewriting" its own instructions.  A CR could have an instruction book that lets the (symbolic) equivalent of "Rover is a Dog" as an input[1] guide the 'mover' of information (immortal, computer, Jacquard Loom, whatever) to store some information in a functional memory system[2], such that a later request of "What is Rover?" retrieves this information and replies "A dog", rather than the "I do not know/I do not have that information yet" default of earlier times.  That would be 'learning' behaviour, as seen by those ex-camera (outside-the-room) observers.

One other solution is to have the Instruction Book tell the Mover that, on getting a "<foo> is a <bar>" set of symbols, they should go to the <foo> page of the book and add/change something that equates to "If interrogated as to what <foo> is, respond by saying that it is a <bar>".  And on the <bar> page you'd have them adding an "an example of a <bar> is that being which is identified as <foo>"-equivalent catch.  Note that the Mover can have as little idea of what this meta-notation actually accomplishes when making the amendments as they will have when the appropriate question comes in through the room's letterbox and they compose the reply based upon them.  But, as shown above, there are other solutions, and a standard computer (or a Babbage-inspired Jacquard Loom machine with storage mechanics) can act with a static book and a pigeon-hole-equivalent system; a minimal sketch of both arrangements follows.
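
(An illustrative Python sketch; the message formats, the dictionary standing in for the pigeonholes, and the canned replies are all invented for this example. The point is only that the rulebook stays static while the 'learning' lives in the filed cards.)

Code: [Select]
# The Chinese Room with pigeonholes: the rules below never change, and the
# Mover applies them blindly; all apparent learning lives in `pigeonholes`.
pigeonholes = {}  # symbol -> symbol, e.g. "Rover" -> "Dog"

def mover(message):
    words = message.rstrip("?.").split()
    # Static rule 1: on "<foo> is a <bar>", file a card and acknowledge.
    if len(words) == 4 and words[1:3] == ["is", "a"]:
        pigeonholes[words[0]] = words[3]
        return "That's nice to know."
    # Static rule 2: on "What is <foo>?", consult the pigeonholes.
    if len(words) == 3 and words[:2] == ["What", "is"]:
        bar = pigeonholes.get(words[2])
        return f"A {bar}." if bar else "I do not have that information yet."
    return "I do not understand."

print(mover("What is Rover?"))  # I do not have that information yet.
print(mover("Rover is a Dog"))  # That's nice to know.
print(mover("What is Rover?"))  # A Dog.

# The book-amending variant is the same mechanism with book and memory
# merged: filing a card becomes writing a new response rule onto the
# <foo> page of the book itself.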

(By the way, this 'cursed immortal' has to be assumed, for the CR job, to never be learning in their own right, or else they might derive something of a meaning for themselves...  Maybe not even correctly, especially if the syntax of the input language differs significantly from any that the Mover knows, but functionally equivalent, such that the Mover could lose the instruction book, or access to the pigeon holes, and still remember how "What is Rover?" should be answered, even if not why.  But of course it's a basic assumption of the CR thought experiment that the Mover (if sentient) is as uninterested in, or incapable of, learning as their non-sentient equivalent 'components'.)

Yes, this adds a component not present in the most basic call-and-response CR system, but it needn't be excluded, in whatever form, unless you're positing an explicitly memoryless version; in that case the memorylessness is itself the postulate doing the work, which I think is a point worth emphasising.


[1] With "That's nice to know", or similar, as the CR output, if one is 'expected' by the system that has been set up.

[2] In whatever way, which could be a set of pigeonholes for symbol-embossed cards for an Immortal 'mover', amongst others.
Logged

Telgin

  • Bay Watcher
  • Professional Programmer
Re: Would AI qualify for civil rights?
« Reply #136 on: September 13, 2012, 01:02:16 pm »

Quote
Maybe I haven't been as clear as I should have been. I have conceded that the brain has something to do with consciousness, but to say that we only have to replicate the neural interconnections of the brain is, I think, jumping the gun based on what we know about consciousness and our brain structure. The interconnectivity of the neurons might be entirely subsidiary to how consciousness works. Reducing consciousness to one part of the structure where it's housed doesn't seem to me to be a particularly good idea, especially when there are other explanations that compete against each other.

Regarding the claim that it is either the very structure of the brain that causes consciousness or it is magic, I think you're posing a false dichotomy. I don't think there is sufficient understanding of how neurons and computational systems work for us to equate them to each other at the level of human brain function. At the level where information is passed around and saved, sure, but much beyond this I think we're simply going on hypothesis. Other than these options, I think there is a more justified one: we don't know what causes consciousness, or even what it is.

What else could be the cause of it?  I've basically stated it has to be caused by physics or magic.  I don't see how it could be anything else.  It's either a product of the way the rules of our universe work (i.e. the way neurons interact with each other) or part of something incomprehensible because it's outside the rules of the universe (which as I've stated seems unlikely to me).

Quote
I don't disagree that we can simulate every single neuron in a brain; I am skeptical of there being any consciousness at all.
I still think this argument begs the question (to beg the question is to assume the conclusion before proving it). In your example, in this hypothetical world where we're simulating every single neuron in the brain, it's already possible to tell whether or not there is consciousness in that computer model (rather than, say, merely the conditions in which consciousness would arise). This means that consciousness is taken to be a computational thing even before we detect it. Whether or not we detect consciousness at that point is irrelevant, because it's already stated as fact in that hypothetical world.

That's going to be a fundamental problem with any discussion on creating artificial consciousness, since we can't ever tell if it's present or not.  All I'm stating is that if we mimic the way the brain works down to its most fundamental levels, then it should logically do the same things that a real brain does, and generating consciousness is one of those things.  We won't be able to know for sure, but it seems reasonable to me that it should.  Actually, I suspect we might be able to tell in this case, because I think consciousness is a very important part of what makes a human's thought processes work like they do, so if we created an artificial brain that tried to think like a human but lacked consciousness it might produce different results.  I obviously don't know this, however.

Quote
If, for the sake of the argument, it is possible that it fails to produce consciousness, the answer is not automatically magic. A simulation can only show the limits of what the programmer understands of the world. If we let someone from the 1200s make a simulation of how something in the world works, the simulation they produce is going to be in stark contrast to the ones we would make.
We just don't know enough about consciousness to say definitively that in this simulation consciousness would be produced, rather than us recognizing the conditions in which consciousness would arise.

Of course, which is why I'm still talking in complete hypotheticals.  Hypothetically if we understood enough about the brain we could create a simulation that replicated its results perfectly.  We can't do that now, but one day I don't see why we wouldn't be able to.  Simulating the brain's quantum mechanics is about as fundamental as it can get, which should produce the best simulation possible, no matter how much we learn about the way the brain works abstractly.
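
(For a sense of what "simulate the brain at a fundamental level" cashes out to mechanically, here is a toy leaky integrate-and-fire neuron in Python: one membrane equation stepped through time. This is far above the quantum level discussed here, and every constant is an illustrative round number rather than a measured value.)

Code: [Select]
# One leaky integrate-and-fire neuron, stepped with the Euler method.
V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0   # mV (illustrative)
TAU, DT = 10.0, 0.1                               # ms (illustrative)

def step(v, input_current):
    """Advance the membrane potential by one DT time step."""
    v += DT * ((V_REST - v) + input_current) / TAU
    if v >= V_THRESH:       # threshold crossed: spike and reset
        return V_RESET, True
    return v, False

v = V_REST
for t in range(1000):       # 100 ms of simulated time
    v, spiked = step(v, input_current=20.0)
    if spiked:
        print(f"spike at t = {t * DT:.1f} ms")

A whole-brain simulation of this kind is "just" billions of such units coupled together, which is where the processing-power problem mentioned below comes from.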

Quote
If we were to go back to the Chinese room thought experiment, it would go like this: a cursed immortal inside a time dilation room, where inside time goes by much more quickly than outside time, is being handed a huge stack of information to compute. None of it does he understand, because it is written in symbols foreign to him, but he can nevertheless provide output by following a basic instruction manual for dealing with the symbols. The pieces of paper this person is churning out will appear, to the people outside this box, as if the computational system inside the room is simulating consciousness (they're accountants; they can read large piles of paperwork quite quickly). Nowhere in the room does consciousness arise. Replace the person with a system of levers and chains and you get the same result. Replacing it with a computer would be no different. The paper that comes out of the room is not conscious, the thing inside the room need not be conscious, and neither is the instruction manual. The three of these put together do not bring into existence a being that is conscious.

Quote
Nowhere in the room does consciousness arise.

How do you know this?  Is it impossible that a person running a "consciousness program" in their head is creating a second one in their head?  Is it impossible that a series of levers and chains running such a program creates consciousness?  Where does the problem lie?  Is it because there's no single point of computation (a brain)?  Is it because of the lack of neurons?  Is it because you can't imagine a disembodied sense of self somewhere in the mess?  As I've stated before, I still can't imagine how our brains do that, but they do.  And there's currently nothing we know about our brain that says that you can't replicate its function with chains and levers.

Quote
Do you equate the mind to consciousness? You can slow a mind down, but I'm not sure you can slow down consciousness; you might be able to slow down the realization that the entity is experiencing things, but the experiencing of things seems to be instant, from my understanding of it. In any case, I don't know if we can confidently say that speed has anything to do with it coming into existence, if that is what you're saying at all (it seems I'm quite bad at reading comprehension).

That's sort of what I'm saying, but not really.  The speed at which the consciousness "thinks" would have to be tied to the speed at which its brain functions.  Slowing that down or slowing down its speed of perception would alter the way that it perceives the world, and likely cause its behavior to be different from ours.  Everything appears instant to us because, well, that's the speed that we think at.  In any case, you could theoretically slow it down as much as you wanted, but it probably becomes increasingly less like us as you do so (unless you slow down the world around it equivalently, at which point it's no different.)

Really, this all just came from a scenario I posed to myself: what happens if you "single step" consciousness like a computer processor in a debugger?  If it only experiences the world one "instant" at a time, does it still exist, or does the consciousness break down?  That's a tricky one, since it's pretty hard to imagine.
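
(The single-stepping question has a crisp answer for the computation itself, even if not for the experience: a deterministic update function passes through exactly the same sequence of states however slowly or unevenly you step it. A minimal Python sketch, with an arbitrary stand-in update function:)

Code: [Select]
# "Single-stepping" a toy state machine: the trace is identical whether
# you run it flat out or pause arbitrarily long between ticks, because
# the computation has no access to wall-clock time between steps.
import time

def tick(state):
    # arbitrary deterministic stand-in for "one instant of brain activity"
    return (state * 1103515245 + 12345) % (2 ** 31)

def run(steps, pause=0.0):
    state, trace = 42, []
    for _ in range(steps):
        state = tick(state)
        trace.append(state)
        if pause:
            time.sleep(pause)   # the "debugger pause"; invisible to tick()
    return trace

assert run(1000) == run(1000, pause=0.001)   # same history either way

Whether subjective experience survives that kind of pacing is, of course, the part the sketch cannot answer.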

Quote
I think the issue is more this: we don't know what makes X; in fact, we know very, very little about it. But the argument says that if we make process Y very complex, it will be able to reproduce X. I think this argument would only work if we had some understanding of what X is (never mind what produces it), of which we have very little in this specific circumstance.

I don't disagree that we don't know what makes X here, where we disagree is that you seem to think that no matter how much we understand about X we cannot make Y produce X.  Based on the fact that Y could be built upon the rules of the universe (going way back to the start of my post), I don't see how this could be the case.

Quote
You also missed the point; in fact, you highlighted mine. Just because you can make something increasingly complex does not mean that it will achieve X in the future. For an argument that continuing complexity will achieve some sort of phenomenon in the future to work, it would already have to explain how this will be achieved, if not physically then at least theoretically, step by step.

What I'm trying to say is that we can create consciousness without having to make it be a human consciousness.  That's what I mean by equivalent.  You can't make a human brain out of transistors, because human brains are made of biological matter.  You can however make a system that does the exact same things, but with transistors.  It should then produce the same effects, including generating a consciousness like ours.  If replicating the way that the brain functions doesn't produce consciousness then I just don't know what would.

I don't make any claim that I or anyone else knows how to create consciousness, so no, I can't state with absolute certainty that we'll be able to replicate it with computer systems.  Adding complexity alone absolutely will not be enough.  In fact, it may be possible to produce consciousness with computers of current complexity; we just don't know the magic combination that produces it yet (or, if we do, we can't recognize it in any case).  Brute-force simulation of an existing system that we believe to be conscious (a brain) is the best we can do right now, and for that we just need more processing power.
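
(For scale, a back-of-envelope version of "we just need more processing power". The neuron count is the standard estimate of roughly 8.6e10; the other constants are loudly assumed round numbers, and published estimates vary by orders of magnitude either way.)

Code: [Select]
# Rough arithmetic for brute-force brain simulation at the neuron level.
NEURONS        = 8.6e10   # human brain, standard estimate
SYNAPSES_EACH  = 1e4      # synapses per neuron (assumed round number)
UPDATE_RATE_HZ = 100      # update steps per simulated second (assumed)
OPS_PER_EVENT  = 10       # arithmetic ops per synapse per step (assumed)

ops = NEURONS * SYNAPSES_EACH * UPDATE_RATE_HZ * OPS_PER_EVENT
print(f"{ops:.1e} ops per simulated second")   # ~8.6e17, i.e. ~exascale

And that is for an abstract neuron model; quantum-level simulation, as discussed above, would be unimaginably more expensive.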
Logged
Through pain, I find wisdom.

Techhead

  • Bay Watcher
  • Former Minister of Technological Heads
Re: Would AI qualify for civil rights?
« Reply #137 on: September 13, 2012, 01:21:28 pm »

I suppose a better analogy for CR learning is this: if the CR can't solve calculus problems, you should be able to teach it calculus in the same manner that you would teach a human. Teaching the CR how to solve a couple of calculus problems should allow it to solve similar calculus problems. Strict memory-storage learning would not help it: it would remember the problems from practice, but the problems on the test would be new to it.
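
(The distinction can be shown in a few lines of Python. Differentiating single-term polynomials stands in for "calculus", and both "learners" are invented toys: one stores worked problems verbatim, the other has captured the rule.)

Code: [Select]
# Rote storage vs. rule learning. A problem is (coeff, power), meaning
# c*x^n, and its answer is the derivative in the same representation.
memorized = {(3, 2): (6, 1)}   # practiced: d/dx 3x^2 = 6x

def memorizer(coeff, power):
    # Strict memory-storage "learning": exact lookup only.
    return memorized.get((coeff, power), "never saw this one")

def rule_learner(coeff, power):
    # Has internalized the power rule: d/dx c*x^n = (c*n)*x^(n-1).
    return (coeff * power, power - 1)

print(memorizer(3, 2))      # (6, 1): remembers the practice problem
print(memorizer(5, 3))      # "never saw this one": fails the fresh test
print(rule_learner(5, 3))   # (15, 2): generalizes to the new problem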

A lot of these arguments were covered in John Searle's original "Minds, Brains, and Programs", and in the formal replies to that experiment and his counter-arguments. I personally believe that the software is conscious, and that the brain is hardware on which the software is stored and run. Whether or not a Turing machine, or a Turing-equivalent machine, is a consciousness-compatible architecture is another story (see the Church–Turing thesis and hypercomputation). If you have a consciousness-compatible architecture, then you should be able to run a conscious program on it.
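
(The software/hardware split is easy to exhibit in miniature. Below, the transition table is the "software" and the Python loop is one possible "hardware"; any Turing-equivalent substrate, levers and chains included, could host the same table unchanged. The toy program increments a binary number; all of it is illustrative.)

Code: [Select]
# Software: a Turing-machine transition table for binary increment.
# (state, symbol) -> (new state, symbol to write, head movement)
program = {
    ("right", "0"): ("right", "0", +1),
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),  # hit the blank: carry leftward
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("done",  "1",  0),  # 0 + carry -> 1, halt
    ("carry", "_"): ("done",  "1",  0),  # carried past the top digit
}

# Hardware: one interpreter the table happens to run on.
def run(tape_string):
    tape = dict(enumerate(tape_string))  # sparse tape; blank cells = "_"
    state, head = "right", 0
    while state != "done":
        state, write, move = program[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    cells = (tape.get(i, "_") for i in range(min(tape), max(tape) + 1))
    return "".join(cells).strip("_")

print(run("1011"))   # 1100, i.e. 11 + 1 = 12 in binary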
Logged
Engineering Dwarves' unfortunate demises since '08
WHAT?  WE DEMAND OUR FREE THINGS NOW DESPITE THE HARDSHIPS IT MAY CAUSE IN YOUR LIFE
It's like you're all trying to outdo each other in sheer useless pedantry.

Graknorke

  • Bay Watcher
  • A bomb's a bad choice for close-range combat.
Re: Would AI qualify for civil rights?
« Reply #138 on: September 13, 2012, 03:26:57 pm »

People simulate consciousnesses in their own minds all of the time.
It's a basic element of human interaction: people build up an idea of the other person in their heads to try and decide what would be appropriate to say, which I would say pretty much counts as simulating a consciousness.
Logged
Cultural status:
Depleted          ☐
Enriched          ☑

Eagle_eye

  • Bay Watcher
Re: Would AI qualify for civil rights?
« Reply #139 on: September 13, 2012, 05:49:37 pm »

Not even close. That's just extrapolating from observed behaviors. Consciousness is the actual experiences someone undergoes: You experience pain, not rupture of cell walls leading to an electrical impulse in a nerve. You see the color red, not an electrical impulse caused by photons with certain wavelengths hitting your retina.
Logged

Graknorke

  • Bay Watcher
  • A bomb's a bad choice for close-range combat.
Re: Would AI qualify for civil rights?
« Reply #140 on: September 13, 2012, 06:11:52 pm »

Quote
Not even close. That's just extrapolating from observed behaviors. Consciousness is the actual experiences someone undergoes: You experience pain, not rupture of cell walls leading to an electrical impulse in a nerve. You see the color red, not an electrical impulse caused by photons with certain wavelengths hitting your retina.
So consciousness is the process of the brain parsing the meaning of nerve signals? That covers a whole load of things then. I would assume most of the mammals and probably quite a few of the other animals too.
Logged
Cultural status:
Depleted          ☐
Enriched          ☑

Telgin

  • Bay Watcher
  • Professional Programmer
Re: Would AI qualify for civil rights?
« Reply #141 on: September 13, 2012, 06:13:27 pm »

I think it's generally believed that most animals, if not all vertebrates and most invertebrates, are indeed conscious.
Logged
Through pain, I find wisdom.

Graknorke

  • Bay Watcher
  • A bomb's a bad choice for close-range combat.
Re: Would AI qualify for civil rights?
« Reply #142 on: September 13, 2012, 06:16:58 pm »

Quote
I think it's generally believed that most animals, if not all vertebrates and most invertebrates, are indeed conscious.
I was going to say that if it encompassed all animals then the targets for an AI could be set lower, but then going from any animal brain to a human one would probably be a tinier jump than getting to the point of simulating a brain on any level in the first place.
Logged
Cultural status:
Depleted          ☐
Enriched          ☑

Eagle_eye

  • Bay Watcher
Re: Would AI qualify for civil rights?
« Reply #143 on: September 13, 2012, 06:17:48 pm »

Quote
I think it's generally believed that most animals, if not all vertebrates and most invertebrates, are indeed conscious.

Pretty sure arthropods aren't conscious.

Also, "animal brain to human brain" encompasses a huge range of things. From the brain of a mammal to a human brain would probably be a relatively small jump. The brain of a worm, however, would be vastly easier to simulate.
Logged

Telgin

  • Bay Watcher
  • Professional Programmer
Re: Would AI qualify for civil rights?
« Reply #144 on: September 13, 2012, 06:50:43 pm »

I'm not entirely sure I'd agree that arthropods aren't conscious, but I do agree that there's a case to be made for that argument.  Their behavior is much simpler than ours (especially for something like an ant), so there is a convincing case for them being pure input -> output machines.  However, I'd be pretty surprised if something like a lobster wasn't conscious: it has a brain like any vertebrate (if a bit simpler), and I don't see why it shouldn't be.  Much less intelligent, sure, but that's not a necessary part of consciousness.

You can get into the whole question of: as a creature gets simpler and has a simpler brain, when, if ever, does consciousness end?  That's the ultimate question that no one has an answer to.  Some arthropods, like ants, may well not be conscious, but I think it's painting with too broad a stroke to say they all aren't.  Ants may be conscious too, if extremely stupid and narrow-minded.  We'll never know.

In any case, I completely agree that it should be easier to reproduce the conscious component of a simpler brain (like a mouse's, or a lobster's), for the simple reason that there is less of it.  Theoretically you should be able to extrapolate from there, but that gets into complicated and muddy areas of neuroscience, where we can't really say flatly what makes something smarter, or conscious in the first place.  Unfortunately, simulating a lobster brain won't in and of itself get us any closer to answering that question, although it might make the analysis that does lead us there easier.
Logged
Through pain, I find wisdom.

pisskop

  • Bay Watcher
  • Too old and stubborn to get a new avatar
Re: Would AI qualify for civil rights?
« Reply #145 on: September 14, 2012, 10:32:53 am »

Calling something you morally disagree with bad is not necessarily close-minded, especially in the case of something like women's rights. An all-encompassing inequality in rights based on gender is illogical and provides no benefit to a society, unless creating a hard-wired societal underclass could be considered beneficial. It is something without point or purpose beyond the purely cultural role it serves. It is detrimental, if you consider that it greatly harms 50% of the people in the society while providing very little tangible benefit to the other 50%. It is not close-minded to say that something illogical and senseless is outdated.

To clarify, being close-minded is to deny something without giving it thought, without opening your mind to it.


- - - - - - -


What, morally, is the difference between an immigrant and a native? The immigrant's presence may be against the law, but that alone does not make a thing immoral, as there can be unjust laws. They might not place value on learning our history, but a large portion of our own citizens have an astonishing ignorance of the same history, and in some cases would not pass an entrance test. They care about making a living for their family far more than about which country they do it in, but this trait is again shared by many citizens. They do not pay taxes like citizens, but they have no opportunity to do so without their livelihood being destroyed. The only difference I can see that applies to all cases is that they were born in a (sometimes very slightly) different location, which seems to me as irrelevant to morality as height or hair color.


- - - - - - -
I believe that whatever creates the most happiness for the greatest number of conscious entities for the longest period of time is best. I'm aware that we can't objectively measure happiness, but I believe that that is simply a limitation of neuroscience, not something fundamentally impossible, and that in the meantime we can certainly determine that some things are bad and some things are good. Hunger is a bad experience. Pain is a bad experience. We may not be able to determine with certainty which actions will produce the most happiness, but we can get fairly close. If you don't think happiness in others is inherently good, then you can justify it selfishly as well: if everyone behaved that way, you personally would be almost guaranteed a comfortable life.

I don't have any specific evidence to cite for the medieval Islamic thing at the moment, as it's been a long time since I learned about that period, but I will look.


I feel like I am leaving something out, but I have to get back to work...


- - - - - - -

Quote
I think it's generally believed that most animals, if not all vertebrates and most invertebrates, are indeed conscious.

« Last Edit: September 14, 2012, 11:36:37 am by pisskop »
Logged
Pisskop's Reblancing Mod - A C:DDA Mod to make life a little (lot) more brutal!
drealmerz7 - pk was supreme pick for traitor too I think, and because of how it all is and pk is he is just feeding into the trollfucking so well.
PKs DF Mod!

pisskop

  • Bay Watcher
  • Too old and stubborn to get a new avatar
Re: Would AI qualify for civil rights?
« Reply #146 on: September 14, 2012, 11:47:22 am »

Quote
I think it's generally believed that most animals, if not all vertebrates and most invertebrates, are indeed conscious.

Pretty sure arthropods aren't conscious.

Also, "animal brain to human brain" encompasses a huge range of things. From the brain of a mammal to a human brain would probably be a relatively small jump. The brain of a worm, however, would be vastly easier to simulate.

I just read this.  I'll look into it, but organization seems to be the key difference between the monkey brain and the human brain.

http://www.howcomyoucom.com/selfnews/viewnews.cgi?newsid999709659,88454,.shtml
« Last Edit: September 14, 2012, 11:49:27 am by pisskop »
Logged
Pisskop's Reblancing Mod - A C:DDA Mod to make life a little (lot) more brutal!
drealmerz7 - pk was supreme pick for traitor too I think, and because of how it all is and pk is he is just feeding into the trollfucking so well.
PKs DF Mod!

Telgin

  • Bay Watcher
  • Professional Programmer
Re: Would AI qualify for civil rights?
« Reply #147 on: September 14, 2012, 12:15:10 pm »

It has been argued, to varying degrees of success, that 'lower' animals experience life differently and with different levels of consciousness.  This is mostly just conjecture, of course, but memory and a sense of 'I' have a bit to do with it.  Can an animal identify itself in a mirror?  Does it prioritize kith needs over its own?  How social is it?  How well can it learn?  How long does it live?  To what degree will it experiment?

I've got no doubt that things like ladybugs experience the world differently than we do, and might even be on a level of consciousness less than ours (if one can imagine such a thing).

I still believe, for lack of any evidence to the contrary, that they are conscious, however.  We generally believe that we're conscious, and that animals like us (monkeys) are conscious, and you can follow that down the trail through all of the vertebrates.  Although invertebrates vary tremendously, I'd still like to think that, since they have brains (arthropods, anyway) and exhibit pretty complex behavior at times, they probably think (at least in a very abstract manner) like we do, and thus are probably conscious on some level.

Hardly bulletproof logic, but it's again one of those things we'll never know.  We can certainly test for intelligence (with the mirror test, for example), but intelligence and consciousness aren't the same thing.  I'm pretty sure I've watched my pet cockatiel fail the mirror test over and over again, yet I believe she's conscious.  She may not have a concept of self like we do, but she probably does still "experience" the world.
« Last Edit: September 14, 2012, 12:19:45 pm by Telgin »
Logged
Through pain, I find wisdom.

Lagslayer

  • Bay Watcher
  • stand-up philosopher
Re: Would AI qualify for civil rights?
« Reply #148 on: September 14, 2012, 02:03:08 pm »

If there were a poster child for intelligent, conscious invertebrates, it would be octopuses. They can solve problems (often better than chimpanzees), have good short- and long-term memory, and can recognize themselves in a mirror. Some species have even taken up pack hunting because their natural food is becoming more scarce. To put this in perspective, octopuses are usually quite solitary.

Montague

  • Bay Watcher
Re: Would AI qualify for civil rights?
« Reply #149 on: September 14, 2012, 09:34:32 pm »

I'm not sure anybody would create a true AI if it had wide civil rights.

What would be an AI's purpose if it could refuse to do what it was designed to do and nobody could delete it or unplug it or whatever?
Logged