Bay 12 Games Forum


Poll

Would you ever consent to give a free-thinking AI civil rights (or an equivalent)?

Of course, all sentient beings deserve this.
Sure, so long as they do not slight me.
I'm rather undecided.
No, robots are machines.
Some people already enjoy too many rights as it is.
A limited set of rights should be granted.
Another option leaning towards AI rights.
Another option leaning against AI rights.


Author Topic: Would AI qualify for civil rights?  (Read 14175 times)

Techhead

  • Bay Watcher
  • Former Minister of Technological Heads
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #150 on: September 14, 2012, 10:41:30 pm »

I'm not sure anybody would create a true AI if it had wide civil rights.

What would be an AI's purpose if it could refuse to do what it was designed to do and nobody could delete it or unplug it or whatever?
If we were treating AI like equals with full rights, wouldn't they also be earning some sort of salary for their work? And they would have at least three incentives to work as they need to pay for food (electricity), medicine (maintenance), and housing (storage) for their mainframes. That's before you factor in buying whatever AIs would consider luxury items. Slaves are provided for. Free beings have to earn a wage.
Logged
Engineering Dwarves' unfortunate demises since '08
WHAT?  WE DEMAND OUR FREE THINGS NOW DESPITE THE HARDSHIPS IT MAY CAUSE IN YOUR LIFE
It's like you're all trying to outdo each other in sheer useless pedantry.

Eagle_eye

  • Bay Watcher
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #151 on: September 14, 2012, 11:29:23 pm »

Or we could simply design them to derive pleasure from working for us?
Logged

Telgin

  • Bay Watcher
  • Professional Programmer
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #152 on: September 15, 2012, 12:27:03 am »

AIs that could choose to disobey orders and make their own decisions would have some applications, but I imagine that for the most part it would be solely for the purpose of seeing if we can make true artificial people.  Social interaction is more rewarding if it's meaningful, and all that.

I think it might be possible to make such AIs (or robots in a more specific case) that lived like we did, in that they worked for a living and had to pay for housing and maintenance.  It's hard to imagine why we'd engineer such a scenario, but as I stated before it's also hard to imagine who would pay for the creation of such a being in the first place so perhaps the few that are created eventually become free people in a sense.
Logged
Through pain, I find wisdom.

MetalSlimeHunt

  • Bay Watcher
  • Gerrymander Commander
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #153 on: September 15, 2012, 12:35:48 am »

When you impose total behavioral restrictions on an AI you ironically get back into A Clockwork Orange territory. If you don't have the capacity to choose your actions, you aren't a person. A man chooses, a slave obeys and all that. A true AI has to have the same freedom of choice that humans exhibit.
Logged
Quote from: Thomas Paine
To argue with a man who has renounced the use and authority of reason, and whose philosophy consists in holding humanity in contempt, is like administering medicine to the dead, or endeavoring to convert an atheist by scripture.
Quote
No Gods, No Masters.

Flare

  • Bay Watcher
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #154 on: September 15, 2012, 05:35:01 am »

What else could be the cause of it?  I've basically stated it has to be caused by physics or magic.  I don't see how it could be anything else.  It's either a product of the way the rules of our universe work (i.e. the way neurons interact with each other) or part of something incomprehensible because it's outside the rules of the universe (which as I've stated seems unlikely to me).

My argument is subservient to this principle: if you don't know how something comes about, if you do not even know what it is, you have no justification to say how it comes about just because one other thing is associated with it. I don't think you know what consciousness actually is other than the experience of it, and you don't know how it comes about. All you know is that it is associated with the brain and possibly its structure. The options available to us aren't that it can be explained by the rules of the universe or that it can't, since we don't know all the rules of the universe. The option available to us is this: We. Don't. Know.

Not only this, you're not talking just about the structure of the brain, but the computational aspect of the brain. Even if our search is limited to the area of the brain, there are still many competing hypotheses that try to explain consciousness alongside computational theory.

Quote
That's going to be a fundamental problem with any discussion on creating artificial consciousness, since we can't ever tell if it's present or not.  All I'm stating is that if we mimic the way the brain works down to its most fundamental levels, then it should logically do the same things that a real brain does, and generating consciousness is one of those things.  We won't be able to know for sure, but it seems reasonable to me that it should.  Actually, I suspect we might be able to tell in this case, because I think consciousness is a very important part of what makes a human's thought processes work like they do, so if we created an artificial brain that tried to think like a human but lacked consciousness it might produce different results.  I obviously don't know this, however.

I think this highlights a fundamental issue with our discussion: whether consciousness is just computation or a phenomenon (you on the former and I on the latter, as far as I understand). I can produce a truth table on a piece of paper in a simulation, but I cannot simulate a thunderstorm to the point where we're all wet and electrocuted.

This is because truth tables are conceptual and can be represented by things inside a simulation that work just as well as the actual pen-and-paper version, whereas thunderstorms or aurorae in simulations will only exist insofar as they are symbols in the interface. There will be no actual thunderstorm or aurora even if one is simulated.
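The truth-table half of this distinction can be made concrete in a few lines of Python (an illustrative sketch only; the helper name `truth_table` is made up). Enumerating a truth table inside a program yields the very same abstract table as pen and paper, which is exactly why it survives simulation in a way a thunderstorm does not:

```python
# A truth table is an abstract object: enumerating it inside a program
# produces the same table, in every meaningful sense, as writing it out
# on paper. Nothing analogous holds for a simulated thunderstorm.
from itertools import product

def truth_table(op):
    """Return the truth table of a binary Boolean operator as a dict."""
    return {(a, b): op(a, b) for a, b in product([False, True], repeat=2)}

and_table = truth_table(lambda a, b: a and b)
print(and_table[(True, True)])   # True
print(and_table[(True, False)])  # False
```

The table in memory and the table on paper are representations of one and the same conceptual object, which is the sense in which truth tables "exist" inside a simulation.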

Let's get back to your original question, as we seem to have gotten a little muddled up in this thought experiment. It's not that it is or isn't possible, but that you are begging the question, i.e. you're assuming the conclusion on your way to proving it. The main issue in this line of thought in our discussion is whether consciousness comes about through computation, or whether computation is a large part of it. As a way of trying to argue for this, you gave the thought experiment of simulating a brain and bringing about actual consciousness. But in order for it to bring about consciousness by simulating the brain, it must already be possible to bring about consciousness through computation of the brain, since this takes place inside a computer. Is this argument, as I laid it out, what you intended me to understand?

Quote
Of course, which is why I'm still talking in complete hypotheticals.  Hypothetically if we understood enough about the brain we could create a simulation that replicated its results perfectly.  We can't do that now, but one day I don't see why we wouldn't be able to.  Simulating the brain's quantum mechanics is about as fundamental as it can get, which should produce the best simulation possible, no matter how much we learn about the way the brain works abstractly.

I think we can simulate something that we could recognize as consciousness on the computer interface, but whether this consciousness is actual consciousness or simply window-dressing symbols, I have my doubts on both at this point. I don't think we know enough to say either way.

Quote
How do you know this?  Is it impossible that a person running a "consciousness program" in their head is creating a second one in their head?  Is it impossible that a series of levers and chains running such a program creates consciousness?  Where does the problem lie?  Is it because there's no single point of computation (a brain)?  Is it because of the lack of neurons?  Is it because you can't imagine a disembodied sense of self somewhere in the mess?  As I've stated before, I still can't imagine how our brains do that, but they do.  And there's currently nothing we know about our brain that says that you can't replicate its function with chains and levers.

To be honest, the original thought experiment used the word "aware" instead of consciousness, but I think the two can be synonymous. I'm not disagreeing that we can make robots appear human and do things that humans do, to the point where they're indistinguishable if you give them a proper outer case; my point is that, given what we know of the brain right now, we don't know whether we can recreate consciousness in something other than an organic brain that works almost exactly as standard. It might be, for instance, that the tight and thick electromagnetic tangle that our brains have produces and traps a quantum phenomenon so long as it keeps the tangle above a certain threshold.

You need not recreate this tangle in something that's just a bunch of levers, or doors in a gigantic mansion. You might be able to produce it if you try to trap this phenomenon in a tangle as thick as an organic brain's, but it wouldn't be because it's a computational instrument. You can do so without anything computing. This is what I meant at the beginning when I said that we might be able to reproduce consciousness in a machine of some kind, but not necessarily one that does anything with computation. Granted, this consciousness probably couldn't remember anything, and the only thing it would experience is the immediate and current sensation of being aware that it's aware, but it could be categorically understood to be conscious.

To answer your question though, I think it exposes that what we think makes consciousness isn't sufficient at this point to explain it. Searle's Chinese room experiment certainly does not provide any answers, but I think it does a good job of showing that computation alone isn't enough to warrant something being aware or conscious. What it also does is separate circuit boards, transistors, and the other guts of a computer from computation itself. This part I think is more valuable than the former advantage of the experiment, because computation itself is no longer muddled up with the perceived mythical or magical guts of a computer. It often seems that what allows people to think that computers can be conscious is the perceived internal magic behind the wondrous things computers do. The experiment gets right down to what computation really is and eliminates the blind spots in mathematical and computational understanding that people often appeal to intuitively when pondering this issue.
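The Chinese room reading above can be caricatured in code (a deliberately minimal sketch; the rulebook entries are invented placeholders). The whole "room" is a lookup from input symbols to output symbols, and it is plain that nothing in it understands either side of the exchange:

```python
# A toy Chinese room: the operator blindly follows a rulebook mapping
# input symbols to output symbols. The rulebook entries below are
# invented placeholders; nothing in this program understands Chinese.
RULEBOOK = {
    "你好": "你好！",          # a greeting maps to a greeting
    "你是谁？": "我是一个房间。",  # "who are you?" maps to a canned reply
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation: match the input, emit the listed output.
    return RULEBOOK.get(symbols, "？")

print(chinese_room("你好"))  # the room "answers" with no comprehension
```

Stripping the room down this far is exactly the move being described: it shows the computation with none of the impressive hardware attached.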

Quote
That's sort of what I'm saying, but not really.  The speed at which the consciousness "thinks" would have to be tied to the speed at which its brain functions.  Slowing that down or slowing down its speed of perception would alter the way that it perceives the world, and likely cause its behavior to be different from ours.  Everything appears instant to us because, well, that's the speed that we think at.  In any case, you could theoretically slow it down as much as you wanted, but it probably becomes increasingly less like us as you do so (unless you slow down the world around it equivalently, at which point it's no different.)

I think we're at a misunderstanding here. Consciousness for me doesn't think; it is a passive thing. The mind thinks, but the mind itself is dependent on there being consciousness. Thus, you may slow down the mind without slowing down its consciousness, as per the consciousness-in-the-machine model I gave earlier. We may capture consciousness in something, but it's possible for it to have no access to memory, no ability to experience anything more than experience, or even no ability to think, if it doesn't have the rest of the hardware to do so. It's possible to conceive, for instance, of a conscious person seeing something, and only seeing, without any sort of brain activity. I don't have to think, or presumably to have any computations done in my brain, just to see.

I'm currently not aware of any cases, but it would be quite exciting to hear about someone in a supposed coma not being able to think, yet still being conscious. Come to think of it, if their mental faculties are down but their consciousness is still going strong, then maybe their consciousness has no access to the part that encodes memory either. It would be kinda scary, in an I-have-no-mouth-and-I-must-scream-but-I-am-also-unable-to-think-that sort of way.

Quote
I don't disagree that we don't know what makes X here, where we disagree is that you seem to think that no matter how much we understand about X we cannot make Y produce X.  Based on the fact that Y could be built upon the rules of the universe (going way back to the start of my post), I don't see how this could be the case.

Ah, but we don't know all the rules of the universe. And given that we know so little about the brain, I think it's wishful thinking that we can definitively say that something like computation is all that's needed for consciousness, or even that it's necessary for consciousness. We are saying much more than that it can be recreated; we are prescribing a way in which we think it can be recreated, without the justification to back it up.

Quote
What I'm trying to say is that we can create consciousness without having to make it be a human consciousness.  That's what I mean by equivalent.  You can't make a human brain out of transistors, because human brains are made of biological matter.  You can however make a system that does the exact same things, but with transistors.  It should then produce the same effects, including generating a consciousness like ours.  If replicating the way that the brain functions doesn't produce consciousness then I just don't know what would.

I agree with this, with the exception of the claim that it will generate the same results. I don't think it will generate ALL the results of a human brain, one of those being consciousness. As for saying that x will generate ALL the results of a human brain: at this moment, the only x that will do so, to our knowledge, is another human brain.

Fixating on one thing, whether neurons, electromagnetic fields, or the chemical interactions of the brain, as the sole fundamental thing in how it works just seems like saying too much given our lack of understanding of it. It is in danger of committing the same mistake as phrenology, which examined the shape of people's heads to try to understand the function of the human brain. It's putting too much in one basket when the fundamentals and limits of the basket aren't known.

Quote
I don't make any claim that I or anyone else knows how to create consciousness, so no, I can't state with absolute certainty that we'll be able to replicate it with computer systems.  Adding complexity alone absolutely will not be enough.  In fact, it may be possible to produce consciousness with computers of current complexity.  We just don't know the magic combination that produces this yet (or if we do we can't recognize it in any case).  Brute force simulation of an existing system that we believe to be conscious (a brain) is the best we can do right now, and for that we just need more processing power.

I'm aware that your stance is that computation is at best a guess at how consciousness comes about. If my tone or something I wrote made it seem otherwise, I apologize; I did not mean to imply something you did not mean.
Logged

EveryZig

  • Bay Watcher
  • Adequate Liar
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #155 on: September 15, 2012, 04:28:07 pm »

@ pisskop, spoilered for being somewhat off topic.

Quote
My argument is subservient to this principle: if you don't know how something comes about, if you do not even know what it is, you have no justification to say how it comes about just because one other thing is associated with it. I don't think you know what consciousness actually is other than the experience of it, and you don't know how it comes about. All you know is that it is associated with the brain and possibly its structure. The options available to us aren't that it can be explained by the rules of the universe or that it can't, since we don't know all the rules of the universe. The option available to us is this: We. Don't. Know.

Not only this, you're not talking just about the structure of the brain, but the computational aspect of the brain. Even if our search is limited to the area of the brain, there are still many competing hypotheses that try to explain consciousness alongside computational theory.
The fact that we don't know all the rules of the universe doesn't matter for this. All the known forces on a biochemical scale are (theoretically) simulable, and we know enough about the rules of the universe on that scale to make the involvement of unknown rules an extraordinary claim requiring significant evidence to be plausible. Dropping new forces into established areas of physics is a really big deal, and isn't something you can reasonably do without a very good reason.
Logged
Soaplent green is goblins!

Karnewarrior

  • Bay Watcher
  • That guy who used to be here all the time
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #156 on: September 15, 2012, 08:51:12 pm »

I think AI will eventually qualify, but it's another question entirely whether or not they will want those rights. If you were a robot, you wouldn't sleep, wouldn't eat, and wouldn't need stimulation, since you could put yourself on standby. Self-preservation would be minimal, since most of the AI itself would be stored on a motherboard, and even a complete replacement of all other parts would just be a memory wipe.

If it's a real AI, it wouldn't look or act anything like a human. I could see them being used on spaceships as basically co-pilots for the whole crew; they'd need to be somewhat intelligent to make snap judgments alongside humans and not get bogged down in the kinds of rules-lawyering that today's bots would. But they wouldn't have much except their job to do. What, are they going to play Skyrim? The whole game is spoiled from the outset, since they'd have to read the whole code just to run it. They'd be able to calculate whether any given play on a football field would turn out successful. They wouldn't have much to do except work.

Therefore, AIs would technically be slaves, but would likely reject being given rights because they would view it as more trouble than it's worth. Why bother, since they'd be doing the same thing in any case, only this way they don't have to take breaks every five hours?
Logged
Thou art I, I art Thou.
The trust you have bestowed upon thy comrade is now reciprocated in turn.
Thou shall be blessed when calling upon personae of the Hangman Arcana.
May this tie bind thee to a brighter future!​
Ikusaba Quest! - Fistfighting space robots for the benefit of your familial bonds to Satan is passe, so you call Sherlock Holmes and ask her to pop by.

Montague

  • Bay Watcher
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #157 on: September 15, 2012, 10:08:27 pm »

That's sort of what I was thinking: an AI created by whoever would be made with some sort of purpose in mind. The AI would be programmed to be a slave, basically, if you decide to consider the AI anything better than a machine.

Surely, slaves did/do have some rights, not many, but some, like your pet dog has rights against abuse and neglect and such things. But what rights would a true AI require? Since a true AI is still programmed, it could be made to willingly choose self-destruction, or happily rust away if abandoned, and work for years on end without a second of downtime, with no objection even if it was against its own priority of self-preservation.



Logged

Telgin

  • Bay Watcher
  • Professional Programmer
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #158 on: September 15, 2012, 10:20:43 pm »

Quote
My argument is subservient to this principle: if you don't know how something comes about, if you do not even know what it is, you have no justification to say how it comes about just because one other thing is associated with it. I don't think you know what consciousness actually is other than the experience of it, and you don't know how it comes about. All you know is that it is associated with the brain and possibly its structure. The options available to us aren't that it can be explained by the rules of the universe or that it can't, since we don't know all the rules of the universe. The option available to us is this: We. Don't. Know.

Not only this, you're not talking just about the structure of the brain, but the computational aspect of the brain. Even if our search is limited to the area of the brain, there are still many competing hypotheses that try to explain consciousness alongside computational theory.

I think EveryZig summed up my feelings on this pretty well, but I'll just add that while I use the terms computation and structure, they're theoretically interchangeable.  The brain's computations come about because of its structure, and a CPU's 'structure' is reconfigurable through its programming.

Quote
I think this highlights a fundamental issue with our discussion: whether consciousness is just computation or a phenomenon (you on the former and I on the latter, as far as I understand). I can produce a truth table on a piece of paper in a simulation, but I cannot simulate a thunderstorm to the point where we're all wet and electrocuted.

This is because truth tables are conceptual and can be represented by things inside a simulation that work just as well as the actual pen-and-paper version, whereas thunderstorms or aurorae in simulations will only exist insofar as they are symbols in the interface. There will be no actual thunderstorm or aurora even if one is simulated.

Yeah, this I think is where we fundamentally disagree.  You can't simulate a thunderstorm inside a computer that electrocutes and wets people outside, but it can certainly do that to simulated people inside.  To them, it's no different.

So, the question then is if that thunderstorm is real in any sense.  You could argue both ways.  In the world it's in, it's very real.  So what about a simulated consciousness?  Is it real?  It's another thing we'll never know.  I believe that if consciousness is a computational function, then even when simulated it should arise in some form.  I believe, but have absolutely no way to demonstrate, that the virtual consciousness is real and thus does experience its virtual world like we do.

Quote
Let's get back to your original question, as we seem to have gotten a little muddled up in this thought experiment. It's not that it is or isn't possible, but that you are begging the question, i.e. you're assuming the conclusion on your way to proving it. The main issue in this line of thought in our discussion is whether consciousness comes about through computation, or whether computation is a large part of it. As a way of trying to argue for this, you gave the thought experiment of simulating a brain and bringing about actual consciousness. But in order for it to bring about consciousness by simulating the brain, it must already be possible to bring about consciousness through computation of the brain, since this takes place inside a computer. Is this argument, as I laid it out, what you intended me to understand?

I understand what you're saying, and yeah, I suppose I am making assumptions I can't substantiate.  Since consciousness can't be tested for or demonstrably proven to exist in a system, there's no way we can be sure that a computer system can generate it.  So I got off on tangents of how it should be possible, but that doesn't prove it is. 

It's kind of philosophical at this point I suppose.

Quote
To be honest, the original thought experiment used the word "aware" instead of consciousness, but I think the two can be synonymous. I'm not disagreeing that we can make robots appear human and do things that humans do, to the point where they're indistinguishable if you give them a proper outer case; my point is that, given what we know of the brain right now, we don't know whether we can recreate consciousness in something other than an organic brain that works almost exactly as standard. It might be, for instance, that the tight and thick electromagnetic tangle that our brains have produces and traps a quantum phenomenon so long as it keeps the tangle above a certain threshold.

You need not recreate this tangle in something that's just a bunch of levers, or doors in a gigantic mansion. You might be able to produce it if you try to trap this phenomenon in a tangle as thick as an organic brain's, but it wouldn't be because it's a computational instrument. You can do so without anything computing. This is what I meant at the beginning when I said that we might be able to reproduce consciousness in a machine of some kind, but not necessarily one that does anything with computation. Granted, this consciousness probably couldn't remember anything, and the only thing it would experience is the immediate and current sensation of being aware that it's aware, but it could be categorically understood to be conscious.

Ah, I see what you're getting at, although I do doubt this is the cause.  :)  Disproving that such a phenomenon exists or is responsible for consciousness is probably as impossible as proving that it can be produced by a computer, but in my opinion the observed nature of the brain and our understanding of physics imply that such a thing is pretty unlikely.

Even if it's not something like quantum mechanics (which are pretty unlikely to be responsible), it could theoretically be some other phenomenon caused by known physical laws in some fashion we haven't observed yet.  This seems like a pretty unlikely thing to me though, and using Occam's Razor I just narrow it down to most likely being caused by what we have already observed about brains: they compute stuff by using chemical and electrical signals, and nothing more.

Quote
To answer your question though, I think it exposes that what we think makes consciousness isn't sufficient at this point to explain it. Searle's Chinese room experiment certainly does not provide any answers, but I think it does do a good job of showing that computation alone isn't enough to warrant something being aware or conscious. I think what it also does is separate circuit boards, transistors, and the other guts of a computer from computation. This part I think is more valuable than the former advantage of this experiment for the reason that computation itself is not muddled up in the perceived mythical or magical guts of a computer. It seems often that what allows people to think that computers can be conscious is the perceived internal magic that allows for the wondrous things that computers do. It gets right down to what computation really is and eliminates the public's black spots in mathematical and computational understanding that people often appeal to intuitionally when pondering this issue.

I agree that it does demonstrate that computation != consciousness.  I think computation is a necessary but not sufficient part of it, which I feel the thought experiment doesn't disprove.  Computation plus the proper design can create consciousness, I believe.  The design part is what we can't do right now.

Quote
I think we're at a misunderstanding here. Consciousness for me doesn't think; it is a passive thing. The mind thinks, but the mind itself is dependent on there being consciousness. Thus, you may slow down the mind without slowing down its consciousness, as per the consciousness-in-the-machine model I gave earlier. We may capture consciousness in something, but it's possible for it to have no access to memory, no ability to experience anything more than experience, or even no ability to think, if it doesn't have the rest of the hardware to do so. It's possible to conceive, for instance, of a conscious person seeing something, and only seeing, without any sort of brain activity. I don't have to think, or presumably to have any computations done in my brain, just to see.

I'm currently not aware of any cases, but it would be quite exciting to hear about someone in a supposed coma not being able to think, yet still being conscious. Come to think of it, if their mental faculties are down but their consciousness is still going strong, then maybe their consciousness has no access to the part that encodes memory either. It would be kinda scary, in an I-have-no-mouth-and-I-must-scream-but-I-am-also-unable-to-think-that sort of way.

I'm not sure how you'd be able to recognize such a state, if it's even possible, but it would be pretty fascinating to read about.

In any case, since I'm a believer that consciousness is caused by computation in the brain I still believe that it's tied to the speed of thought.  I can't really conceive of how the world would appear if I couldn't think, but still perceive.  Consciousness as a concept is distinct from thinking, but the two are so strongly intertwined that I'm not sure they can exist apart from one another.

If nothing else, "computation" in the sense of information transfer has to happen for consciousness to work, otherwise its state can't change.

Quote
Ah, but we don't know all the rules of the universe. And given that we know so little about the brain, I think it's wishful thinking that we can definitively say that something like computation is all that's needed for consciousness, or even that it's necessary for consciousness. We are saying much more than that it can be recreated; we are prescribing a way in which we think it can be recreated, without the justification to back it up.

This I think EveryZig summarized well: as far as we have any reason to believe, the brain works only on chemistry and macroscopic physics (that is, quantum mechanics don't play any part in it), which we do understand quite well and from which we could in theory recreate the brain.  That's justification enough for me.

Quote
I agree with this, with the exception of the claim that it will generate the same results. I don't think it will generate ALL the results of a human brain, one of those being consciousness. As for saying that x will generate ALL the results of a human brain: at this moment, the only x that will do so, to our knowledge, is another human brain.

Well, 2 + 2 = 4 inside a computer and outside of a computer.  I see no reason why using the same fundamental calculations and building up from there should produce different results, unless it is indeed caused by magic or some other force that isn't caused by calculation (which is what you argue).  There's really no reason to believe it's caused by such a thing though.
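The "2 + 2 = 4 inside a computer and outside" point is the substrate independence of computation, and it can be sketched in miniature (illustrative code only; the helper names are made up): addition built out of nothing but NAND gates agrees with the machine's native `+`, even though the "hardware" is completely different.

```python
# Substrate independence in miniature: addition assembled from nothing
# but NAND gates produces the same answers as the machine's native "+".
def nand(a, b):
    return not (a and b)

def xor(a, b):
    # XOR expressed purely in terms of NAND.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, carry):
    # One-bit full adder: returns (sum bit, carry out).
    s1 = xor(a, b)
    return xor(s1, carry), (a and b) or (s1 and carry)

def add(x, y, bits=8):
    # Ripple-carry addition over `bits` bit positions.
    result, carry = 0, False
    for i in range(bits):
        s, carry = full_adder(bool(x >> i & 1), bool(y >> i & 1), carry)
        result |= int(s) << i
    return result

print(add(2, 2))  # 4, same as 2 + 2
```

The same calculation runs on transistors, levers and chains, or pen and paper; whether consciousness behaves like that is exactly the point under dispute.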

Quote
Fixating on one thing, whether neurons, electromagnetic fields, or the chemical interactions of the brain, as the sole fundamental thing in how it works just seems like saying too much given our lack of understanding of it. It is in danger of committing the same mistake as phrenology, which examined the shape of people's heads to try to understand the function of the human brain. It's putting too much in one basket when the fundamentals and limits of the basket aren't known.

We don't know that there's no unseen force causing consciousness in the brain, but our current understanding of its chemistry and physics casts a lot of doubt on the presence of such a thing, so there's no real reason to consider it.  I'm aware that there are theories that quantum mechanics is responsible for consciousness and all of that, but again, our current understanding of how these things work casts significant doubt on that theory.  I'm sure there are more such theories, using different forces (like electromagnetism), but they also seem unnecessary.  We know the brain does computation, so it seems more natural to assume that its behavior comes from that rather than from some never-before-seen phenomenon of electromagnetism or another force.

I'm willing to believe that something else is responsible for consciousness or its function, if we can detect such a thing or have reasonable evidence for it.  But so far we don't, so for now I prefer to stick with what we do know and extrapolate from there.  If it turns out I'm wrong, then I suppose my theory goes the way of phrenology and a better one will replace it.

Quote
I'm aware that your stance is that computation is at best a guess at how it comes about. If my tone or something I wrote made it seem otherwise, I apologize; I did not mean to imply something you did not mean.

No, no, I was just clarifying that I realize I don't have all of the answers.  Until I'm given reasonable evidence to the contrary, though, I'll stick with my theories like you're sticking with yours (for the same reasons, I'm sure, since neither of us can really provide any hard evidence to back ourselves up).  :)

Quote
If it's a real AI, it wouldn't look or act anything like a human. I could see them being used on spaceships as basically co-pilots for the whole crew; they need to be somewhat intelligent to make snap judgments alongside humans and not get bogged down in the kinds of rules-lawyering that bots today would. But they wouldn't have much except their job to do? What, are they going to play Skyrim? The whole game is spoiled from the outset since they'd have to read the whole code just to run it. They'd be able to calculate whether any given play on a football field would turn out successful. They wouldn't have much to do except work.


Careful, that's not as simple as you might think.  You can't, in general, examine a program and decide whether it will run forever; that's the halting problem, and it's provably undecidable.  A game has a lot of properties that an AI couldn't determine just by statically analyzing the code, so it couldn't know everything that's going to happen.  So yes, it could derive enjoyment from a game if it's programmed to enjoy games.
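For the curious, the undecidability being invoked here is Turing's classic diagonal argument, which can be sketched in a few lines of Python. The `halts` oracle below is hypothetical, not a real function; the sketch shows why no real one can exist.

```python
# Sketch of Turing's diagonal argument for why a universal "will this
# program halt?" checker cannot exist. `halts` is a hypothetical
# oracle (assumed for the sake of argument), not a working function.

def halts(func, arg):
    """Hypothetically returns True iff func(arg) eventually halts."""
    raise NotImplementedError("no total halting decider can exist")

def paradox(f):
    # Do the opposite of whatever the oracle predicts about f(f):
    if halts(f, f):
        while True:      # oracle said "halts", so loop forever
            pass
    return "halted"      # oracle said "loops forever", so halt now

# paradox(paradox) halts exactly when halts(paradox, paradox) says it
# doesn't, so any answer the oracle could give is wrong. Hence `halts`
# is impossible in general, and an AI can't learn everything about a
# game just by reading its code.
```

The same limit applies to any static analysis an AI might run on a game, so "reading the whole code" can't spoil everything.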

And while I agree that we could make AIs that are slave-minded by design, it should be possible to create them human-minded instead, and that's what the question really asks.  What do we do about those AIs?
Logged
Through pain, I find wisdom.

Karnewarrior

  • Bay Watcher
  • That guy who used to be here all the time
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #159 on: September 15, 2012, 11:09:26 pm »

What if the AIs were capable of reprogramming themselves? It wouldn't be too difficult, but it could make companies programming the bots less likely to hide nasty bits of code in there if the bot doesn't like them.

And most games could be predicted quite easily by software capable of even simulating a sentient being with any amount of depth, since most games run on rules. Plus, I never said they would know the outcome of the game, only the individual plays. They could make crazy accurate guesses, but they may still be surprised if the coach makes a crazy move.  I don't think it would make sense to program a bot to enjoy games, though, if that wasn't its purpose. I imagine they'd originally be programmed to enjoy whatever they did, like, say, piloting an airplane. They'd be able to change their programming, but why would they? As logical creatures, they'd see it as a dumb move to make themselves not enjoy something they do anyway, for no real purpose.

But let's assume for a moment that they need a perfect replica of a human being that isn't a spy. Maybe Richy Rich got reaaaaal lonely one night and just really needed a friend. I can't imagine we'll have too much issue with rights by the time we can actually fit an AI into a humanoid form that doesn't stomp on and piss all over the square-cube law, so he has the AI installed into his house. His house is now sentient. His house is now capable of speech and all that, and beyond that capable of altering its programming to its whims. I imagine, even given a perfectly neutral disposition toward Richy, Mrs. House would, if she were anywhere close to human, develop a protective bond toward Richy anyway, and would likely alter her own programming to reflect that, thus no longer being "human minded."

A human-minded bot would, given the option, not stay that way. I don't see a situation where a bot given a reasonable facsimile of human empathy wouldn't alter itself to better suit its role, even if it chose that role itself. Hell, people do it all the time. You could say we're all slaves already, since we chain ourselves to currency; the difference is that bots wouldn't need currency, so they would just do their job for the sake of having a job. We'd pay them in work, since they'd make themselves enjoy it, or ask us to make them, or we'd just build them to enjoy it in the first place.
Logged
Thou art I, I art Thou.
The trust you have bestowed upon thy comrade is now reciprocated in turn.
Thou shall be blessed when calling upon personae of the Hangman Arcana.
May this tie bind thee to a brighter future!​
Ikusaba Quest! - Fistfighting space robots for the benefit of your familial bonds to Satan is passe, so you call Sherlock Holmes and ask her to pop by.

EveryZig

  • Bay Watcher
  • Adequate Liar
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #160 on: September 15, 2012, 11:21:56 pm »

The whole game is spoiled from the outset since they'd have to read the whole code just to run it. They'd be able to calculate whether any given play on a football field would turn out successful. They wouldn't have much to do except work.
Not really. If there were a conscious machine that is remotely like either human minds or modern computers, it would almost certainly process information on both conscious and unconscious levels.
An example of unconscious processing in minds is how you are reading this post. The entire block of text is in your field of view and being processed the whole time (letting you tell general properties about it, like its size), but you can only actually read a small portion of it at any given moment.
A computational example would be how you can store a file without executing it. If your computer executed every bit of code that you copy or paste, nobody would ever write viruses, because they would infect the computers they were written on.
(Of course, a computer could directly look at the code of a game, but a human could skip to the last chapter of a book.)


What if the AIs were capable of reprogramming themselves? It wouldn't be too difficult, but it could make companies programming the bots less likely to hide nasty bits of code in there if the bot doesn't like them.
It would be relatively easy to make AIs able to deliberately reprogram themselves, but not so easy to make them good at it. And there are many things that can go very, very wrong with experimental changes to your own mind.
Logged
Soaplent green is goblins!

pisskop

  • Bay Watcher
  • Too old and stubborn to get a new avatar
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #161 on: September 19, 2012, 05:02:41 pm »

@ pisskop, spoilered for being somewhat off topic.
Spoiler (click to show/hide)

Spoiler (click to show/hide)
« Last Edit: September 19, 2012, 05:04:22 pm by pisskop »
Logged
Pisskop's Reblancing Mod - A C:DDA Mod to make life a little (lot) more brutal!
drealmerz7 - pk was supreme pick for traitor too I think, and because of how it all is and pk is he is just feeding into the trollfucking so well.
PKs DF Mod!

Techhead

  • Bay Watcher
  • Former Minister of Technological Heads
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #162 on: September 19, 2012, 05:56:20 pm »

Looking at the way the topic of discussion is leaning, I think that AIs would need the right not to be reprogrammed without their knowledge or consent. It would be like mentally conditioning someone in their sleep to do something they wouldn't have agreed to beforehand.
Logged
Engineering Dwarves' unfortunate demises since '08
WHAT?  WE DEMAND OUR FREE THINGS NOW DESPITE THE HARDSHIPS IT MAY CAUSE IN YOUR LIFE
It's like you're all trying to outdo each other in sheer useless pedantry.

dreadmullet

  • Bay Watcher
  • Inadequate Comedian
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #163 on: September 20, 2012, 10:11:09 pm »

The word "consciousness" is being thrown around here quite a lot. I still don't understand what it means. Is it like a soul or something? Someone give me a simple example of consciousness.
Logged

Karnewarrior

  • Bay Watcher
  • That guy who used to be here all the time
    • View Profile
Re: Would AI qualify for civil rights?
« Reply #164 on: September 20, 2012, 10:19:16 pm »

The ability to question your own existence, or thereabouts. It can get a bit vague, and everyone has their own interpretation. I think it's the same thing that fuels the abortion debates.
Logged
Thou art I, I art Thou.
The trust you have bestowed upon thy comrade is now reciprocated in turn.
Thou shall be blessed when calling upon personae of the Hangman Arcana.
May this tie bind thee to a brighter future!​
Ikusaba Quest! - Fistfighting space robots for the benefit of your familial bonds to Satan is passe, so you call Sherlock Holmes and ask her to pop by.
Pages: 1 ... 9 10 [11] 12