My argument rests on this principle: if you don't know how something comes about, and you don't even know what it is, you have no justification for saying how it comes about just because one other thing is associated with it. I don't think you know what consciousness actually is other than the experience of it, and you don't know how it comes about. All you know is that it is associated with the brain and possibly its structure. The options available to us aren't "it can be explained by the rules of the universe" or "it can't", since we don't know all the rules of the universe. The option available to us is this: We. Don't. Know.
Not only that, you're talking not just about the structure of the brain, but about the computational aspect of it. Even if our search is limited to the brain, there are still many competing hypotheses that try to explain consciousness alongside computational theory.
I think EveryZig summed up my feelings on this pretty well, but I'll just add that while I use the term computation and structure, they're theoretically interchangeable. The brain's computations come about because of its structure, and a CPU's 'structure' is reconfigurable through its programming.
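As a loose illustration of that interchangeability (a toy sketch of the stored-program idea, not a claim about how real CPUs work; the names are mine): the same fixed dispatch loop below behaves like different "circuits" depending on the program data it's handed, so its effective structure is reconfigured by software alone.

```python
# A fixed "machine" whose effective structure is set by its program.
# Changing the program list rewires what the machine computes, without
# touching the dispatch loop itself.
def run(program, x):
    ops = {
        "inc": lambda v: v + 1,   # increment
        "dbl": lambda v: v * 2,   # double
        "neg": lambda v: -v,      # negate
    }
    for op in program:
        x = ops[op](x)
    return x

print(run(["inc", "dbl"], 3))  # configured one way: (3 + 1) * 2 = 8
print(run(["dbl", "neg"], 3))  # reprogrammed: -(3 * 2) = -6
```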
I think this highlights a fundamental issue with our discussion: whether consciousness is just computation, or a phenomenon (with you on the former and me on the latter, as far as I understand). I can produce a truth table on a piece of paper in a simulation, but I cannot simulate a thunderstorm to the point where we're all wet and electrocuted.
This is because truth tables are conceptual and can be represented by things inside a simulation that work just as well as the actual pen-and-paper version, whereas thunderstorms or aurorae in simulations will only exist insofar as they are symbols in the interface. There will be no actual thunderstorm or aurora borealis even if one is simulated.
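To make the truth-table point concrete (a minimal sketch; the function names are mine): a truth table enumerated by a program is the same abstract mapping as one written out with pen and paper, which is the sense in which the "simulated" truth table really is a truth table.

```python
from itertools import product

# Enumerate the truth table of a boolean function. The resulting mapping
# is the same abstract object whether it is computed here or worked out
# by hand -- the program's table is not a mere picture of one.
def truth_table(fn, n_inputs):
    return {bits: fn(*bits) for bits in product((False, True), repeat=n_inputs)}

and_table = truth_table(lambda a, b: a and b, 2)
for bits, out in and_table.items():
    print(bits, "->", out)
```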
Yeah, this I think is where we fundamentally disagree. You can't simulate a thunderstorm inside a computer that electrocutes and wets people outside, but it can certainly do that to simulated people inside. To them, it's no different.
So, the question then is if that thunderstorm is real in any sense. You could argue both ways. In the world it's in, it's very real. So what about a simulated consciousness? Is it real? It's another thing we'll never know. I believe that if consciousness is a computational function, then even when simulated it should arise in some form. I believe, but have absolutely no way to demonstrate, that the virtual consciousness is real and thus does experience its virtual world like we do.
Let's get back to your original question, as we seem to have gotten a little muddled up in this thought experiment. It's not that it is or isn't possible, but that you are begging the question, i.e., assuming the conclusion on your way to proving it. The main issue in this line of our discussion is whether consciousness comes about through computation, or whether computation is a large part of it. To argue for this, you gave the thought experiment of simulating a brain and thereby bringing about actual consciousness. But for simulating the brain to bring about consciousness, it must already be possible to bring about consciousness through computation, since the simulation takes place inside a computer. Is this argument, as I've laid it out, what you intended me to understand?
I understand what you're saying, and yeah, I suppose I am making assumptions I can't substantiate. Since consciousness can't be tested for or demonstrably proven to exist in a system, there's no way we can be sure that a computer system can generate it. So I got off on tangents of how it should be possible, but that doesn't prove it is.
It's kind of philosophical at this point I suppose.
To be honest, the original thought experiment used the word "aware" instead of "consciousness", but I think the two can be synonymous. I'm not disagreeing that we can make robots appear human and do the things humans do, to the point where they're indistinguishable if you give them a proper outer case. I am disagreeing because, as far as we know of the brain right now, we don't know whether we can recreate consciousness in anything other than an organic brain that works almost exactly as standard. It might be, for instance, that the tight and thick electromagnetic tangle our brains have produces and traps a quantum phenomenon so long as it keeps the tangle above a certain threshold.
You need not recreate this tangle in something that's just a bunch of levers, or doors in a gigantic mansion. You might be able to produce it if you trap this phenomenon in a tangle as thick as an organic brain's, but it wouldn't be because that tangle is a computational instrument; you could do so without anything computing. This is what I meant at the beginning when I said that we might be able to reproduce consciousness in a machine of some kind, but not necessarily one that does anything with computation. Granted, this consciousness probably couldn't remember anything, and the only thing it would experience is the immediate, current sensation of being aware that it's aware, but it could be categorically understood to be conscious.
Ah, I see what you're getting at, although I do doubt this is the cause.
Disproving that such a phenomenon exists or is responsible for consciousness is probably as impossible as proving that consciousness can be produced by a computer, but in my opinion the brain's observed nature and our understanding of physics imply that such a thing is pretty unlikely.
Even if it's not something like quantum mechanics (which is pretty unlikely to be responsible), it could theoretically be some other phenomenon caused by known physical laws in some fashion we haven't observed yet. That seems pretty unlikely to me, though, and applying Occam's Razor I narrow it down to what we have already observed about brains: they compute stuff using chemical and electrical signals, and nothing more.
To answer your question, though: I think it exposes that what we think makes consciousness isn't sufficient at this point to explain it. Searle's Chinese Room experiment certainly doesn't provide any answers, but I think it does a good job of showing that computation alone isn't enough to make something aware or conscious. It also separates computation from circuit boards, transistors, and the other guts of a computer. I find this the more valuable contribution, because it keeps computation itself from getting muddled up with the perceived mythical or magical innards of a computer. It often seems that what allows people to think computers can be conscious is the imagined internal magic behind the wondrous things computers do. The experiment gets right down to what computation really is, and eliminates the blind spots in mathematical and computational understanding that people often appeal to intuitively when pondering this issue.
I agree that it demonstrates that computation != consciousness. I think computation is a necessary but not sufficient part of it, which I feel the thought experiment doesn't disprove. Computation, plus the proper design, can create consciousness, I believe. The design part is what we can't do right now.
I think we're at a misunderstanding here. Consciousness, for me, doesn't think; it is a passive thing. The mind thinks, but the mind itself depends on there being consciousness. Thus, you may slow down the mind without slowing down its consciousness, as per the consciousness-in-the-machine model I gave earlier. We may capture consciousness in something, yet it's possible for it to have no access to memory, no ability to experience anything more than experience, or even to think, if it doesn't have the rest of the hardware to do so. It's conceivable, for instance, to have a conscious person seeing something, and only seeing, without any other sort of brain activity. I don't have to think, or presumably have any computations done in my brain, just to see.
I'm currently not aware of any cases, but it would be quite exciting to hear about someone in a supposed coma being unable to think yet still conscious. Come to think of it, if their mental faculties are down but their consciousness is still going strong, then maybe their consciousness has no access to the part that encodes memory either. It would be kinda scary, in an I-have-no-mouth-and-I-must-scream-but-I-am-also-unable-to-think-that sort of way.
I'm not sure how you'd be able to recognize such a state, if it's even possible, but it would be pretty fascinating to read about.
In any case, since I'm a believer that consciousness is caused by computation in the brain I still believe that it's tied to the speed of thought. I can't really conceive of how the world would appear if I couldn't think, but still perceive. Consciousness as a concept is distinct from thinking, but the two are so strongly intertwined that I'm not sure they can exist apart from one another.
If nothing else, "computation" in the sense of information transfer has to happen for consciousness to work, otherwise its state can't change.
Ah, but we don't know all the rules of the universe. And given that we know so little about the brain, I think it's wishful thinking to definitively say that something like computation is all that's needed for consciousness, or even that it's necessary for consciousness. We are saying much more than that it can be recreated; we are prescribing a way in which we think it can be recreated, without the justification to back it up.
This, I think, was summarized by EveryZig: as far as we have any reason to believe, the brain works only on chemistry and macroscopic physics (that is, quantum mechanics plays no part in it), which we do understand quite well and from which we could, in theory, recreate the brain. That's justification enough for me.
I agree with this, with the exception that it will generate the same results. I don't think it will generate ALL the results of a human brain, one of those results being consciousness. At this moment, the only x that we know will generate ALL the results of a human brain is another human brain.
Well, 2 + 2 = 4 inside a computer and outside of a computer. I see no reason why using the same fundamental calculations and building up from there should produce different results, unless it is indeed caused by magic or some other force that isn't caused by calculation (which is what you argue). There's really no reason to believe it's caused by such a thing though.
Picking one thing, whether neurons, electromagnetic fields, or the chemical interactions of the brain, as the sole fundamental explanation of how it works just seems like saying too much given our lack of understanding. It is in danger of repeating the mistake of phrenology, which examined the shape of people's heads to try to understand the function of the human brain. It's putting too much in one basket when the fundamentals and limits of the basket aren't known.
We don't know that there's no unseen force causing consciousness in the brain, but our current understanding of its chemistry and physics casts a lot of doubt on the presence of such a thing, so there's no real reason to consider it. I'm aware that there are theories of quantum mechanics being responsible for consciousness and all that, but again, our current understanding of how these things work casts significant doubt on that theory. I'm sure there are more such theories, using different forces (like electromagnetism), but they also seem unnecessary. We know the brain does computation, so it seems more natural to assume that its behavior comes from that rather than from some never-before-seen phenomenon of electromagnetism or another force.
I'm willing to believe that something else is responsible for consciousness or its function if we can detect such a thing or have reasonable evidence for it. So far we don't, so for now I prefer to stick with what we do know and extrapolate from there. If it turns out I'm wrong, then I suppose my theories go the way of phrenology and a better theory replaces them.
I'm aware that your stance is that computation is at best a guess at how it comes about. If my tone or something I wrote made it seem otherwise, I apologize; I did not mean to imply something you did not mean.
No, no, I was just clarifying that I realize I don't have all of the answers. Until I'm given reasonable evidence to the contrary though, I'll stick with my theories like you're sticking with yours (for the same reasons I'm sure, since neither of us can really provide any hard evidence to back ourselves up).
If it's a real AI, it wouldn't look or act anything like a human. I could see them being used on spaceships as co-pilots for the whole crew; they'd need to be somewhat intelligent to make snap judgments alongside humans and not get bogged down in the kind of rules-lawyering that today's bots would. But they wouldn't have much to do except their job. What, are they going to play Skyrim? The whole game would be spoiled from the outset, since they'd have to read all the code just to run it. They'd be able to calculate whether any given play on a football field would succeed. They wouldn't have much to do except work.
Careful, that's not as simple as you might think. You can't examine an arbitrary program and decide whether it will run forever, for example; that's the halting problem, and it can't be done. A game has a lot of properties that an AI can't discover just by statically analyzing the code. So yes, it could derive enjoyment from playing if it's programmed to enjoy games.
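For the curious, the reason this can't be done is Turing's diagonalization argument. Here's a sketch (the names `halts` and `contrary` are mine, and `halts` is only assumed to exist for the sake of contradiction):

```python
def halts(program, arg):
    """Assumed, for contradiction, to return True iff program(arg) halts.
    No such total decider can actually be written."""
    raise NotImplementedError("no general halting decider exists")

def contrary(program):
    # Built to defeat the decider: do the opposite of whatever it predicts.
    if halts(program, program):
        while True:       # predicted to halt? then loop forever
            pass
    return "halted"       # predicted to loop? then halt immediately

# Feeding contrary to itself yields the contradiction:
# contrary(contrary) halts exactly when halts(contrary, contrary) is False,
# so no correct halts() can exist. An AI is no exception to this theorem.
```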
And while I agree that we could make AIs that are slave-minded by design, it should be possible to create them human-minded instead, and that's what the question really asks. What do we do about those AIs?