Maybe I haven't been as clear as I should have been. I have conceded that the brain has something to do with consciousness, but to say that we only have to replicate the neural interconnections of the brain is, I think, jumping the gun given what we know about consciousness and our brain structure. The interconnectivity of the neurons might be entirely subsidiary to how consciousness works. Reducing consciousness to one part of the structure where it's housed doesn't seem to me to be a particularly good idea, especially when there are other, competing explanations.
Regarding the claim that it is either the very structure of the brain that causes consciousness or it's magic, I think you're posing a false dichotomy. I don't think there is sufficient understanding of how neurons and computational systems work for us to equate them to each other at the level of human brain function. At the level where information is passed around and saved, sure, but much beyond that I think we're simply going on hypothesis. Beyond these options, I think there is a more justified one: we don't know what causes consciousness, or even what it is.
What else could be the cause of it? I've basically stated it has to be caused by physics or magic. I don't see how it could be anything else. It's either a product of the way the rules of our universe work (i.e. the way neurons interact with each other) or part of something incomprehensible because it's outside the rules of the universe (which, as I've stated, seems unlikely to me).
I don't disagree that we could simulate every single neuron in a brain; I am skeptical that there would be any consciousness at all.
I still think this argument begs the question; to beg the question is to assume the conclusion before proving it. In your example, this hypothetical world where we're simulating every single neuron in the brain, it's already possible to tell whether or not there is consciousness in that computer model (rather than, say, merely the conditions in which consciousness would arise). This means consciousness is stipulated to be a computational thing even before we detect it. Whether or not we detect consciousness at that point is irrelevant, because it's already a stated fact in that hypothetical world.
That's going to be a fundamental problem with any discussion on creating artificial consciousness, since we can't ever tell if it's present or not. All I'm stating is that if we mimic the way the brain works down to its most fundamental levels, then it should logically do the same things that a real brain does, and generating consciousness is one of those things. We won't be able to know for sure, but it seems reasonable to me that it should. Actually, I suspect we might be able to tell in this case, because I think consciousness is a very important part of what makes a human's thought processes work like they do, so if we created an artificial brain that tried to think like a human but lacked consciousness it might produce different results. I obviously don't know this, however.
If, for the sake of argument, it fails to produce consciousness, the answer is not automatically magic. A simulation can only show the limits of what the programmer understands of the world. If we let someone from the 1200s build a simulation of how something in the world works, the simulation they produce is going to stand in stark contrast to the ones we make.
We just don't know enough about consciousness to say definitively that this simulation would produce consciousness, rather than us merely recognizing the conditions in which consciousness would arise.
Of course, which is why I'm still talking in complete hypotheticals. Hypothetically if we understood enough about the brain we could create a simulation that replicated its results perfectly. We can't do that now, but one day I don't see why we wouldn't be able to. Simulating the brain's quantum mechanics is about as fundamental as it can get, which should produce the best simulation possible, no matter how much we learn about the way the brain works abstractly.
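For a sense of scale, here's a toy sketch of what "simulating every neuron" looks like at a far coarser level than quantum mechanics: a leaky integrate-and-fire network in Python. Every number here (neuron count, constants, random weights) is an arbitrary illustration value, not a claim about how real brain models work:

```python
import numpy as np

# Toy leaky integrate-and-fire network. A human brain has roughly
# 8.6e10 neurons, so this sketch is off by about eight orders of
# magnitude, and real models are far richer per neuron besides.
N = 1000
leak, threshold = 0.9, 1.0
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=(N, N))  # random synaptic strengths
v = np.zeros(N)                               # membrane potentials

for step in range(100):
    spikes = v >= threshold                   # which neurons fire this tick
    v[spikes] = 0.0                           # reset the neurons that fired
    # decay, add input from the neurons that fired, plus background noise
    v = leak * v + weights @ spikes.astype(float) + rng.normal(0.0, 0.1, size=N)
```

Scaling a loop like this up is "just" a processing-power problem; whether anything in it is conscious is the question we're arguing about.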
If we were to go back to the Chinese room thought experiment, it would go like this: a cursed immortal inside a time dilation room, where inside time passes much more quickly than outside time, is being handed a huge stack of information to compute. He understands none of it, because it is written in symbols he doesn't understand, but he can nevertheless produce output thanks to a basic instruction manual about how to deal with the symbols. The pieces of paper this person churns out will appear, to the people outside the box, as if the computational system inside the room is simulating consciousness (they're accountants; they can read large piles of paperwork quite quickly). Nowhere in the room does consciousness arise. Replace the person with a system of levers and chains and you get the same result. Replacing it with a computer would be no different. The paper that comes out of the room is not conscious, the thing inside the room need not be conscious, and neither is the instruction manual. The three of these put together do not bring into existence a being that is conscious.
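The room's procedure, reduced to a toy sketch in Python (the slips and the rulebook entries are invented purely for illustration; the point is only that every step is mechanical symbol-matching):

```python
# The "instruction manual" as a lookup table. Any purely syntactic
# mapping from input symbols to output symbols would do.
RULEBOOK = {
    "?greeting": "!greeting-reply",
    "?weather": "!weather-reply",
}

def room(incoming_slips):
    # The person (or the levers and chains, or a computer): match each
    # slip against the manual and copy out the prescribed response.
    # No step requires understanding what any symbol means.
    return [RULEBOOK.get(slip, "!no-rule") for slip in incoming_slips]

print(room(["?greeting", "?weather", "?gibberish"]))
# -> ['!greeting-reply', '!weather-reply', '!no-rule']
```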
Nowhere in the room does consciousness arise.
How do you know this? Is it impossible that a person running a "consciousness program" in their head is creating a second one in their head? Is it impossible that a series of levers and chains running such a program creates consciousness? Where does the problem lie? Is it because there's no single point of computation (a brain)? Is it because of the lack of neurons? Is it because you can't imagine a disembodied sense of self somewhere in the mess? As I've stated before, I still can't imagine how our brains do that, but they do. And there's currently nothing we know about our brain that says that you can't replicate its function with chains and levers.
Do you equate the mind to consciousness? You can slow a mind down, but I'm not sure you can slow down consciousness. You might be able to slow down the realization that the entity is experiencing things, but the experiencing of things seems to be instant, from my understanding of it. In any case, I don't know if we can confidently say that speed has anything to do with it coming into existence, if that is what you're saying at all (it seems I'm quite bad at reading comprehension).
That's sort of what I'm saying, but not really. The speed at which the consciousness "thinks" would have to be tied to the speed at which its brain functions. Slowing that down or slowing down its speed of perception would alter the way that it perceives the world, and likely cause its behavior to be different from ours. Everything appears instant to us because, well, that's the speed that we think at. In any case, you could theoretically slow it down as much as you wanted, but it probably becomes increasingly less like us as you do so (unless you slow down the world around it equivalently, at which point it's no different).
Really, this all just came from a scenario I posed to myself: what happens if you "single step" consciousness like a computer processor in a debugger? If it only experiences the world one "instant" at a time, does it still exist, or does the consciousness break down? That's a tricky one, since it's pretty hard to imagine.
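The "single step" scenario, as a rough sketch in Python (the advance function here is a stand-in I made up; what one "instant" of consciousness actually consists of is exactly what we don't know):

```python
def advance(state):
    """One indivisible update of the simulated system. A stand-in:
    a real brain simulation would do enormously more per tick."""
    return state + 1

def run(state=0, ticks=5, single_step=False):
    for _ in range(ticks):
        state = advance(state)
        if single_step:
            # Pause like a debugger. Notably, nothing *inside* the
            # simulation can observe this gap: no tick ever sees the
            # wall-clock time that passes between ticks.
            input(f"state={state!r} -- press Enter for the next instant")
    return state

run(single_step=True)
```

That invisibility of the pause from the inside is what makes it hard to say whether single-stepping would break the consciousness down or leave it untouched.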
I think the issue is more this: we don't know what makes X; in fact we know very, very little about it. Yet the claim is that if we make process Y complex enough, it will reproduce X. I think this argument would only work if we had some understanding of what X is, never mind what produces it, and in this specific circumstance we have very little of either.
I don't disagree that we don't know what makes X here; where we disagree is that you seem to think that no matter how much we understand about X, we cannot make Y produce X. Based on the fact that Y could be built upon the rules of the universe (going way back to the start of my post), I don't see how this could be the case.
You also missed the point; in fact, you highlighted mine. Just because you can make something increasingly complex, it does not mean that it will achieve X in the future. For an argument from continuing complexity to some future phenomenon to work, the argument already has to explain how that phenomenon will be achieved, if not physically, then at least theoretically, step by step.
What I'm trying to say is that we can create consciousness without having to make it be a human consciousness. That's what I mean by equivalent. You can't make a human brain out of transistors, because human brains are made of biological matter. You can however make a system that does the exact same things, but with transistors. It should then produce the same effects, including generating a consciousness like ours. If replicating the way that the brain functions doesn't produce consciousness then I just don't know what would.
I don't make any claim that I or anyone else knows how to create consciousness, so no, I can't state with absolute certainty that we'll be able to replicate it with computer systems. Adding complexity alone absolutely will not be enough. In fact, it may be possible to produce consciousness with computers of current complexity; we just don't know the magic combination that produces it yet (or, if we do, we can't recognize it in any case). Brute-force simulation of an existing system that we believe to be conscious (a brain) is the best we can do right now, and for that we just need more processing power.