The same argument applies to neurons. A single neuron isn't conscious. Ten neurons aren't conscious. At what point does it suddenly become conscious? That's the issue I have with the idea that the structure of the brain creates consciousness: there's no clear dividing point, no fundamental law we know of that says consciousness exists above some threshold.
I think there are some steps missing in your deduction. We know that the brain has something to do with consciousness, but I don't think we can confidently say what it is about the brain that causes it. The argument seems to be: wherever there are brains there is consciousness, therefore one must be caused by the other, or at least the physical structure of the brain itself must be the cause and origin of consciousness. I don't think that follows.
In fact I think this point sums up the whole discussion pretty well. We don't know what causes consciousness, so I don't think we should say computers will definitely gain consciousness at some point in their development, though of course I take the stronger position that they never will. We could build a machine that passes seamlessly as a human being living in human society, but whatever means we use to do so won't necessarily reproduce every facet of human existence, in this particular instance consciousness. It is possible to make a human-like object from nothing but metal, string, and a power box. Even if the central ball of string that handles its inputs and outputs works on the principles of binary, that doesn't mean the ball of string in its head will contain consciousness. Likewise, no matter how big you make that ball of string, I (maybe not you) would be understandably hard-pressed to claim that consciousness would arise from a clump of knots.
Think of it like this: any physical system can be modeled and simulated by a computer, given a sufficient understanding and enough processing power. That goes all the way down to quantum mechanics, which is probably well below the level of detail necessary to simulate consciousness. If nothing else, surely it's possible to simulate the neurons in a human brain, and thus the brain and all phenomena associated with it, right? If not, why not?
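Just to make the idea concrete, here's a rough sketch (in Python, with made-up parameter values) of what "simulating a neuron" usually means in practice: a leaky integrate-and-fire model, one of the simplest abstractions used in computational neuroscience. It makes no claim about consciousness; it only shows that a neuron's dynamics can be reduced to arithmetic on inputs and state.

```python
# A minimal sketch of a leaky integrate-and-fire neuron.
# All parameter values are illustrative, not measured biology.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Integrate membrane voltage over time; emit a spike when it crosses threshold."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        # Leak back toward resting potential, driven by the injected current.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_threshold:
            spike_times.append(step * dt)  # record spike time in ms
            v = v_reset                    # reset after spiking
    return spike_times

# Constant 2.0 (arbitrary units) input for 100 ms produces a regular spike train.
times = simulate_lif([2.0] * 1000)
print(len(times), "spikes; first at", round(times[0], 1), "ms")
```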
In theoretical mathematics, there is an important distinction made when you create a symbol for something: the symbol or name that represents the object you're describing, and the thing you're actually describing. All in all, I think you're begging the question. The issue I'm criticizing is the claim that computers can be conscious. Saying this is entirely possible because we can simply model it assumes it is already possible before demonstrating it. How can you confidently say you can simulate consciousness when you have no idea how it arises? For someone to simulate consciousness in a model of a brain, that person must already know enough about how the brain produces consciousness to do so; in other words, it must already be assumed possible in the hypothetical world this person lives in. QED, you're begging the question.
On the point of whether we would recognize it, I think you're missing the point. If such an event did take place, would you concede that billions of people phoning each other up in this way constitute a consciousness? The issue isn't whether the people involved recognize it as consciousness at that moment; rather, the question is posed to you, whether you would recognize it as consciousness, the author's intention being that it is ridiculous if you DO say this method will result in a consciousness coming into being. On top of this, there's the implication that if you do recognize this as something that produces consciousness, then shouldn't your computer be subject to the same view? If not full human rights, then perhaps the same rights as a dog, or at least livestock.
As to whether or not my computer deserves rights... well, I don't think so. Even if it were conscious, that alone wouldn't imply it deserves civil rights. I think it's plausible to conceive of a conscious computer system that has no individuality and no ability to suffer. If the system doesn't suffer negative emotion and nothing unique is lost when it is destroyed, there's probably no reason to protect it with civil rights. It would be a pretty silly thing to create, but it should in theory be possible. If my computer is conscious on any level, I believe it would be like that: no reason to protect it, since it could be completely replaced without loss (theoretically) and wouldn't suffer in its destruction.
I don't think you understand what consciousness endows a being with. Compare a system that isn't conscious and is programmed to ask for rights with one that is conscious and is programmed to ask for rights. The latter will really mean it. It will mean it in the same way you and I mean it when we ask for rights, regardless of whether it is intellectually capable of anything else.
To me there is a definite separation between the computing and the conscious parts of a human mind, to the point where I think that in a computer system the consciousness itself would probably be a separate layer or program. In effect it's not so different from any other infinitely looping program, in that it deals with inputs and produces outputs. How you create subjective experience there is the hard part. How do you actually get an entity to reside behind the cameras and auditory sensors of a robot? I'm not sure, and nobody else is either. Our current software development strategies and systems are probably insufficient, but in theory that shouldn't stop us.
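For what it's worth, here's roughly what I mean by an "infinitely looping program that deals with inputs and produces outputs." It's only a toy sketch with hypothetical placeholder functions (read_sensors, decide, act), not a claim about how the conscious layer would actually be written.

```python
import time

def read_sensors():
    """Stand-in for camera/microphone input; returns a snapshot of the world."""
    return {"timestamp": time.time()}

def decide(percept, state):
    """Stand-in for whatever processing turns inputs and internal state into an action."""
    state["ticks"] = state.get("ticks", 0) + 1
    return "tick %d at %.2f" % (state["ticks"], percept["timestamp"])

def act(action):
    """Stand-in for the motor/output side."""
    print(action)

def main_loop(cycles=3):
    state = {}
    for _ in range(cycles):        # a real agent would run `while True:`
        percept = read_sensors()   # inputs in
        action = decide(percept, state)
        act(action)                # outputs out
        time.sleep(0.1)

main_loop()
```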
Could you explain what it is, in theory, that supports your claim? I don't see anything in computing theory that suggests consciousness can be produced out of a series of transistors (nor, for that matter, have I read any compelling explanation of how one arises out of a clump of neurons).
You just misunderstood what I was trying to say (I probably could have worded it better). I was talking about halving the computational speed, not the actual "level" of consciousness. The point I was trying to make is that as you slow down the speed of computation, it looks less and less like consciousness. Getting back to the telephone scenario, it runs so slowly that perceiving any consciousness there would be pretty tough, and whatever consciousness there is would likely be quite different from what we'd expect, simply because its experiences would be so much slower. In a way, I often wonder whether consciousness is an illusion of sorts, brought on by the apparently continuous nature of our perception. That's hardly the whole puzzle, but maybe it's a small part of it.
I would disagree here, on the basis that you don't seem to be talking about consciousness any more when you talk about computational speed. Mathematical computation, to the best of my knowledge, is just dealing with inputs and outputs. You could theoretically slow down everything in a brain and predict what the person will do in the next few seconds from the positions of the neurons and the chemicals in them, but I don't think you can actually see consciousness, i.e., the person experiencing doing these things.
On top of this, I don't see why speeding things up would generate any appreciable difference. To be frank, you're saying that for consciousness to come about, we just have to run several trillion transistors fast enough. I don't think I need to point out how much more explanation is needed to make this work, particularly the step where speeding something up is supposed to yield something entirely new.
I think Eagle_Eye covered this pretty well. Adding more transistors doesn't fundamentally change a processor, but when you get enough of them it can certainly do more. There's a minimum number needed to create a binary adder, for example, and once you have enough, the processor can add, provided it's built correctly.
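To make the adder point concrete, here's a quick sketch of the standard textbook construction: a full adder built from nothing but XOR, AND, and OR (the kind of gates a handful of transistors implement), chained into a ripple-carry adder. No single gate can add, but enough of them wired correctly can.

```python
def full_adder(a, b, carry_in):
    """Add three bits using only XOR/AND/OR gates; return (sum_bit, carry_out)."""
    s1 = a ^ b                      # first half-adder sum
    c1 = a & b                      # first half-adder carry
    sum_bit = s1 ^ carry_in         # second half-adder sum
    carry_out = c1 | (s1 & carry_in)
    return sum_bit, carry_out

def ripple_add(x_bits, y_bits):
    """Chain full adders to add two equal-length bit lists (least significant bit first)."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

# 5 (101) + 3 (011), given LSB first, yields 8 (1000).
print(ripple_add([1, 0, 1], [1, 1, 0]))   # [0, 0, 0, 1]
```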
You could keep making your arrangement of transistors more complex, but I don't think this is a particularly good argument that computers can gain consciousness. Claiming that when we make something more complex it will accomplish something of another logical order is missing a lot of explanation in the middle about how that complexity makes X possible. Yes, as it gets more complex it will gain new parts and perform new functions. But how do you know that one of those future functions is the very one in doubt?
Suppose I made a notch in my door. As I put more notches in it, it becomes more complex; it gains new parts and new features the previous door didn't have. Now suppose I claimed that, because the door gains new parts and features as I keep adding notches, it will turn into the real-life Jimi Hendrix if I add enough of them. That is not a good argument. It is an argument that the door, or the computer, will likely develop new advancements in the future, but it doesn't get you to something that doors and computers have never exhibited any inclination toward in their entire history of development.