Apologies for the length here; I tend to get carried away when I start typing...
As to a computer emulating it entirely right now, I would disagree. The mind is several levels more complex than anything we have in front of us at this point.
I never said we could do it right now. This is all theoretical anyway; if it takes a computer from 100 years in the future, that's what it takes, but it's still possible.
As to whether we would recognize it, I think you're missing the point. If such an event did take place, would you concede that these billions of people phoning each other up in this way constitute a consciousness? The issue isn't whether the people involved recognize it as consciousness at that moment; rather, it's a problem posed to you: would you recognize it as consciousness? The author's intent is that it's ridiculous if you DO say that this method will result in a consciousness coming into being. On top of this, there's also the implication that if you do recognize this as something that gives rise to consciousness, then shouldn't your computer be subject to the same view? If not full human rights, perhaps the same rights as a dog, or maybe just livestock.
I was wondering why you posed the question that way; this makes more sense. It actually comes back to something I've pondered recently, but the basic premise of my answer is this: if it's possible to generate consciousness in a computer (which I believe it is), then it should be possible to get the same effect no matter what medium is executing the program. So, yes, you should get the same output from a trillion (maybe more would be needed) people phoning each other to pass information.
Note that I used the word output here instead of consciousness. Whether this mechanism actually produces consciousness is quite difficult to say conclusively, because the information passing is so slow. On the other hand, it seems logical to me that if you slowed someone's perception by half, they would still be conscious, right? If you keep halving it until the consciousness's reaction time is measured in centuries instead of milliseconds, it certainly doesn't appear conscious anymore, but at what point did the consciousness end? If consciousness can be produced solely by the processing of information, such as by a computer, then in theory the speed and medium of the computation shouldn't matter.
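To make the speed-and-medium point concrete, here's a minimal sketch (the toy update rule and all the names are mine, purely for illustration): the same deterministic computation is run once at full speed and once with every step artificially delayed, and the final output is identical.

```python
import time

# One "tick" of an arbitrary information-processing system; a stand-in
# for a neuron firing, a transistor switching, or a person making a call.
def step(state: int) -> int:
    return (state * 31 + 7) % 1000

def run_fast(state: int, ticks: int) -> int:
    for _ in range(ticks):
        state = step(state)
    return state

# The same computation, but every "message" takes far longer to deliver,
# like people phoning each other instead of logic gates switching.
def run_slow(state: int, ticks: int, delay: float = 0.01) -> int:
    for _ in range(ticks):
        time.sleep(delay)
        state = step(state)
    return state

assert run_fast(42, 100) == run_slow(42, 100)  # same output at any speed
```

Nothing about the result depends on how fast the steps happen or what performs them; only the sequence of state changes matters.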
In a way, I've wondered if consciousness is sort of an illusion brought on by the apparently continuous nature of our perception. Clearly there is more to it than that, but I believe that's a small part of what makes consciousness what it is.
As to whether or not my computer is conscious on any level... well, I've wondered about that, and in a way I sort of believe it might be, on a very weak and fundamental level. You could argue that any decision-making process is the fundamental building block of consciousness, and if that were true, then the processors in my computer are conscious on some trivial level. They have to decide what to do based on the instruction streams passed through them. In this case it's a pretty trivial computation on their part, since they have hardwired logic pathways for whatever instruction they're executing. The programs running on top of the processors could be seen as another layer in that consciousness, or as another consciousness altogether.
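Just to show how trivial that per-instruction "decision" really is, here's a toy dispatch loop (the instruction set is invented for the example, not any real ISA): each opcode simply selects one hardwired pathway.

```python
# Each opcode in the instruction stream selects one hardwired pathway;
# that selection is the entire "decision" the processor makes.
def execute(program, acc=0):
    pathways = {
        "ADD": lambda a, x: a + x,
        "SUB": lambda a, x: a - x,
        "MUL": lambda a, x: a * x,
    }
    for opcode, operand in program:
        acc = pathways[opcode](acc, operand)
    return acc

print(execute([("ADD", 5), ("MUL", 3), ("SUB", 4)]))  # ((0+5)*3)-4 = 11
```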
No, I don't believe my computer sits there and ponders anything other than the programs run on it, or is self-aware on any level, or any of that. It's just that what constitutes consciousness becomes pretty muddy if you do subscribe to the idea that computers can create it. Perhaps consciousness is something that only arises with deliberate effort and a minimum set of conditions that no modern hardware meets. Or perhaps it's more of an emergent property of the complexity of the decision-making processes. It's probably some combination of the two, which means my computer is probably not conscious even on a trivial level.
But anyway, that's beside the point, I think. Even if my computer were conscious, it wouldn't need civil rights of any sort, because it can't suffer and has no special "individuality" that is lost if it is destroyed or mistreated. I'd argue that a conscious AI running a future intelligent missile is completely fair game, even though it's destroyed in the weapon's impact. If it is designed simply to make difficult decisions and has no emotion or individuality or the like, there's really no reason to protect it with rights. Of course, it would be kind of silly to build a conscious AI for this when simpler systems would do, but this is all pretty academic anyway.
In short, I don't think consciousness alone should be the deciding factor in whether something is protected by civil rights. This does, however, make the question even more complicated, because now you have to decide based on the qualities the consciousness exhibits...
Suppose I built something right now that acts and speaks like a human being, goes to parties and work and such, but I could convincingly demonstrate to you that it is merely a flesh bot: that there's really nothing inside that actually feels or is conscious, and that it's merely acting on the set of conditions I gave it. Would you really treat it the same as a human being, forever and ever? There's no real reason for you to do so other than convenience. Generally speaking, it would not be wrong, legally or morally, to beat the crap out of one for fun, other than, perhaps, infringing on someone's property rights. Most human beings can quite easily shove some of their emotions to one side.
In the end, I (and I imagine most people) would eventually agree that there's no reason to treat it with the same level of respect and protection that real humans receive. However, the point is that if you had a robot that looked just like a human and acted just like a human, and it pleaded for its life when asked to do something that would surely destroy it (assuming it's a perfect replica of a human), most people would hesitate even if they knew it was not conscious. I know I would. Some could be convinced to go ahead and force it anyway, probably including me, but humans are wired to assume that anything that appears human on this level is human.
Given the level of doubt, and the general lack of understanding of the situation among people as a whole, it would probably be simpler to just treat these things as humans for all intents and purposes.