I think there are some steps missing in your deduction. We know that the brain has something to do with consciousness, but I don't think we can confidently say what it is about the brain that causes it. The argument "wherever there are brains there is consciousness, therefore one must be the cause of the other" doesn't hold, or at least it doesn't establish that the physical structure of the brain itself is the cause and origin of consciousness.
I look at it this way: if it's not the structure of your brain that causes consciousness, it must be magic, and thus we have no hope of repeating it. I don't believe in magic, though, so I say it's down to the structure of the brain (which is, in its own sense, nothing but a computer).
In fact, I think this point sums up the whole discussion pretty well. We don't know what causes consciousness, so I don't think we should say computers will definitely gain consciousness at some point in their development, though of course I take a stronger position than this in saying that they never will. We could simulate a machine as a seamless human being living in a human society, but whatever means is used to do so will not necessarily extend to every facet of human existence, in this particular instance consciousness. It is possible to make a human-like object from nothing but metal, string, and a power box. This does not mean, however, that even if the central ball of string that handles inputs and outputs works on the principles of binary, the ball of string in its head will contain consciousness. Likewise, no matter how big you make this ball of string, I (maybe not you) would be understandably hard-pressed to claim that consciousness would arise from this clump of knots.
Saying anything definitively about whether we can create consciousness is of course taking a leap of faith, since we only have one example to work from. The fundamental part of my entire argument is that consciousness has to arise from the flow of information in some form, whether this is high-level information of the form 2 + 2 = 4 or implied information such as the transfer of energy between molecules in a neuron's mitochondria. If this doesn't cause consciousness, or if recreating this information flow doesn't cause consciousness, then the only other explanation is magic, and I don't believe in magic.
Heck, I'm hard-pressed to believe that a brain can create consciousness when I really start thinking about it. Nevertheless, it does do this, and if it does so as a pure function of information transfer, then in theory it should be possible to create it with any computing medium.
In theoretical mathematics, there is a very important distinction made when you create a symbol for something: the symbol or name that represents the object you're describing is not the thing you're actually describing. All in all, I think you're begging the question. The issue I'm criticizing is computers being conscious, and to say this is entirely possible because we're just going to model it is to assume that it is already possible before demonstrating it. How can you confidently say you can simulate it when you have no idea how it arises? For someone to simulate consciousness in a model of a brain, that person must already know enough about how the brain produces consciousness to do so. In fact, it must be assumed possible in the hypothetical world this person lives in. QED, you're begging the question.
My entire argument hinges on the idea that while we don't understand well enough right now how a brain works to recreate one, it should be possible to do so once we do understand this. The entire point of saying that we can simulate a brain is to break down the seemingly impossible task of understanding every last nuance of how a brain works in its entirety at any given moment. Instead, all you need to do is model the system in which it works (that is, how the neurons connect and share information), fill that in with a real scenario (a brain map from a human), then let it run. Its behavior should theoretically be indistinguishable from the original human.
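To make the "model the system, fill it in, let it run" recipe concrete, here's a minimal sketch of the kind of simulation loop I mean: a toy leaky integrate-and-fire network in Python. The network size, the constants, and the random weights standing in for a real brain map are all placeholder assumptions, nothing remotely like a real connectome:

```python
import numpy as np

# Toy stand-in for a "brain map": N neurons with random synaptic weights.
# A real run would load measured connectivity here instead of noise.
rng = np.random.default_rng(0)
N = 1000
weights = rng.normal(0.0, 0.5, size=(N, N)) / np.sqrt(N)

# Leaky integrate-and-fire parameters: one membrane potential per neuron.
threshold, reset, leak = 1.0, 0.0, 0.95

def step(v, spikes, external):
    """Advance the whole network one time step."""
    v = leak * v + weights @ spikes + external   # integrate incoming signals
    fired = (v >= threshold).astype(float)       # fire above threshold
    v = np.where(fired > 0, reset, v)            # reset neurons that fired
    return v, fired

# "Fill it in with a real scenario, then let it run."
v, spikes = np.zeros(N), np.zeros(N)
for t in range(100):
    v, spikes = step(v, spikes, rng.normal(0.0, 0.2, size=N))
    if t % 25 == 0:
        print(f"t={t}: {int(spikes.sum())} neurons fired")
```

The point isn't that this toy is a brain; it's that the whole recipe is just connectivity plus an update rule plus an initial state.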
If this doesn't create consciousness, it only leaves two possibilities: that consciousness can only be created by neurons (or presumably other organic matter), or it does indeed have a "magical" component somehow.
If it turns out that consciousness only arises from neurons, then this would have to be due to their internal structure for one reason or another, which we could in turn simulate. We can continue going down the chain until we're simulating quarks, and if we don't produce consciousness at that point then consciousness must only arise from something magical.
I don't think you understand the implications of what consciousness endows to a being. Take a system that isn't conscious and is programmed to ask for rights, and compare it against one that is conscious and is programmed to ask for rights: the latter will really have meant it. It will have meant it in the same manner as you and I asking for rights, regardless of whether it is intellectually capable of anything else.
I understand this, but I maintain that consciousness on its own is not the end-all be-all of whether such a thing deserves rights. It's conceivable that you could build a conscious system that had no emotion, for example. You can't be mean to it, since it doesn't feel emotion. If it also does not learn anything, then it is not an irreplaceable individual. Destroying it means nothing aside from the loss of property.
Could you explain what it is in theory that supports your claim? I don't actually see anything in computing theory that would suggest consciousness is possible to produce out of a series of transistors (and neither have I read any compelling explanation accurately depicting one originating out of a clump of neurons, for that matter).
That's pretty much the basis of my argument: we see no reason why consciousness should arise out of neurons either, so with a bit of a stretch of the imagination it seems that it should be possible to do with other systems. Neurons and transistors can compute the same types of functions, so it seems logical that they can both do the same things if arranged properly.
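To illustrate "the same types of functions": a single threshold neuron (the classic McCulloch-Pitts abstraction) can compute the same basic gates a handful of transistors implement, and since NAND alone is universal for Boolean logic, anything built from transistors can in principle be rebuilt from such neurons. A quick sketch, with weights I picked by hand:

```python
def neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fire iff the weighted sum reaches threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# The same gates a handful of transistors implement:
AND  = lambda a, b: neuron([a, b], [1, 1], 2)
OR   = lambda a, b: neuron([a, b], [1, 1], 1)
NAND = lambda a, b: neuron([a, b], [-1, -1], -1)  # NAND alone is universal

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), NAND(a, b))
```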
I would disagree here, on the basis that you don't seem to be talking about consciousness any more when you talk about computational speed. Mathematical computation, to the best of my knowledge, is just dealing with inputs and outputs. You may theoretically slow down everything in a brain to see what a person will do in the next few seconds based on the position of the neurons and the chemicals in them, but I don't think you can actually see consciousness, i.e., the person experiencing doing these things.
On top of this, I don't actually see how speeding stuff up would generate any appreciable difference. To be frank, you're saying that for consciousness to come about, we just have to make several trillion transistors run fast enough. I don't think I need to point out just how much more of an explanation is needed to make this work, particularly the step where you speed something up and get something entirely new out of it.
No, that wasn't really the point I was trying to drive home. I was speaking all about why it seems ridiculous, and the computing speed is to me at least part of it. The relation of consciousness to computing speed isn't critical, but I do think it's important to consider. It may be a necessary (or near-necessary) condition, but it is clearly not sufficient.
In this case I believe the speed of thought, as it were, is going to be important to having a consciousness anything like ours. If it runs considerably slower, then the way it perceives the world and reacts to it is likely to be a bit alien to us. Slowing the inputs down to an equivalent speed of course removes this distinction entirely.
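To put the "slow the inputs down too" point another way: if you scale the simulation clock and the input schedule by the same factor, the computed trajectory comes out identical and only the wall-clock time changes. A trivial sketch of that claim (the update rule and timings here are made up purely for illustration):

```python
import time

def run(inputs, slowdown=1.0):
    """Process timestamped inputs; slowdown stretches wall-clock time only."""
    state, trace = 0.0, []
    for t, x in inputs:
        time.sleep(0.001 * slowdown)   # simulated "thinking" time per step
        state = 0.9 * state + x        # the computation itself is unchanged
        trace.append((t, state))
    return trace

inputs = [(t, t % 3) for t in range(10)]
assert run(inputs, slowdown=1.0) == run(inputs, slowdown=50.0)
print("identical trajectory at any speed")
```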
You could make your transistors more complex, but I don't really think this is a particularly good argument that computers can gain consciousness. Claiming that when we make it more complex it will accomplish something of another logical order seems to be missing a lot of explanation in the middle about how this complexity makes X possible. Why yes, as it gets more complex, it will get new parts and perform new functions. How do you know that one of these future functions is the very one that's being doubted?
Complexity is again a necessary but not sufficient condition. It's the same with any modern non-trivial program. Adding small pieces together can rapidly assemble something that is much more than the sum of its parts.
I guess I don't see how transistors and neurons are fundamentally different here. Alone they're completely useless, but when you put them together they can do amazing things. The way they do computation is different, but that's just a matter of translating the functions of one into the form of the other.
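The "more than the sum of its parts" point is easy to show at small scale: a NAND gate knows nothing about numbers, yet composing a handful of them yields a circuit that does arithmetic. Here's a sketch of a one-bit full adder built purely from NAND; nothing in it is specific to this discussion, it's just the textbook construction:

```python
def nand(a, b):
    """The single primitive everything below is composed from."""
    return 1 - (a & b)

def xor(a, b):
    """XOR built from four NANDs."""
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def full_adder(a, b, carry_in):
    """Adds three bits using nothing but NAND compositions."""
    partial = xor(a, b)
    total = xor(partial, carry_in)
    carry_out = nand(nand(a, b), nand(partial, carry_in))
    return total, carry_out

# No individual gate "knows" arithmetic, but the whole circuit does:
print(full_adder(1, 1, 1))  # -> (1, 1), i.e. 1 + 1 + 1 = binary 11
```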
Suppose I made a notch in my door. As I put more notches in it, it will become more complex; it will gain new parts and new features that the previous door didn't have. Now suppose I said that because the door gains new parts and features as I keep adding notches to it, it will have the ability to turn into the real-life Jimi Hendrix if I add enough notches. This is not a good argument. It is an argument that the door and the computer will likely develop new advancements in the future, but I don't think it leads to something that doors and computers have never exhibited any inclination toward in their history of development.
A door, no matter how many notches you add to it, can't become the real life Jimi Hendrix. It's not a sufficiently complicated system to do something like that. For that matter, you can't do that with metal either. Jimi Hendrix isn't made of metal. A robot Jimi Hendrix is not Jimi Hendrix.
That's also not the point. The point is that you can make an equivalent Jimi Hendrix robot (equivalence here being whatever you want, but I'm going with equivalence being his mind).
Nerve cells are never [just] on or off. In fact, more and more research indicates that nerve cells are not the only cells capable of transmitting the data within your mind.
You can convert digital values to analog signals, so in theory that shouldn't matter.
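For what it's worth, the digital-to-analog direction is mechanically simple: a DAC just maps an n-bit code onto a voltage range, and a filter smooths between samples. A minimal sketch of the idea, with the 8-bit width and 0-5 V range assumed purely for illustration:

```python
def dac(code, bits=8, v_ref=5.0):
    """Map an n-bit digital code onto a voltage in [0, v_ref)."""
    return v_ref * code / (2 ** bits)

# An 8-bit sample stream becomes a stepped analog waveform; in hardware
# a low-pass filter then smooths the steps into a continuous signal.
codes = [0, 64, 128, 192, 255, 192, 128, 64]
print([round(dac(c), 2) for c in codes])
```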