We are blinded to this difference by the fact that the same sentence, "I see a car coming toward me", can be used to record both the visual intentionality and the output of the computational model of vision. But this should not obscure from us the fact that the visual experience is a concrete event and is produced in the brain by specific electro-chemical biological processes. To confuse these events and processes with formal symbol manipulation is to confuse the reality with the model. The upshot of this part of the discussion is that in the sense of "information" used in cognitive science it is simply false to say that the brain is an information processing device.
This assertion seems questionable too. Processes in a silicon computer also propagate by physical electrical signals, i.e. actual physical processes. So Searle's argument that brains aren't computers because they are "physical" seems fairly baseless. How are electrons, protons and neutrons actually conscious? This almost seems like voodoo science from Searle.
What the fuck is a "biological process", and why is it special and separate from chemistry and physics? It's this sort of unscientific bullshit that Searle's argument devolves into. He prioritizes things being "biological" as if that makes them special, which is vitalism. This unsubstantiated exaltation of things for being "biological" is a major flaw: since his basic argument is that biological brains have a special status compared to ones made of silicon, saying that they are special because they are "biological" is the classic fallacy of circular reasoning. In fact, it's a long stretch to say that we couldn't work out an equivalent set of symbol manipulations that each neuron is in fact carrying out. Neurons aren't conscious: the "physicality" is a red herring. It's the patterns and relationships between neurons that matter. Searle's argument is basically that consciousness embedded in biological matter is "real" while consciousness in silicon is "not real", and that because we can't work out a "symbol table" for what a neuron is doing, no such table could exist. All of these are questionable assertions.
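To make the "symbol manipulations" point concrete: here is a toy sketch (my own illustration, not a claim about how real neurons work) of a leaky integrate-and-fire unit reduced to a purely symbolic update rule. Every step is ordinary arithmetic on numbers, yet it reproduces neuron-like behavior of integrating input and firing past a threshold.

```python
def lif_step(v, i_in, leak=0.9, threshold=1.0):
    """One discrete update of a toy leaky integrate-and-fire unit.

    v: membrane potential, i_in: input current (arbitrary units).
    Returns (new_v, spiked). Purely symbolic: decay, add, compare.
    """
    v = v * leak + i_in          # integrate input, with leak
    if v >= threshold:           # fire when the threshold is crossed
        return 0.0, True         # reset the potential after a spike
    return v, False


def run(inputs):
    """Feed a sequence of inputs through the unit, collecting spike times."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v, spiked = lif_step(v, i_in)
        if spiked:
            spikes.append(t)
    return spikes


# With constant input 0.5, the potential climbs and first crosses
# the threshold on the third tick:
print(run([0.5] * 5))  # → [2]
```

Nothing here is conscious, obviously. The point is only that neuron-level behavior can in principle be captured as a table of symbolic updates, which is exactly what Searle asserts cannot be done.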
His argument also fails because it commits the exact same logical fallacies he calls out in this very paper. He calls out "syntax and symbols" as purely human constructs layered on top of the real physics, yet here he is claiming special, unique properties and a concrete reality for "biological processes", a term which is completely meaningless if we're talking at the level Searle claims to be talking.
Another of Searle's core arguments is that simulations aren't "real", therefore they don't have the properties of "real" things. However, the question is whether, say, a simulated heart "beats" or not. If you have a simulation of an animal, with a simulated heart, then the simulated heart keeps the animal alive in the simulation. Without the heart, you simulate the animal dying. Whether the heart "really" kept the simulated animal alive is not a valid question, because what matters is how the elements of the simulation relate to one another, not how they relate to things outside the simulation. The properties of hearts, that they beat and keep animals alive, are higher-order properties that only make sense to talk about in relation to the simulation's internal relationships. Arguing that the heart didn't "really" keep the animal alive misses the point completely: it was real enough for the purposes of the simulation. Brain processes are no different from heart processes in this respect.
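A minimal sketch of the simulated-heart point (all names here, `Animal`, `step`, and so on, are my own illustrative inventions, not from any real library): inside the simulation, the heart is causally responsible for the animal staying alive, and that causal role holds entirely between elements of the simulation.

```python
class Animal:
    """Toy simulated animal: a heart circulates oxygen, keeping it alive."""

    def __init__(self, has_heart=True):
        self.has_heart = has_heart
        self.oxygen = 10
        self.alive = True

    def step(self):
        """Advance the simulation by one tick."""
        if not self.alive:
            return
        if self.has_heart:
            self.oxygen = 10      # the (simulated) heart replenishes oxygen
        else:
            self.oxygen -= 1      # no circulation: oxygen runs out
        if self.oxygen <= 0:
            self.alive = False    # within the simulation, the animal dies


def simulate(animal, ticks=20):
    for _ in range(ticks):
        animal.step()
    return animal.alive


print(simulate(Animal(has_heart=True)))   # → True
print(simulate(Animal(has_heart=False)))  # → False
```

Asking whether the heart "really" kept the animal alive is beside the point; relative to the simulation's own causal structure, it unambiguously did.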
Personally, my view is that a simulated brain with simulated processes doing advanced "brain stuff" identical to a conscious human would in all likelihood actually be conscious. This is an opinion, however. I strongly believe that consciousness evolved because it's necessary to the correct operation of the brain: conscious brains evolved because they are the most efficient use of resources for the task of controlling animal bodies. If you could build a correctly simulated brain in a computer that learns the same way, and performs the same functions a human does, yet wasn't "conscious", then that would imply consciousness was unnecessary in the first place, and would lead you to question why we humans evolved to be conscious when a perfectly functional "philosophical zombie" would be a more efficient use of the available resources. I don't believe "philosophical zombies" would actually work. That's my gut reasoning for why a correct simulation of the brain would also be conscious.
This is something we might actually have to grapple with, decades from now, because of advances in computing power. Would a brain-scale neural network that grows the way real brains grow, and that swears it is conscious, actually be conscious, or would it just be a "philosophical zombie"? This might be the equivalent of Catholics who, back in the 1960s, warned that children born from IVF would "lack souls" because they didn't have "original conception", which is where souls come from in Catholic dogma. However, people who "lack souls" walking around, falling in love and having their own children would of course be a huge problem for Catholic dogma to explain, so the idea was dropped pretty early on. The idea of simulated brains "lacking consciousness" because they're bereft of the "biological spark" seems to be an argument cut from the same cloth.