Oookay... you assert that the mind is analogous to computer software because it is, in your own words, "a system of ongoing processes taking place in the medium of brains," and more specifically "information ... stored in the arrangement and function of its component matter." There are a number of hairy philosophical problems embedded in that definition, but I think the most relevant ones in the context of computing are the concepts of "process" and "function." The gist of your argument seems to be that logical structures like processes are substrate-independent and exactly reproducible in another medium, and they therefore deserve an ontological status of their own – but surely you agree that all processes have to be stored on some physical substrate at all times, be it 1s and 0s or marks on paper? (It would be rather silly to claim that a process rests in the Platonic heaven of ideas whenever it isn’t running.)
You asked whether I think the process of my web browser is physically located somewhere on my computer, and I have to say yes, in the same way that the "concept" (whatever that means) of a web browser is "physically located" somewhere in my brain. When the process called Firefox isn’t running, it is logically a bunch of metaphorical "files" sitting in a metaphorical "folder" called /usr/lib/firefox, and those files are linked to certain inodes pointing at certain physical locations on the hard disk. Computing makes a natural distinction between logical and physical structures because binary data has to be symbolically represented in order to be human-readable, but that does not change the fact that the magnetic grains on the hard disk platter are real physical things, just like synapses and neurotransmitters. Replicating the logical structure of the filesystem sans data will not reproduce a working web browser, just as mapping the connectome will not reproduce a functional mind.
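To make that logical/physical split concrete, here is a minimal sketch in Python (the path is only an illustrative stand-in for the Firefox binary; point it at any file that actually exists on your system) showing how a logical filename resolves to an inode and a count of physically allocated blocks:

    import os

    # A logical name: a path in the filesystem's symbolic namespace.
    # (Illustrative stand-in; substitute any real file on your machine.)
    path = "/usr/lib/firefox/firefox"

    info = os.stat(path)

    # The inode: the filesystem's handle tying the logical name to
    # bookkeeping about where the bytes physically live.
    print("inode number:", info.st_ino)

    # On Unix, st_blocks counts the 512-byte blocks actually allocated
    # on the device, and st_dev identifies the device itself.
    print("blocks allocated:", info.st_blocks)
    print("device id:", info.st_dev)

None of those numbers are the browser; they are the map from the symbolic namespace down to the physical medium, and it is precisely that lower layer which copying the logical structure alone would leave behind.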
The situation is even more complicated when we are talking about running processes: one could argue that reconstructing the mind qua process would be like dumping the RAM contents of a computer running Firefox and trying to reverse-engineer the program’s source code. That effort would run into severe underdetermination problems even in the computing medium, and I suspect the neurological medium would be much, much worse.
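A toy way to see the underdetermination, sketched in Python rather than as any claim about real memory forensics: two different pieces of source code can compile to indistinguishable artifacts, so the artifact alone cannot tell you which source produced it.

    import dis

    def version_a():
        # Written as an explicit sum of literals.
        return 1 + 2 + 3

    def version_b():
        # Written as a single literal.
        return 6

    # On recent CPython, constant folding makes the compiled bytecode of the
    # two functions identical, so the compiled form underdetermines the source.
    print(version_a.__code__.co_code == version_b.__code__.co_code)
    dis.dis(version_a)
    dis.dis(version_b)

And that is the easy direction, inside a medium designed to be legible; a RAM dump of a whole running system piles heaps, stacks, caches, and scheduler state on top of it.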
You see, the problem is that whenever you talk about this or that real-world process as a "real" object, you are in fact talking about a symbolic abstraction, via language, of a complex network of causal relationships within a physical medium. There are (at least) two reasons for this: first, you do not have all the relevant data about all the parts of that process, and second, you have no direct empirical access to the causal relationships involved. Causality cannot be observed in action; it has to be reconstructed after the fact by weaving discrete observational statements into a web of cause and effect, and there is always an enormous amount of leeway between the observations and your causal theory.
I already pointed out how difficult it would be to reconstruct the causality of a running program, but the crucial difference between brains and computers is that in the latter case we don’t have to reconstruct anything. Computers are purposefully designed to accept simple instructions that make all causal connections completely explicit, and the problem of underdetermination is non-existent when you are reading scrupulously commented source code, something that neuroscientists will never (?) have access to. That is to say, computer software is a poor analogy for the "processes" of the mind: the so-called brain software has to be painstakingly abstracted by neuroscientists, with a nigh-infinite number of equally valid theories to choose from, whereas computer software is already a purposeful abstraction by design, and it necessarily comes with a documented and perfectly valid way to construe the processes running on it.
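Just to show what "completely explicit" means on the computing side, here is a trivial, heavily commented routine (an invented Python example, not drawn from any real program) in which every causal step is stated by the author instead of being reconstructed by an observer:

    def debit_account(balance, amount):
        # Cause: the caller asks to withdraw `amount` from `balance`.
        # Effect: the request is rejected outright if it would overdraw the account.
        if amount > balance:
            raise ValueError("insufficient funds")
        # Cause: the request is within the available balance.
        # Effect: the new balance is computed and returned; nothing else is touched.
        return balance - amount

Nothing in a brain arrives annotated like that; the annotations are exactly the part the neuroscientist has to supply, and that is where the equally valid rival theories creep in.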
Another problem is that even the causality between software and hardware seems to be reversed in the computing analogy, compared with the brain: computer software is unquestionably a series of explicit instructions that cause the hardware to perform the requested operations, whereas the mind qua process could just as well be construed as an epiphenomenon of the neurological hardware of the brain. It is often said that consciousness is epiphenomenal, and I very much agree; I see no reason why these abstract mental processes could not likewise be regarded as epiphenomenal by-products of physical processes. They are abstractions, after all, and they are never observed in nature except in conjunction with physical brain activity.
Computer software is naturally substrate-independent because it is already an unambiguous symbolic representation, and symbolic systems like languages and mathematics are ontologically weird: they exist as social conventions that are seemingly not tied to physical substrates. It would not be too far-fetched to assume that words and programming languages are stored physically in individual brains, but the distributed nature of this massive social system called language gives one a very strong impression of disembodied symbolic "things" floating around in the aether, which is, to be honest, the first explanation that comes to mind when someone talks of substrate-independent logical objects.
I find this topic deeply fascinating and there is much more to say if one really gets into it, but for now, I ask anyone who accepts the software metaphor at face value to ask themselves: "Am I being misled by language here?"