Consider a software program that needs to calculate an inverse square root as part of whatever it does, so it calls a library function. Now, obviously, at some point the abstractions run out and an actual machine has to execute instructions, and it's fair to say that those instructions are part of the causal chain leading to whatever outcome the program produces. But it also seems fair to say that those particular instructions were largely irrelevant, and that the program would be the same system even with a different implementation of the library: at the program's level of abstraction, the contents of the function are not part of the system, only the requirement that it behave appropriately. The implementations may even return slightly different results, but as long as those stay within the system's tolerance for error, it would still seem strange to say that the program is a different program, even if it has many different qualities.
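To make that concrete, here is a minimal sketch in Python (the function names, the tolerance, and the bit-twiddling approximation are mine, standing in for "a different implementation of the library"): two interchangeable inverse-square-root routines behind the same interface, both good enough for the calling program.

```python
import math
import struct

def inv_sqrt_library(x: float) -> float:
    """Reference implementation: defer to the standard library."""
    return 1.0 / math.sqrt(x)

def inv_sqrt_fast(x: float) -> float:
    """Approximate implementation (the classic bit-level trick), used here only
    as a hypothetical 'different library': same interface, slightly different
    results, still within a loose tolerance."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5f3759df - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)  # one Newton-Raphson refinement step

def normalize(vx, vy, vz, inv_sqrt=inv_sqrt_library):
    """The 'program': it only cares that inv_sqrt behaves appropriately,
    not how it is implemented underneath."""
    k = inv_sqrt(vx * vx + vy * vy + vz * vz)
    return (vx * k, vy * k, vz * k)

# Both implementations satisfy the program's tolerance, so at this level of
# abstraction the program is "the same system" either way.
print(normalize(3.0, 4.0, 0.0, inv_sqrt_library))
print(normalize(3.0, 4.0, 0.0, inv_sqrt_fast))
```

Swap one routine for the other and nothing at the program's level of abstraction changes; only the low-level instructions do.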
Yes, "a map is a territory is a map" for a programmer, but what is it for a neurologist? My point is that while the above may be absolutely true for the purposes of computer science and AI research, it is absolutely untrue and misleading for the purposes of neuroscience and brain research, as I already tried to explain in a follow-up post to the wall of text:
If your goal is to simulate physical brains, you have to start from the very bottom and work upwards. Because, as I argued above, the chain of causality in a brain begins with concrete low-level neurochemical activity and ends with abstract high-level "processes," such as complex patterns of behaviour, and illusory epiphenomena like subjective consciousness, whereas the chain of causality in programming starts at the highest level of abstraction and proceeds downwards. The inverted top-down approach of computer programming simply does not work in the context of "brain simulation."
You can start at whatever point of the causal chain you'd like, but it's important to understand the implications of choosing a given level of abstraction as your starting point, because it will naturally restrict the range of methods and objectives available to you. I'm talking about the distinction between simulation and emulation here: the former is an attempt to copy and reproduce "the actual thing" in another medium, in the hope that this will teach you something new about the real object as it exists in nature, whereas the latter simply means reproducing the known functionality of a thing for practical purposes, which will ultimately tell you nothing new about the thing itself.
Etymologically speaking, "emulation" implies rivalry and competition rather than mere imitation: it's about matching and surpassing the performance of a given thing, which, by the way, is precisely what computers have been doing to the mathematical performance of the brain for the last seventy years! Emulation starts with the premise that you're not even particularly interested in the nuts and bolts underneath the higher levels of abstraction; you just want to do whatever the damned thing does, as efficiently as possible, which is to say that the least accurate (but functionally equivalent) method of emulation is often the fastest, and therefore the most desirable (cf. console emulation on the PC).
On the other hand, the whole point of simulating something is that you already know enough about the inner workings of the thing to (metaphorically) put it together yourself, but you want to fiddle with the variables and see what it does in action under all kinds of different circumstances (cf. weather forecasting). Simulation is all about imitating the original to the highest possible degree of accuracy (even to the extent of sacrificing performance), which necessarily implies starting at the lowest possible level of structural detail, which, in the case of brain simulation, happens to be the (presently unattainable) molecular level. What I'm saying is that emulation and simulation are worlds apart in terms of their objectives and prerequisites, and the fundamental error of computational neuroscience à la "Blue Brain" is to happily confound these two approaches: start at a very high level of theoretical abstraction, and then mistake your haphazard congeries of wildly inaccurate neuron emulators for a proper simulation of a slice of brain. Way to succeed at Bad Science.

To reiterate: I can see the potential utility of AI research, and I can see the potential utility of brain research, but as I've said, I cannot comprehend why anyone would want to cross-pollinate the two into the bizarre hybrid that has taken over large swathes of both disciplines in the name of superlative transhumanity.
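To put the emulation/simulation contrast in code, here is a deliberately toy Python sketch: the constants are invented, the "emulator" is just a curve crudely fitted to a neuron's observed input/output behaviour, and the "simulator" is only a textbook leaky integrate-and-fire model, which is itself still far above the molecular level a real simulation would need. The point is the difference in approach, not the models themselves.

```python
import math

# Emulation: reproduce the known input/output behaviour (firing rate for a
# given input current) with whatever is cheapest, here a fitted sigmoid.
# It tells you nothing about the mechanism; it just matches the curve.
def neuron_emulator(input_current: float) -> float:
    """Firing rate in Hz, curve-fit to observed behaviour (made-up constants)."""
    return 100.0 / (1.0 + math.exp(-(input_current - 1.5) * 4.0))

# Simulation: step the underlying mechanism through time. This is only a
# textbook leaky integrate-and-fire model, itself a drastic abstraction.
def neuron_simulator(input_current: float, duration_s: float = 1.0) -> float:
    """Firing rate in Hz obtained by integrating the membrane voltage."""
    dt = 0.0001            # time step (s)
    tau = 0.02             # membrane time constant (s)
    v_rest, v_thresh = 0.0, 1.0
    v, spikes = v_rest, 0
    for _ in range(int(duration_s / dt)):
        dv = (-(v - v_rest) + input_current) / tau
        v += dv * dt
        if v >= v_thresh:  # threshold crossed: emit a spike and reset
            spikes += 1
            v = v_rest
    return spikes / duration_s

# The emulator is fast and opaque; the simulator is slow but lets you "fiddle
# with the variables" (tau, threshold, time step) and watch what happens.
for i in (0.5, 1.5, 2.5):
    print(i, round(neuron_emulator(i)), round(neuron_simulator(i)))
```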
What I still don't understand is why anyone would want to simulate a human brain, though. It just doesn't seem to be worth the effort when you could instead be studying brains without the intent of putting them in a box, or building AIs from scratch without declaring them "human" for no obvious reason. I'm not a scientist, or even a philosopher of science, so I'm in no position to make prescriptive demands, but I think it might be reasonable to have some kind of demarcation between neuroscience and computing, because conflating the two may create metaphorical chimeras that lead everyone astray.
These two disciplines could be working together on the problem of intelligence from the opposite ends of the abstraction spectrum, but at the moment both seem to be happily chasing their own metaphorical wild geese in metaphysical Software Land.