Fakeedit: 12 new replies? I'll read them shortly.
(Also, emulating each neuron with an artificial neural network? Yeah, I've come to the conclusion that the best emulator for a brain is a brain. Any actual layer of abstraction is going to be significantly inefficient.)
Neural networks are usually just matrices of various sorts that get multiplied with an input. They don't handle discontinuous functions very well, and I'd imagine the brain has a lot of nonlinear effects, so breaking the brain down into ~10^11 little neural networks, one per neuron, makes sense to me.
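(For concreteness, here's a toy sketch of what I mean by 'matrices multiplied with an input': a two-layer net with a nonlinearity between the matrix multiplies, of the sort you might train to mimic a single neuron's input/output behaviour. All sizes and weights here are illustrative, nothing more.)

```python
import numpy as np

# Toy multilayer perceptron: a neural network really is just matrix
# multiplications with a nonlinearity ("activation") squeezed between them.
rng = np.random.default_rng(0)

n_inputs, n_hidden = 8, 4                    # illustrative sizes, not a real neuron model
W1 = rng.normal(size=(n_hidden, n_inputs))   # first weight matrix
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(1, n_hidden))          # second weight matrix
b2 = np.zeros(1)

def forward(x):
    """One forward pass: matrix multiply, nonlinearity, matrix multiply."""
    h = np.tanh(W1 @ x + b1)    # the nonlinearity is what lets the net
    return W2 @ h + b2          # approximate curved (but continuous) maps

x = rng.normal(size=n_inputs)   # stand-in for a neuron's synaptic inputs
print(forward(x))               # stand-in for its output response
```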
I'm not sure I understand your methods, but while you can perhaps get 1x10^5 neurons per square millimetre (actually, it'll be in three dimensions, but for the sake of comparison I'm going with the planar footprint value I found for an unextraordinary neuron soma) and about 1x10^6 transistors for the same area (again, with the best planar footprint I found quoted, and, yes, you can use multiple layers here as well), I'm not sure how close to emulating each neuron you could get with each provided a trainable 'network' of 10 transistors (including the background infrastructure required to configure, or freeze in the eventual 'working configuration', that network).
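(Multiplying out those footprint figures, just so the 10-transistor budget is explicit:)

```python
# Back-of-envelope from the planar footprint figures above (illustrative):
neurons_per_mm2     = 1e5   # ~1x10^5 neuron somata per square millimetre
transistors_per_mm2 = 1e6   # ~1x10^6 transistors in the same footprint

budget = transistors_per_mm2 / neurons_per_mm2
print(f"Transistors available per neuron: {budget:.0f}")   # -> 10
```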
There's the need to keep cells alive with (at least the equivalent of) a blood supply carrying appropriately nutritious chemicals, and the ones that allow the ion channels to work; but then there's also the need in the electronic version to feed power to the gates (whether to each and every one via appropriate high-rail and low-rail connections, or to enough of them for the rest to power passively from the voltage differentials across the incoming logical connections providing their inputs). I think there are similar problems on both sides, except that a brain is already a tried-and-tested, 'quite efficient, for what it does' system, whilst the electronic side is still in its early days. Future developments in design will perhaps provide something worthwhile, but... not yet.
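(To put a rough number on 'quite efficient, for what it does': using the commonly quoted ~20 W for a whole brain and ~8.6x10^10 neurons, plus the 10-transistor budget from above. All round figures, for illustration only:)

```python
# Illustrative power budget, using widely quoted round numbers:
brain_power_w = 20.0        # whole-brain metabolic power, commonly cited
neuron_count  = 8.6e10      # commonly cited human neuron count

per_neuron_w = brain_power_w / neuron_count
per_transistor_w = per_neuron_w / 10   # with the 10-transistor budget above

print(f"Per neuron:     {per_neuron_w:.2e} W")      # ~2.3e-10 W (0.23 nW)
print(f"Per transistor: {per_transistor_w:.2e} W")  # ~2.3e-11 W (23 pW)
```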
Because without 'dumbing down' a brain (emulating broader structures, with some loss of 'resolution', and almost certainly losing some of the dynamic reconnectivity that keeps a brain capable of continually adapting and re-adapting, rather than stagnating), I think it's going to be a poor copy, at best.
Which is not to say that it needs to be a mass-for-mass copy, but make it bigger (to give it the ability to be a more equal neuron-for-neuron copy, plus the overheads required to allow the adaptability and resilience) and I'm not sure how we could compare the raw power consumption and maintenance for an electronic conglomerate theoretically of the exact same complexity as the human brain. (And, perhaps, the same speed. Not that speed matters if you're willing to "go slow" in your copy-brain's thinking processes, but it'd be nice to be roughly equivalent.)
There might well be advantages to maintaining such a 'brain' rather than maintaining an original person's full brain and body indefinitely, but the degree of transhumanism involved takes us into additional philosophical questions.
Perhaps the answer is, instead of 'reading' a brain in order to try to imprint it into an electronic analogue (albeit a digital one), to do the same reading process but then just store that as plain data and then '3D print' the explorer's body from the ground up (from a feedstock of the necessary base biogenic chemicals, which can be stored for the lifetime of the voyage), such that the brain-image is restored to the lifeless body-and-brain that you then resuscitate at the destination, where the human explorer/ambassador can do his or her job. (Not possible at the moment, but then neither full-on brain-emulation nor 'read-to-copy' technology is, so maybe by the time we can reliably read enough details from the original to describe a 'mind', we'll have worked out how to create one or other variety of vessel into which to write those details back again.)
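(Just to gesture at what 'store that as plain data' might amount to: a back-of-envelope using commonly quoted neuron and synapse counts and a guessed byte cost per synapse. Every figure here is an assumption; nobody knows the real encoding requirements.)

```python
# How much 'plain data' might a stored brain-image be? All figures are
# assumptions for illustration only.
neurons           = 8.6e10   # commonly cited human neuron count
synapses_per_cell = 1e4      # often-quoted order of magnitude
bytes_per_synapse = 8        # pure guess: connectivity plus a weight or two

total_bytes = neurons * synapses_per_cell * bytes_per_synapse
print(f"~{total_bytes / 1e15:.0f} PB")   # -> about 7 petabytes under these guesses
```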