The point of my post (and LB's) was that there is technology to take information from the brain and use it as input to a device, as in the robotic limb, and also technology to take information from the world and use it as input to the brain, as in the cochlear implant.
You're using current technological limitations (language is a LOT more complicated than sound, for one thing) to argue a philosophical point. There's no insurmountable barrier to us actually creating a language-learning implant, or an automatic in-brain translator, at least as far as we (or I, at least) can tell. We've only just started applying technology and software to the brain, so we're not there yet, but the barriers are engineering ones, not philosophical ones. We've already solved the problem in principle: inputting information to the brain (sight and sound) and outputting information from the brain (robotic limb).
You can't argue 'technological limitations' if the technology is not even practically possible. The brain isn't something that can be rewritten with a software program; you can't build arguments about reality on something that isn't real. Yet?
Anyways, the idea of the brain as a computer is a rough analogy. Our understanding of the brain is limited, but we know it is deterministic, like a computer, and I think consciousness, the thing that makes you aware, is 'output' from the brain, not a source of input. The illusion is that we think consciousness is determining our actions, when it is likely the other way around.
Thus, it makes me think 'digital uploading' of one's mind is impossible. Since your mind is a sort of unique reflection or shadow of the functioning of a particular brain, it is not reproducible; it's not just information that can be replicated endlessly the way digital information can.
Okay, I guess this is a philosophical discussion after all. I think the way to answer whether a consciousness could be moved from brain to machine is to upload a mind, then replicate it and see if the consciousness has any connection between the copies.
Well, then you could argue you created a new consciousness each time you copied the mind. Like that time you made a baby from inanimate, unconscious material you and your girlfriend exchanged.
This suggests free-will and consciousness are just illusions, or by-products of how the brain functions or how it evolved to function, for humans at least.
This is the part I don't understand. How do you get from A to B? Surely it just means that those words don't mean what a naive philosopher might say they mean? That some time delay needs to exist between some decision-making parts of your mind and your awareness, and that the system that can be described as being free-willed encompasses your consciousness as well as those other decision-making doohickii?
It just sounds to me like you've become enslaved to the words you're using, when the language ought to be serving you. You're getting hung up on these weird, magical definitions you have of the words, and, when faced with a reality they don't describe, just throwing away the entire concept instead of trying to figure out why you intuitively have those ideas in the first place.
I hope I'm making sense. Communication is kinda easy to do wrong, especially when you're talking about impossible abstract concepts and you're kinda drunk.
Yeah, the implications of your brain completely arriving at a decision before your consciousness is aware of it surely say something about the legitimacy of free-will. I think if free-will is this illusion, where you 'think' you made a decision but the decision was actually made several seconds earlier, then consciousness must be an illusion as well, since your own awareness is directly tied to the sense that you are making decisions, that you are controlling your own actions. I think I already said that; I don't know how to better articulate it. Language, yeah, has its limitations.