Being able to simulate consciousness, however, would completely overturn what we think we know about concepts such as "self" and "identity".
For example: if you're running a simulation on Machine#1, which is conscious, and you stop the simulation at a point "X", then copy the data to Machine#2 and continue running it there, we'd say the consciousness "carried over" from the old hardware to the new hardware, right? We basically froze the state, transferred it to a new machine, and kept running it, so the conscious entity should seamlessly carry over.
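To make that intuition concrete, here's a minimal sketch in Python, assuming (and it's a big assumption) that the conscious simulation can be modeled as a deterministic state machine; the Simulation class, its update rule, and the inputs are all hypothetical stand-ins:

```python
# A toy deterministic simulation: same state + same inputs => same next state.
import copy

class Simulation:
    def __init__(self, state=0):
        self.state = state

    def step(self, inp):
        # Any pure function of (state, input) works; the point is determinism.
        self.state = (self.state * 31 + inp) % 1_000_003

inputs = list(range(100))

machine1 = Simulation()
for inp in inputs[:50]:                  # run Machine#1 up to point "X"
    machine1.step(inp)

snapshot = copy.deepcopy(machine1.state) # freeze the state at "X"

machine2 = Simulation(snapshot)          # copy the data to Machine#2
for inp in inputs[50:]:                  # ...and continue running it there
    machine2.step(inp)

# An uninterrupted run reaches exactly the same state, which is what makes
# "seamless carry-over" plausible at the data level.
reference = Simulation()
for inp in inputs:
    reference.step(inp)
assert machine2.state == reference.state
```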
However, what if you ran both machines with identical inputs from the start, up to the same point "X", then stopped Machine#1? From Machine#1's perspective, that's no different from the first scenario: Machine#1 was conscious, then it stopped, and Machine#2 contains the state required for the consciousness to keep running, so Machine#1's consciousness should just transfer over to the second machine, right? Even though there's already a consciousness "in" Machine#2. In other words, we get a merging of identical consciousnesses.
Normally, we'd think that turning one machine's hardware off would "wink out" its consciousness, but if there was another machine to "carry on the state", then we're positing that the consciousness from the first machine flips over to the second (i.e. we're saying transference is possible). But what if that second machine was already "full" with an existing consciousness, yet happened to hold the identical state at the right moment, exactly as if the data had been copied over? In this example we have two machines running two separate yet identical consciousnesses, and then we turn one off. Did we just murder one, or did it transfer/merge with the other?
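Continuing the toy Simulation from the first sketch: if the machinery is deterministic, two instances fed identical inputs are bit-for-bit indistinguishable at "X", so the question of which consciousness is which has no answer in the data itself:

```python
# Reuses Simulation and inputs from the first sketch. Two machines, identical
# inputs from the start: by determinism their states at "X" are identical.
machine1 = Simulation()
machine2 = Simulation()

for inp in inputs[:50]:                  # identical inputs, up to point "X"
    machine1.step(inp)
    machine2.step(inp)

assert machine1.state == machine2.state  # nothing in the data tells them apart

# "Turning Machine#1 off" just deletes one of two identical snapshots; whether
# that's a murder or a merge isn't decidable from the states themselves.
del machine1
```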
Then we can ask: what if we run the two machines out of sync? If we run Machine#1 one second ahead of identical-inputs Machine#2, then turn Machine#1 off, Machine#2's state will catch up with Machine#1's in one second, allowing Machine#1's consciousness to "continue" in Machine#2. Then we can ask: what if we instead stop the other machine, the one that's one second behind? Its state also carries on in the other machine, but one second in the past instead of one second in the future. So we can ask whether consciousness is time-invariant, i.e. whether the sequence of states actually has to be chronological at all, as long as the pattern exists, no matter how far spread it is in time, or what order the states appear in according to "our" time.
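Here's the out-of-sync version of the same toy sketch (again, the one-step lag stands in for the one-second offset): Machine#2 runs one step behind on the same input stream, and after Machine#1 is switched off, Machine#2 reproduces its final state one tick later:

```python
# Reuses Simulation and inputs from the first sketch. Machine#1 runs one step
# ahead of Machine#2 on the same input stream; switch Machine#1 off at step 50.
machine1 = Simulation()
machine2 = Simulation()

for t, inp in enumerate(inputs):
    machine1.step(inp)                   # Machine#1 is always one step ahead
    if t > 0:
        machine2.step(inputs[t - 1])     # Machine#2 lags by exactly one step
    if t == 50:
        final_state = machine1.state     # Machine#1 is switched off here
        break

machine2.step(inputs[50])                # one tick later, Machine#2 catches up
assert machine2.state == final_state     # the sequence "continues", shifted in time
```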
But then, what happens when you turn Machine#1 back on? We've already said Machine#1's consciousness "transferred over" to the new hardware, so is the revived one the same consciousness or not? The ability to put a consciousness into hardware, copy it to new machines, split and merge identical consciousnesses at will, and do all of that in time-invariant fashion is going to be a serious challenge not just to ethics, but to our entire concept of what "self" even means. We might not end up worrying about AI human rights, but dealing with true existential terror if we could actually do this, and it's going to be much worse than worrying about how much life as a brain in a jar would suck.
Some people ask why high-tech spacefaring societies never visit us. Perhaps soon after reaching our current level of development, a civilization realizes that the "self" doesn't even exist and is an illusion, and everyone goes insane.
Finally, imagine a situation in which you have multiple conscious AIs, all deterministic, and you rig their inputs so that they gradually approach identical states. Once they do, you can simply turn all the AIs off, copy the (single) state to a new machine, and logically that one is the continuation of all of the consciousnesses, disproving that individual consciousnesses actually exist. In fact, if you want efficient storage of all uploaded humans, this is the way to do it: compact them down to one personhood.
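As a sketch of that "compaction" step (again reusing the toy Simulation; the content-addressed store and hashing scheme are illustrative assumptions, not any real uploaded-mind format):

```python
# Once several deterministic instances have been steered into one identical
# state, a content-addressed store keeps exactly one copy.
import hashlib
import pickle

def state_key(state):
    return hashlib.sha256(pickle.dumps(state)).hexdigest()

# Suppose all three instances were driven to the same state.
instances = [Simulation(42), Simulation(42), Simulation(42)]

store = {}
for sim in instances:
    store[state_key(sim.state)] = sim.state  # identical states collapse to one entry

print(len(store))  # 1 -- three "consciousnesses", one stored personhood
```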