snip
But your idea about AI-murder contradicts your idea about transference. I already outlined some of the problems with that in my own post.
Back to the "frozen you" idea. Say you were frozen, then scanned, then thawed. You are still meat-you. Then the clone is turned on. There's no logic by which you can say the essence of "you-ness" would suddenly jump to the computer, right? Meat-you is the direct continuation of original you, and you do not experience being the computer. There's someone in the computer who thinks they are you, but they are definitely separate from original you.
So, what if we modify the order a bit? What if we push "play" on the computer before we thaw you out? Basically, nothing would be different: there's no logic that would make the situation objectively different from thawing you out before we turn the computer on. So saying the "you" in the computer is a direct continuation of your essential "you-ness" just because we hit "play" a little early is clearly bullshit. Consciousness is not a material "stuff" that has to transfer to another medium, and any such "transfer" would not be constrained by normal causation (again, because consciousness is not a material thing).
Maybe something jumps, but I doubt that the "you" you subjectively are jumps, as in an individual self. Because if it did, who would the copy left behind be?
So back to the AI-murder idea you had. My thought experiment already addressed that.
Say you run two copies of the AI, but with identical inputs, so that they run in lockstep. You say turning one off would be AI-murder, i.e. that one of them "died"? Why wouldn't the consciousness from one jump to the other? After all, the end state of the one that was turned off is the same as the current state of the one left running. If we go by the transference idea, it's clear that no AI would subjectively die as long as there was an identical simulation running elsewhere which carried on that exact mental state. In fact, it shouldn't matter if the 2nd simulation was started after the first one. E.g. if you start the simulation, then later start a copy of it while the first is still running, you have two running at once (they're identical because we assumed identical stimulus, but they're out of phase). Then turning the second one off should sync it to the matching state of the other AI, even though the other AI passed through that state at an earlier point in time.
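To make the lockstep intuition concrete, here's a minimal sketch (the hash-based "mind", the initial scan, and the stimuli are all made up for illustration). The only point is that a deterministic program fed identical inputs produces an identical state trajectory, whether the copies run in parallel or out of phase:

    import hashlib

    def step(state: bytes, stimulus: bytes) -> bytes:
        # Stand-in for one "tick" of a deterministic mind: the next
        # state depends only on the current state plus the input.
        return hashlib.sha256(state + stimulus).digest()

    stimuli = [b"light", b"sound", b"pain", b"coffee"]
    scan = b"initial-brain-scan"

    # Copies A and B start together and get identical inputs: lockstep.
    a, b = scan, scan
    history_a = []
    for s in stimuli:
        a, b = step(a, s), step(b, s)
        assert a == b            # identical inputs -> identical states
        history_a.append(a)

    # Copy C starts later and replays the same inputs: out of phase.
    c = scan
    for s in stimuli[:2]:
        c = step(c, s)

    # C's current state matches a state A already passed through.
    assert c == history_a[1]

If you turn C off at that last point, its final state is one that A already lived through, so the transference idea would have C's consciousness "sync" backwards in time. That's the absurdity I'm pointing at.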
So this suggests that two "separate" consciousnesses which have the exact same state are actually just representations of the same consciousness, and they don't "fork" until the state itself diverges. I imagine there's a bunch of quantum stuff going on with consciousness, and that actually trying to copy the you-ness into a computer would break some of the rules of quantum physics.
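For what it's worth, quantum mechanics does have a rule like that: the no-cloning theorem. It says nothing about consciousness per se (that link is pure speculation on my part), but it does forbid copying an arbitrary unknown quantum state. The standard statement: there is no unitary $U$ and blank state $|e\rangle$ such that, for every state $|\psi\rangle$,

$$U\bigl(|\psi\rangle \otimes |e\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle.$$

Sketch of why: if $U$ cloned two states $|\psi\rangle$ and $|\phi\rangle$, unitarity (inner products are preserved) would force $\langle\psi|\phi\rangle = \langle\psi|\phi\rangle^2$, so $\langle\psi|\phi\rangle$ must be $0$ or $1$. Only identical or mutually orthogonal states can be cloned, never arbitrary ones.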