In other words, why is it implausible that an AI could run a sufficiently advanced simulation such that the simulation thinks it is the person?
Because it wouldn't know how. This should give you an idea of the problems we're currently banging our collective heads against. Direct to the source. It won't know anything we can't tell it, and the idea that it could somehow divine the answers to these problems out of sheer processing power brings us into omniscient AI Jesus territory.
We do have weird progress towards this end goal though:
http://browser.openworm.org/#nav=4.04,-0.03,2.8

In other words, why is it implausible that an AI could run a sufficiently advanced simulation such that the simulation thinks it is the person?
Assuming the chaotic model here, it's really quite simple.
The AI would have to simulate the entire universe in exact detail. Sure, you might argue that that could be possible with sufficiently powerful software design and hardware architecture.
However, such an AI would necessarily need to simulate itself.
It would need to simulate itself simulating itself.
And so on.
As to why it would need to simulate the entire universe: any chaotic model requires an exact simulation to get exact results. The system is deterministic, but any error grows exponentially over time, so the simulation must be exact or risk serious errors accumulating. Nothing can be neglected from the simulation, due to the nature of mathematical chaos.
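The exponential blow-up of tiny errors is easy to see in even the simplest chaotic system. As a sketch (my own illustration, not from the thread): two logistic-map trajectories started a mere 1e-12 apart diverge until they are completely uncorrelated.

```python
# Toy illustration (my own, not from the thread): the logistic map at r = 4
# is fully chaotic, so a 1e-12 difference in starting state blows up fast.
r = 4.0
x, y = 0.4, 0.4 + 1e-12  # identical runs except for a tiny perturbation

diffs = []
for step in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    diffs.append(abs(x - y))

# The gap roughly doubles each step until it saturates at order 1,
# at which point the two runs are effectively unrelated.
```

This is the point about "no one thing can be neglected": a perturbation at the twelfth decimal place swamps the simulation within a few dozen steps.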
Ah, you're looking at a different problem. Intelligence is messy anyway, but the important thing is that there is no simple way for me to prove to you that I am sitting next to real!Ispil while we watch the outputs of the machine running sim!Ispil, i.e. the you I am speaking with on this forum.
Similarly, while there are numerous things you can do which suggest to me that the hypothesis that you are just a chatroutine is falsified, I can't disprove that you actually think you exist without getting into solipsistic nonsense.
Now, taking the assumption that you have internally consistent mental states, and that you observe yourself to be embedded within a universe, what are the minimum requirements necessary to achieve that?
You can't go out and touch a star, so we only need to make stars behave plausibly when observed with certain equipment. You can't directly interact with anything more than a few feet away, so we only need to apply a certain level of detail within that volume, and thankfully we can fudge most of it because you lack microscale senses. We need to account for sound and light reflection, which is a bit more complex but far from impossible. Smell and taste could be tricky, but they usually run at a sub-aware level, so we only need to call those up when prompted. Naturally the framework for your meatpuppet needs to send back certain data to emulate biomechanical feedback, but that isn't too onerous, and thankfully you are very unlikely to start digging around inside your own chest cavity to see what is happening... though we should probably put in place some sort of placeholder we can drop the relevant models onto, just in case.
We could probably use backdrops and scenery from live footage to add another layer of verisimilitude, but most of the extra processing power would go toward making sure the (probably claustrophobic-sounding) box bounded by your limbs at full extension behaves as you expect it should. The actual self!sim itself will still eat up a decent chunk of resources as it trundles around, but we can use things like a limited attention span and fatigue to trim a good amount of the overhead, outside of extended bouts of deep existential pondering.
Now, I'm not saying you should open your abdominal cavity and see if there are any graphical errors as chunks of it are rendered, but can you think of a way to prove you aren't in a glass case of emotion simulation?
It doesn't need to be exact and complete to produce something which would think it was you or I. Yes, after initializing it there would be divergences as the decision factors for both take them down different routes through their respective phase spaces...
...but hey, just in case you were comfortable with the idea of sim!you existing in some hypothetical, don't forget that it would probably be more productive if the likely region of your decision phase space were mapped out intensively, so the question would then become: how many iterations of sim!you does it take to map out the most likely responses for real!you to any given stimuli?
I'm not saying that I would run endless sims of you and then shut them down after selecting the most useful data from the runs, it sounds horrific to me to do that to someone with a similar level of intelligence and attachment to their own existence, but I'm not a godlike AI without a reason to be attached to the particular mental state of specific individuals, am I?
And yes, this is all assuming that the chaotic decision function is a reasonable model. If it isn't, then the decision model is either stochastic, or both deterministic and well-behaved: any error in the input grows at most polynomially over time.
In other words, humans are either chaotic, random, or predictable over the entirety of the phase space (in this case, the Oxford comma is in use; the "entirety of the phase space" only applies to predictable). There of course exists plenty of given inputs for particular decision functions with particular priors that particular decisions are predictable; those are the periodic orbits of the decision function.
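Those periodic orbits are a real feature of chaotic-family systems, not just hand-waving. A sketch of my own, again using the logistic map as a stand-in for "a decision function": at r = 3.2 the map settles onto a stable period-2 cycle, a pocket of perfect predictability inside an otherwise complicated family of behaviors.

```python
# Sketch (my own): at r = 3.2 the logistic map converges to a period-2
# orbit -- a predictable cycle, analogous to the "periodic orbits of the
# decision function" mentioned above.
r = 3.2
x = 0.4
for _ in range(1000):      # burn in until the trajectory settles on the cycle
    x = r * x * (1 - x)

a = r * x * (1 - x)        # the other point on the 2-cycle
b = r * a * (1 - a)        # two steps later we are back where we started
assert abs(b - x) < 1e-9   # period 2: x -> a -> x -> a -> ...
```

Once on such an orbit, particular inputs yield particular, repeatable outputs, exactly the "predictable for particular priors" case.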
You omit that it could be chaotic and deterministic with perfect initial information. Figuring out what will happen when you start a chaotic system is totally possible if you know how you started it.
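To illustrate that point (a sketch of my own, with the logistic map standing in for "a chaotic system"): determinism means exactly the same starting state reproduces exactly the same trajectory, bit for bit. Chaos only punishes *imperfect* initial information.

```python
# Sketch (my own illustration): chaotic but deterministic, so perfect
# initial information gives perfect prediction.
def trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

run_a = trajectory(0.4, 100)
run_b = trajectory(0.4, 100)
assert run_a == run_b  # identical start, identical trajectory, bit for bit
```

The contrast with the exponential-divergence point above is the whole issue: re-running with the true initial state works perfectly; re-running with an approximation of it does not.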
These seem like another way of describing the different subsets of possible actions: random actions cover the broadest region if given enough time to evolve, chaotic actions occupy a likely portion of the phase space, and deterministic actions provide anchor points--we know you and I will eat, drink, breathe, sleep, and so forth, though we can choose to activate or delay these processes--which staple parts of the other two sets together. There are no random actions which result in someone living and breathing without any environmental protection in the upper atmosphere of Jupiter tomorrow, and there are no chaotic trajectories that wind up with you avoiding eating while also avoiding death in the near future.
I may not know the initial conditions well enough to make these predictions about a chaotic decision function, and you may not either, but can you confidently state that it is impossible to know them?