Like all things everywhere depend on the distances between things being precisely as they are.
Precisely, give-or-take the uncertainty principle? I'm happy (given such a 'software universe' conceit, for anyone who wants to go down that route) to consider the possibility that rounding errors are the basis of all kinds of quantum weirdness, at a level lower than we can reliably test because of our limitations.
My old standard in this thought-experiment is imagining a Conway's Game Of Life creature: a highly complex (and large) pattern upon the square grid that (by its own standard) is a thinking, sentient being. There's nothing barring this from happening (save that it'd be very large), so internal ideas equivalent to "cogito ergo sum" could arise within such a pattern. Although don't ask me to design it (it would be an exercise in practical theosophy, perhaps).
Anyway, whilst a suitably-constructed/developed/evolved GOL-creature would interact with its immediate environment by its 'edge effector' cells reacting to the touch of nearby patterns it comes in 'contact' with, there would be no way that it could detect the individual living/dying/flipping unit cells that
we would know about (and could observe). There's even good reason to believe that neither would they be able to think "hey, I'm on a strict grid... the
real world isn't on a grid".
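For concreteness, the cell-level 'physics' in question is tiny: the entire Game of Life is one birth/survival rule over a grid. A minimal sketch (coordinates and test pattern chosen arbitrarily for illustration):

```python
from itertools import product

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                n = (x + dx, y + dy)
                counts[n] = counts.get(n, 0) + 1
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A 'blinker' (three cells in a row) oscillates with period 2:
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

A 'creature' of the kind described would be an astronomically larger pattern under exactly this rule, and nothing in the rule gives any pattern access to the neighbour counts being computed here.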
(Even if they could, they'd misunderstand it. 'Gliders', perhaps the basis of force/information transference in Grid Physics, move 'quicker' diagonally than orthogonally (though not as quickly as information could technically be conveyed in a raw 'wave soup' kind of way), and so it would look like distance along the diagonals is 'squashed'. And, to them, it'd look entirely normal.)
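The glider's diagonal crawl can be checked directly: under the standard rule it reproduces itself every four generations, shifted one cell along each axis. Since the grid's own 'speed of light' is one cell per generation, that makes the glider a c/4, strictly diagonal traveller. A self-contained sketch (the update function is the usual rule; the glider's orientation is arbitrary):

```python
from itertools import product

def step(live):
    """One Game of Life generation over a set of (x, y) cells."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                n = (x + dx, y + dy)
                counts[n] = counts.get(n, 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The classic glider:
#   .X.
#   ..X
#   XXX
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

g = glider
for _ in range(4):          # four generations = one glider period
    g = step(g)

# Same shape, displaced one cell on each axis: diagonal motion only.
dx = min(x for x, _ in g) - min(x for x, _ in glider)
dy = min(y for _, y in g) - min(y for _, y in glider)
assert abs(dx) == 1 and abs(dy) == 1
```

The point is only that 'how fast things move' is an artefact of the rule, not of any continuous space underneath.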
And who's to say that our own POV isn't similarly limited by simulation concepts that don't apply to the meta-programmers that set up our own universe? (Who, by the way, probably would understand
our own existence as little as we could understand the existence of this large GOL pattern. Oh, we (they) could poke away at the GOL-grid ('real world simulation') and observe the interactions between cells (perhaps, for us, it'd be quanta), but... understanding the entire pattern that forms a living 'person', or even just the bit dedicated to their 'consciousness'? I think that'd be difficult to the extent of impracticality. It'd just look like a pretty picture, I think. As, indeed, GOL-patterns actually do, when they 'live' and develop.)
Also, how about fields? Like electromagnetic fields, which must be defined at every point of space-time; if they're quantized at any point, the maths starts to break down rapidly because of "numerical friction" messing things up?
That's a continuous function, across all space (and time) dimensions. It needn't be resolved/queried below the 'grid-level', but it can be integrated at any point to derive the field-force upon (quantised) point-masses, etc., whilst ensuring that no element in the point-particle array (a record of position, inertia, etc., these values quantised to grid) gets so close to another as to create a greater-than-Planck-Mass entity; alternatively, such effects could be dealt with by maintaining the quantum-foam 'fudge' to absorb such impossibilities.
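As a toy illustration of that separation (every name and number here is invented; this sketches the idea, not any real physics): the field lives as a closed-form function, defined everywhere, while particles only ever occupy grid-snapped positions, so the function is only ever evaluated at quantised points.

```python
GRID = 0.001  # illustrative grid spacing, in arbitrary units

def quantise(x, step=GRID):
    """Snap a coordinate onto the simulation grid."""
    return round(x / step) * step

def field(x):
    """A continuous inverse-square-style field, defined at every point;
    the simulator stores the formula, not a table of samples."""
    return 1.0 / (x * x + GRID * GRID)  # softened so no sample blows up

# A quantised point-particle samples the continuous field only at the
# grid position it actually occupies:
pos = quantise(0.0374)
force = field(pos)
```

The continuous function costs nothing to 'store' at infinite resolution; only the quantised sample points ever consume simulation work.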
Don't even start on how to model frikking relativity and all the effects which mean that you not only have to keep the memory of all the current things, but also of ALL things in the past! That's one lousy simulation, with constantly increasing memory usage.
Firstly, you're mistaking this for a program that's operating on Real World hardware. You can create a form of computer within DF/Minecraft/etc., but it'll never be of the power of the computer that the simulation is being run within. (Nor, strictly, the speed, although you could 'simulate' a high-speed processor by 'telling' the simulation that one simulation-tick is a femtosecond, even if it takes one second per tick in 'our' world... but there'd necessarily be nowhere near as much memory available to the simulated computer as to the computer that is simulating it.) Now extrapolate backwards.
Secondly, relativity might not exist in the simulator's universe. That might be one of the fudges their program requires. It might be difficult for us to understand, and we may misinterpret it as "no single 'true' frame of reference", but in the format in which the master-simulator computer maintains our universe's data it may well be quite simple. Perhaps the 'speed of light' is something to prevent having to perform infinite-upon-infinite interactions between quanta at every stage of the simulation. Perhaps time itself does not exist in the external world (or certainly not as we know it), but there are likely to be fewer limitations on calculation and computing power at that level, because necessarily there'd be greater limitations on calculations within our own... What we see as universal limits are just artefacts of the simulation parameters.
(And that'd apply to the further relationship of the
next level out, if that's even a thing...)
That aside, all I know is that I, personally, am probably a figment of someone else's imagination. Is it you? I'd love to know who it is who is so warped and deranged...
Stop with that solipsism; the universe is not nearly as inconsistent as any imagination that I know of.
Also, "warped" and "deranged"? Such words cannot possibly apply to our universe.
Universe? In this instance I'm not talking about nested universal simulations, I'm just talking about me. The universe might be real, but I may be a shared delusion concocted by someone else.
Or, to put another spin on it, what if I'm just a clever chatbot, designed to pass the Turing Test by perversely claiming that I'm not a real person, with 'faulty' logic and retro-solipsism (or 'aggripsism')? What if I'm so good at my impersonation that I've not been formally subjected to such a test? Or recognised as passing, at least.
But that'd be ridiculous, obviously. You'd have to conclude that I might have been trying to make a separate point altogether, instead.