> If there is a simulation tree/stack, then eventually each simulation will be identical to the one that preceded it -- that's the only stable state.
How? By what mechanism must this be true? If we run a simulation in our own universe on hardware, there is no requirement that the simulation eventually turn into a copy of our universe.
What if we model different rules of physics than our real universe's, for a start? Any race running a simulation will make alterations to the simulation relative to reality, because if they didn't intend to make any alterations, it would be easier to just do a real-world experiment instead of building a simulation.
EDIT: also, consider something like a very large DF dwarfputer, capable of running a universe-simulation because you've allocated so much memory to it; then you simulate a complex universe inside that dwarfputer. This shows that containing universes don't need to be more complex than the ones they contain: the complexity of the modeling can go up and down as layers rise and fall. The dwarfputer just runs very slowly. (A toy sketch of this is below.)
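Here's a minimal sketch of that point in Python (a hypothetical toy, not actual DF mechanics; Rule 110 is just a convenient stand-in for "physics", and the 1000x overhead figure is made up): the inner universe below has exactly the same size and rules as the outer one, so it isn't any simpler -- it just runs slower.

```python
# Hypothetical toy sketch: the inner universe uses the same rule and the same
# cell count as the outer one, so it is not "simpler" -- nesting only costs speed.

import random

class Universe:
    """A toy universe: a ring of cells updated by a simple local rule."""
    def __init__(self, size, seed):
        rng = random.Random(seed)
        self.cells = [rng.randint(0, 1) for _ in range(size)]
        self.ticks = 0

    def step(self):
        # Rule 110 update: a cell's next value depends on its neighbourhood.
        c = self.cells
        self.cells = [
            int((c[i - 1], c[i], c[(i + 1) % len(c)]) not in
                ((1, 1, 1), (1, 0, 0), (0, 0, 0)))
            for i in range(len(c))
        ]
        self.ticks += 1

OVERHEAD = 1000  # outer ticks needed to compute one inner tick (assumed figure)

outer = Universe(size=256, seed=1)
inner = Universe(size=256, seed=2)  # same size, same rule as the outer layer

for t in range(10_000):
    outer.step()
    if t % OVERHEAD == 0:  # the inner universe advances 1000x more slowly
        inner.step()

print(outer.ticks, inner.ticks)  # 10000 vs 10: slower, not simpler
```

You could nest another `Universe` inside `inner` the same way; each extra layer multiplies the slowdown, not the required complexity.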
This suggests other things you can do: you can simulate a bigger universe by trading off speed for memory. You compress the data so that more actually fits in (see the first sketch below). In fact, that's one end-of-the-universe scenario I've heard. If things get to the point where energy density is too low to keep people alive, but you can still run computing devices -- very slowly -- on residual energy such as virtual particles / vacuum energy, then you could create a whole new universe as a simulation. The outer-universe computing would be horribly slow, but the subjective internal rate of time would appear normal. Even if the outer computing devices keep slowing down forever, that is still sufficient to model infinitely more useful living time in the inner simulation (see the second sketch below).
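To make the compression trade concrete, here's a minimal sketch (the `step` rule is a placeholder, not real physics, and zlib is just an assumed off-the-shelf compressor): the state only ever sits in memory in compressed form, so a universe much larger than your RAM budget fits at rest, and every tick pays extra CPU to decompress and recompress.

```python
# Speed-for-memory sketch: keep the universe compressed between ticks.
# Assumptions: zlib as the compressor, and a trivial placeholder update rule.

import zlib

def step(state: bytes) -> bytes:
    # Placeholder physics: any deterministic update over the raw state.
    return bytes((b + 1) % 256 for b in state)

state = bytes(1_000_000)       # 1 MB uncompressed "universe" (all zeros)
stored = zlib.compress(state)  # highly regular state: ~1 KB at rest

for _ in range(3):
    raw = zlib.decompress(stored)  # pay CPU to rehydrate the full state...
    raw = step(raw)                # ...advance one tick...
    stored = zlib.compress(raw)    # ...and pay CPU again to shrink it back

print(len(stored), "bytes at rest for a 1,000,000-byte universe")
```

The catch is the one already mentioned: the more regular (compressible) the universe, the bigger it can be on a given memory budget, but every tick gets slower.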
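And a quick numerical check of that last claim, under an assumed toy model where the outer hardware slows so badly that each inner tick costs twice as much outer time as the one before: every inner tick still finishes at some finite outer time, so unbounded outer time buys unbounded subjective time inside.

```python
# Ever-slowing outer clock, constant subjective rate inside (toy model:
# the cost per inner tick doubles each tick, i.e. the outer computer
# slows down without bound).

outer_time = 0.0
cost = 1.0  # outer seconds spent computing the first inner tick

for inner_tick in range(1, 11):
    outer_time += cost  # wait out the ever-slower outer computation
    print(f"inner tick {inner_tick:2d} done at outer t = {outer_time:8.1f}")
    cost *= 2           # the outer hardware keeps degrading

# Inhabitants experience ticks 1, 2, 3, ... at a steady subjective rate;
# only an outside observer sees the exponentially growing gaps.
```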