Let me try to break it down as a step-by-step process:
1. In the physical universe, you have a Turing machine which allows you to run arbitrary programs.
2. You have a physics simulation program which accepts a file containing a start state as input and produces a later state as output. This physics simulator is 100% accurate with regard to the physics that govern the Turing machine from (1).
3. You have a device that scans both itself and the Turing machine from (1) and creates a data file, formatted to function as the start-state input for the physics simulator from (2), containing all of the required information about the status of the Turing machine and the scanning device.
4. You create a program which starts and stops the other programs as described in the later steps.
5. The program from step (4) loads the program from (2) onto the Turing machine from (1) and then tells the device from (3) to produce a start-state file describing the device from (3), the Turing machine from (1), and the programs from (2) and (4).
6. The program from step (4) feeds the start-state file from step (5) into the program from step (2), causing the program from (2) to simulate every step of this process beyond step (5), including this one.
At this point, the Turing machine from (1) is stuck in an infinite regress at step 6, consuming all the processing power available to it and never getting past the first level of step 6.
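For what it's worth, here is a toy sketch of that loop in Python. It collapses the whole physics simulation down to a plain state copy (the names `scan`, `simulate`, and the `world` dict are all made up for illustration), but it shows the structure of the regress: each call to the simulator must reproduce the scan-and-simulate sequence, including itself, so the first level never finishes.

```python
def scan(world):
    # Step (5): a perfect scan is just a complete copy of the world's
    # current state, including the controller and simulator themselves.
    return dict(world)

def simulate(start_state):
    # Step (6): a perfectly accurate simulator has to reproduce everything
    # the real machine does after step (5) -- which includes the scan and
    # this very call to simulate(). Hence the regress.
    inner_world = scan(start_state)
    return simulate(inner_world)

world = {"machine": "UTM", "programs": ["simulator", "scanner", "controller"]}
try:
    simulate(scan(world))
except RecursionError:
    print("never got past the first level of step 6")
```

Python's recursion limit plays the role of the machine's finite resources here; an idealized Turing machine wouldn't crash, it would just never return.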
Please read up on algorithms, Turing machines, and what it means to "run" a procedure before you mash these concepts together. Your steps 1-3, which scan "everything plus the UTM" (including the scanner itself), already smuggle in an infinity on their own. A Turing machine is not an infinite system in that sense: you don't need everything in the universe to simulate one, just finite resources at any given moment, like your own computer. The real problem lies in the unbounded memory (working space).
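To make the finite-resources point concrete, here is a toy Turing-machine simulator in Python (the machine, a unary incrementer, and all the names are invented for illustration). The tape is a dictionary that only allocates cells as the head touches them, so after t steps the simulation holds at most t cells plus the input in memory; the "infinite tape" only matters in the limit, never at any finite step.

```python
from collections import defaultdict

# Toy machine, made up for illustration: it scans right over 1s and
# appends one more 1 (unary increment).
#   (state, symbol) -> (symbol_to_write, head_move, next_state)
delta = {
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "_"): ("1", +1, "halt"),
}

def run(tape_input, max_steps=1000):
    tape = defaultdict(lambda: "_")      # blank everywhere by default
    tape.update(enumerate(tape_input))   # load the input at cells 0..n-1
    head, state, steps = 0, "scan", 0
    while state != "halt" and steps < max_steps:
        write, move, state = delta[(state, tape[head])]
        tape[head] = write
        head += move
        steps += 1
    # Only cells actually visited take up memory, never the whole tape.
    print(f"halted after {steps} steps using {len(tape)} tape cells")

run("111")   # -> halted after 4 steps using 4 tape cells
```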
Graphical representation:
A UTM simulates from blue to red, and that's it; then it repeats. The rest of the memory is unused (infinite or not) until the next level of simulated UTM is needed; otherwise it's just doing the blue stuff. Every further level of a complete simulation adds extra running space and time, but the clock rate doesn't change. The problem only occurs when this is layered infinitely many times. Otherwise it can always keep running, just with more processing and a longer wait before the next level shows up.
Imagine you are on a planet (blue + red) with a super-powerful computer (red) that can process any amount of data at enormous speed. One day you want to simulate a world in as much detail as possible, so you buy a planet-sized memory in which each atom stores as much information as an atom can contain. You run the simulation, and your computer is so powerful that it can process every bit of information stored in that planet-sized memory in no time.

Then, on the simulated planet, a simulated you tries the same thing and buys a planet-sized memory. In our universe you notice what he is doing, and to prevent the program from overflowing you buy a new planet-sized memory just to hold his planet-sized memory in his simulated world. The next level of simulation starts, but your processor has to simulate other things in his world besides his computer (this is where it gets slower), so his computer runs its simulations at a much slower rate by our standards. In his world, though, he doesn't notice, since from his point of view all the processes in his world happen at their usual relative pace. (Say 1/100 of the CPU time is used to simulate his computer; then his computer runs 100 times slower by our standard, but not by his.)

These layers of simulation keep going: you keep buying planet-sized memories, and the mini-yous in their worlds keep doing the same thing, except for the guy at the last level, who hasn't done it yet.
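The slowdown in that story compounds geometrically. A quick back-of-the-envelope sketch in Python (the 1/100 fraction is just the number from the story, not anything fundamental): if each host spends a fraction f of its CPU time simulating the level below it, then level n runs at f^n of top-level speed, while every level's inhabitants still see their own world running at normal pace.

```python
f = 1 / 100   # fraction of each host's CPU time given to the next level down

for level in range(5):
    print(f"level {level}: runs at {f ** level:.0e} of top-level speed "
          f"(but always at 1.0 by its own clock)")
```

Any finite depth still makes progress; only an infinite tower drives the product to zero, which matches the point above that the problem only appears when the layering is infinite.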