For the thought experiment, let's consider a simple 2D tile-based world with multiple Z levels, like, say, Tibia or Dwarf Fortress (DF).
The game client keeps its local slice of the world updated with data from other clients it has determined are simulating nearby sections of the world.
We will say, for the sake of argument, that each simulated slice is 1000x1000 tiles in size. That is the world slice size for this thought experiment.
Each client has the local simulated slice, and 8 "tangent" slices. In addition to synchronizing the local slice with other clients that are simulating the same slice, it syncs data with clients that are simulating adjacent slices. To keep this simple, let's pretend we are playing a really tiny world, with only 9 slices.
E.g., client #1, which is hosting slice #5, has an availability map like this:
(1)(2)(3)
(4)[5](6)
(7)(8)(9)
It has inactive but synchronized data for the other 8 slices, and is only really concerned with slice #5.
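To make that bookkeeping concrete, here is a minimal sketch of the per-client slice data described above. The names (WorldSlice, ClientState) and fields are illustrative assumptions for this thought experiment, not a real implementation.

```python
from dataclasses import dataclass, field

SLICE_SIZE = 1000   # each world slice is 1000x1000 tiles
WORLD_DIM = 3       # the toy world is a 3x3 grid of 9 slices

@dataclass
class WorldSlice:
    slice_id: int
    tiles: bytes = b""      # serialized tile data for the slice
    data_hash: str = ""     # hash used for cheap consistency comparisons
    last_sync: float = 0.0  # timestamp of the last successful sync

@dataclass
class ClientState:
    active_slice: int                            # the one slice this client actively simulates
    slices: dict = field(default_factory=dict)   # active slice + 8 tangent slices, keyed by id

    def is_active(self, slice_id: int) -> bool:
        return slice_id == self.active_slice

    def has_inactive_copy(self, slice_id: int) -> bool:
        return slice_id in self.slices and not self.is_active(slice_id)
```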
Client #2 is simulating slice #2, and looks like this (note that our tiny 9-slice world wraps around at its edges):
(7)(8)(9)
(1)[2](3)
(4)(5)(6)
Clients 1 and 2 are not in the same slice. At this point, they don't have any other peers to compare with, but they still consistency-check each other! Client 2 asks client 1 whether the change it wants to make to its local map is reasonable, since client 1 has a cached copy of that slice. (In case the player driving client 1 paths there.) Client 1 looks at its copy of slice #2 and uses some game-rules evaluation to see whether the change is reasonable, given the time since the last sync attempt. Client 2 trusts client 1 about the consistency of slice #2 because client 1 is not actively modifying the slice but holds it as an adjacent slice, and is the only other client in the swarm. (It would prefer to ask another client that is actively simulating the same slice for consistency.) This is a failsafe against the player of client 2 using a map editor and memory-poking app to alter the local slice in ways that aren't allowable under the rules. Client 1 returns either a "yes, that looks OK" or a "NO WAY! Do a quorum of other peers, get consensus, and forcibly redownload the slice!" answer.
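As a rough illustration of that exchange, here is how the responding side (client 1 in the example) might evaluate a proposed change against its cached, inactive copy of the slice. The rules_allow callback and the two reply strings are assumptions made up for this sketch.

```python
import time

def validate_remote_change(cached_slice, proposed_change, rules_allow):
    """Run on the peer holding an inactive copy (client 1): judge whether a
    change another client wants to make to that slice is plausible, given how
    long it has been since we last synced it."""
    elapsed = time.time() - cached_slice.last_sync
    if rules_allow(cached_slice, proposed_change, elapsed):
        return "OK"                 # "yes, that looks OK"
    return "REJECT_AND_RESYNC"      # "NO WAY!": quorum, consensus, forced redownload
```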
Now, let's throw in client #3. It is simulating slice #9.
(5)(6)(4)
(8)[9](7)
(2)(3)(1)
Client #2 now asks clients 1 and 3, since both have the same level of trust, if the change it wants to make is OK. Clients 1 and 3 first compare hashes to confirm that their inactive copies of the slice hold the same data. Then they do the sanity check. They agree the change is OK. Client 2 allows the change and continues, then tells clients 1 and 3 the new data and the new hash.
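A sketch of that three-step quorum (compare hashes, sanity-check, broadcast the result) might look like the following; the peer methods ask_hash, ask_validate, and push_update are assumed interfaces, not anything real.

```python
def quorum_change(adjacent_peers, slice_id, change, new_data, new_hash):
    # 1. The peers holding inactive copies should agree on the slice contents.
    hashes = {peer.ask_hash(slice_id) for peer in adjacent_peers}
    if len(hashes) > 1:
        return "RECONCILE_STALENESS"   # handled by the staleness walk described next

    # 2. Each peer sanity-checks the change against the game rules.
    approvals = sum(1 for peer in adjacent_peers
                    if peer.ask_validate(slice_id, change) == "OK")
    if approvals * 2 <= len(adjacent_peers):
        return "REJECTED"              # no >50% majority: refuse the change

    # 3. Majority approved: apply locally, then push the new data and hash out.
    for peer in adjacent_peers:
        peer.push_update(slice_id, new_data, new_hash)
    return "ACCEPTED"
```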
If clients 1 and 3 disagree about the data, they check their timestamps to see who synced last. The one with the stalest data checks against the one with the next-stalest data, to see whether the change between those two is acceptable. If it is, it syncs with the next-stalest data from its list of peers, and they check again. This works like a bubble sort: each level of staleness is checked for allowability, and the peers sync up as they go. If a consistency error is found, it gets flagged and an emergency quorum is called. If more than 50% of connected peers say the change is OK, the change is accepted; otherwise, the swarm decides that the newer versions are in error and redacts those changes, until a verifiably consistent block is shared between all peers. This will usually only happen when peers enter the 'committee' with very old data (say, they were offline for a few days), and it shouldn't go more than 2 or 3 passes deep under normal conditions.

This mechanism allows stale peers to act as watchdogs against concerted efforts to break the game, using majority rule. (Say a cheater jumps to a slice that has no nearby peers, with 4 special clients he wrote himself that lie like politicians. They overrule the consistency check of the other, less trusted peers, and institute a radical change. However, the only reason nobody is in that area of the game world is that it is 1:15 in the morning. The area is actually quite busy at 8 in the morning, 7 hours later, and this is a griefing attack. When the next set of clients comes online, nearby peers tell them that they are inconsistent, so they begin arbitration. They catch the radical consistency error, they vastly outnumber the griefing peers, and the griefer's changes get forcibly redacted.)
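One way to sketch that staleness walk, assuming each snapshot carries a last_sync timestamp and that transition_allowed and all_peers_vote stand in for the rules check and the emergency quorum:

```python
def reconcile_staleness(snapshots, transition_allowed, all_peers_vote):
    """Walk disagreeing copies of a slice from stalest to freshest, checking
    each step of history for an allowable change, like passes of a bubble sort."""
    ordered = sorted(snapshots, key=lambda s: s.last_sync)   # stalest first
    for older, newer in zip(ordered, ordered[1:]):
        if transition_allowed(older, newer):
            continue                          # this step of history is legal; sync up
        # Consistency error: flag it and put it to the whole connected swarm.
        if all_peers_vote(older, newer) > 0.5:
            continue                          # majority says the newer data is fine
        return ("REDACT_TO", older)           # roll back to the last verified state
    return ("CONSISTENT", ordered[-1])
```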
Peers that are simulating the same world slice have a similar mechanism to retain consistency and synchronicity with each other. Since these need to be as close to real time as possible, they quickly identify which peers they have the fastest ping times with, and form as efficient a mesh as they can. Each peer wishing to update the slice asks its fastest peers whether that is OK or not, with a limit of 10 peers. Majority rules, with a second consistency check when each of those clients asks its own fastest neighbors whether the change is OK, and so on. Changes get checked redundantly against the rules as they "ripple" through the mesh. The granularity of the decision window is "lowest end-to-end latency * 2, plus 50 ms". There is a maximum of 50 clients per slice. Once 50 peers enter the mesh, requests to simulate that slice are rejected until a peer disconnects or moves to simulating a different slice. This prevents the overhead from crippling the game as it struggles under the bureaucracy.
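The numbers in that paragraph (fastest 10 peers, the latency-based window, the 50-client cap) could be wired up roughly like this; the peer ping_ms field is an assumption for the sketch.

```python
MAX_MESH_PEERS = 50   # hard cap on clients actively simulating one slice
QUORUM_FANOUT = 10    # each update is first checked with the 10 fastest peers

def pick_quorum(mesh_peers):
    """Choose the lowest-latency peers to ask about a proposed update."""
    return sorted(mesh_peers, key=lambda p: p.ping_ms)[:QUORUM_FANOUT]

def decision_window_ms(mesh_peers):
    """Granularity of the update window: lowest end-to-end latency * 2, plus 50 ms."""
    return min(p.ping_ms for p in mesh_peers) * 2 + 50

def can_join_slice(current_mesh_size):
    """Reject new simulation requests once the slice mesh is full."""
    return current_mesh_size < MAX_MESH_PEERS
```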
The order of precedence for valid data goes like this (a rough sketch of this ranking appears after the list):
Actively simulating the slice, with >50% majority approval of the other simulating clients.
Adjacent simulator, with >50% majority approval of the other adjacent simulators.
Old, formerly adjacent simulator with stale data, working out the change's reasonability with other stale, formerly simulating clients, again with >50% approval.
And, in the case where none of those are available (the client is the ONLY simulator!), it accepts its own data as gospel.
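Encoded as a single ranking, that precedence order could look something like this sketch (the names are illustrative; lower rank wins):

```python
from enum import IntEnum

class SourceRank(IntEnum):
    ACTIVE_SIMULATOR_MAJORITY = 0    # actively simulating, >50% of co-simulators agree
    ADJACENT_SIMULATOR_MAJORITY = 1  # adjacent slice holders, >50% of them agree
    STALE_FORMER_SIMULATOR = 2       # formerly adjacent, stale data, >50% of that group
    SELF_ONLY = 3                    # no other simulator exists: own data is gospel

def best_source(candidates):
    """Pick the most authoritative available data source; lower rank wins."""
    return min(candidates, key=lambda c: c.rank)
```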
To do this efficiently, many of the arbitration routines will have to run in separate threads and make full use of a multicore system, to keep the main thread from being bogged down by adjacent clients doing consistency checks against a client's inactive slices.
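As a very rough sketch of that idea, incoming consistency checks could be pushed onto a worker pool so the simulation thread never blocks on them; the request shape and validate callback here are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

# Size the pool to the cores left over after the simulation thread(s).
arbitration_pool = ThreadPoolExecutor(max_workers=4)

def handle_peer_check(slice_id, proposed_change, validate):
    """Called when an adjacent client asks us to sanity-check one of our
    inactive slices; the work runs in the pool, off the main simulation loop."""
    return arbitration_pool.submit(validate, slice_id, proposed_change)
```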
Much like with BitTorrent, the client's bandwidth use will be out of this world, as downloads of fresh slices go out to updating clients and consistency-check messages swish around. This is another reason for the peer limit on the quorum nexus of a proposed update on an active slice, and for the limit on the number of active peers inside an active slice. The bandwidth required to maintain consistency would grow roughly quadratically with each additional active client if uncapped, since every peer would end up exchanging checks with every other peer!
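A quick back-of-the-envelope sketch of why the caps matter (the numbers are made up for illustration): if every active peer proposes one update per window, an uncapped all-to-all check grows with the square of the peer count, while the fanout cap keeps it linear.

```python
def checks_per_window(n_peers, fanout=None):
    """Consistency-check messages per window if every peer proposes one update:
    uncapped means asking every other peer, capped means asking only `fanout` of them."""
    asked = (n_peers - 1) if fanout is None else min(fanout, n_peers - 1)
    return n_peers * asked

print(checks_per_window(50))              # uncapped: 50 * 49 = 2450
print(checks_per_window(50, fanout=10))   # capped:   50 * 10 = 500
```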
For this kind of thought experiment, the world being simulated is an interactive sandbox, like DF.