I actually completely forgot what my question was...
It was probably something about what would need to fall into place in order for worlds to become significantly larger, so that we aren't talking about countries or continents the size of Europe... but rather world-spanning games.
Assuming there are plans for that ever to be possible
Well, one factor is increased computing speed and memory size. Possibly 64-bit and/or parallelization to make it run faster.
ToadyOne talked about parallelization and doesn't believe it'll offer much to DF. That topic has been discussed to death elsewhere, though. It generally ends with folks saying that because DF wasn't made with parallelization from the start, adding it now would be a project on the scale of restarting DF from nearly scratch.
Has the difference between multi-threading/concurrency and parallelization been discussed? From what I've seen, we've all talked to death the fact that Dwarf Fortress wasn't designed for multi-threading... But (I don't know about anyone else) I didn't know the difference between parallelization and concurrency until recently, when I saw this:
http://channel9.msdn.com/Events/CPP/C-PP-Con-2014/Overview-of-Parallel-Programming-in-CPP

If I understand that talk correctly, performing the same calculation (e.g. pathfinding, temperature updates, maybe AI choices) on massive sets of data at once (e.g. every creature on the map, every object and tile on the map) simply isn't suitable for multi-threading (concurrency) regardless of program design, but is very much suited to parallelization.

Basically, as long as no two objects within the same type of calculation depend on the results of each other's current calculation (only on the state from the previous tick), and as long as there isn't more if-else branching than actual calculating, it might be as simple as flagging the loops in question to compile to code that tells the processor, in effect, "See how we're performing the same calculation on all the entries in this list? Feel free to run as many of them at the same time as you can." Processors have a lot of different non-thready ways to achieve that.

(Of course, it's possible that as time goes on, compilers will get good at spotting those situations without any help from the programmer, and that sort of code will end up happening automatically. But I imagine that if a loop would qualify except for one thing that makes it not valid/safe/whatever, explicitly asking for parallelization would prompt the compiler to tell you what you need to correct in order for it to happen.)

Just my thoughts after seeing that talk, anyway; I'm sure Toady will figure out what will work best at some point when he looks at optimizations.
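
To make the "flag the loop" idea concrete, here's a minimal sketch in C++ using OpenMP, which is one of the mechanisms talks like that cover. Everything in it is hypothetical for illustration: the Map struct, the field names, and the toy temperature rule have nothing to do with DF's actual code. The point is just that every tile reads only last tick's state and writes only its own slot, so no iteration depends on another's current result and the compiler/runtime is free to run them simultaneously:

// Hypothetical sketch, not DF code: a data-parallel temperature pass.
#include <cstddef>
#include <utility>
#include <vector>

struct Map {
    int width = 0, height = 0;
    std::vector<float> tempPrev;  // temperatures from the previous tick (only read here)
    std::vector<float> tempNext;  // temperatures being computed this tick (only written here)

    float prev(int x, int y) const {
        return tempPrev[static_cast<std::size_t>(y) * width + x];
    }
};

void updateTemperatures(Map& map)
{
    // The "flag": with OpenMP enabled (e.g. compiling with -fopenmp), this pragma
    // tells the compiler the iterations are independent, so it may spread them
    // across cores and/or vectorize them. Without OpenMP it is ignored and the
    // loop simply runs serially, producing the same result.
    #pragma omp parallel for collapse(2)
    for (int y = 1; y < map.height - 1; ++y) {
        for (int x = 1; x < map.width - 1; ++x) {
            // Each tile drifts toward the average of its neighbours' previous-tick
            // temperatures; nothing here reads another tile's current-tick result.
            float avg = 0.25f * (map.prev(x - 1, y) + map.prev(x + 1, y) +
                                 map.prev(x, y - 1) + map.prev(x, y + 1));
            map.tempNext[static_cast<std::size_t>(y) * map.width + x] =
                map.prev(x, y) + 0.1f * (avg - map.prev(x, y));
        }
    }
    // Double-buffer swap: this tick's output becomes next tick's read-only input.
    std::swap(map.tempPrev, map.tempNext);
}

(Whether DF's real update loops actually have that kind of tick-to-tick independence is a separate question, of course; this is just what the mechanism looks like when they do.)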