I'm assuming this is in relation to my post; if not, just ignore me.
Pathing really isn't the huge slowdown that everyone thinks it is. The proof is very simple: stop everything in your fort and set all your dwarves to do refuse hauling. When the entire job list is clear, note the fps, then mark a number of items to be dumped roughly equal to your current idler count. After a little bit you should notice a brief freeze; that is all of the pathing requests being handled at once. Not exactly optimal, but keep watching the fps as the dwarves go to work. It should return to the same number it was when they were all idle, because no more pathing calculations are done unless a traffic jam ensues.
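To illustrate why the fps recovers after that one freeze, here's a minimal sketch of the behavior I'm describing. This is purely my assumption about how it could work, not anything taken from DF's actual code: the computed path gets cached on the creature and is only recomputed when the next step turns out to be blocked.

```cpp
#include <deque>

struct Tile { int x = 0, y = 0; };

// Stand-ins for the real map query and the expensive A* search; both hypothetical.
bool is_walkable(const Tile&) { return true; }
std::deque<Tile> find_path(Tile from, const Tile& to) {
    std::deque<Tile> path;   // naive straight-line path, just to have something to cache
    while (from.x != to.x) { from.x += (to.x > from.x) ? 1 : -1; path.push_back(from); }
    while (from.y != to.y) { from.y += (to.y > from.y) ? 1 : -1; path.push_back(from); }
    return path;
}

struct Creature {
    Tile pos;
    std::deque<Tile> cached_path;   // filled once when the job is taken
};

// Called every simulation tick for a creature with an active job.
void step_toward(Creature& c, const Tile& goal) {
    if (c.pos.x == goal.x && c.pos.y == goal.y) return;  // job site reached
    if (c.cached_path.empty()) {
        c.cached_path = find_path(c.pos, goal);   // the one expensive call: the "freeze"
        return;
    }
    Tile next = c.cached_path.front();
    if (!is_walkable(next)) {
        c.cached_path = find_path(c.pos, goal);   // re-path only on a traffic jam
        return;
    }
    c.pos = next;
    c.cached_path.pop_front();                    // normal ticks are just a cheap pop
}

int main() {
    Creature dwarf;                      // hauler starting at (0,0)
    const Tile stockpile{6, 3};
    for (int tick = 0; tick < 12; ++tick)
        step_toward(dwarf, stockpile);   // one expensive call, then cheap pops
}
```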
It's not the pathing call itself I'm talking about bypassing, but the associated loading of each creature's data into processor memory just to check whether pathing needs to be called at all. I'm assuming that each frame every creature, from all your dwarves to any visitors (hostile or not) and all the critters on your map, is loaded up, some data is checked to see whether it needs to path, move, etc., and then it is unloaded. If something is caged it reasonably shouldn't need to be loaded at all, since it isn't able to do anything, and that loading takes time.
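A rough sketch of the kind of bypass I mean, with made-up names since none of us have seen DF's internals: keep caged creatures in a separate dormant list so the per-frame loop never touches them, and only pay a cost on the rare cage/uncage event.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

struct Creature {
    bool caged = false;
    // ... position, job, needs, etc.
    void update() { /* pathing checks, movement, job logic */ }
};

// Hypothetical world state; the split into two lists is the idea being argued,
// not a description of how DF actually stores creatures.
struct World {
    std::vector<std::unique_ptr<Creature>> active;    // walked every frame
    std::vector<std::unique_ptr<Creature>> dormant;   // caged, never touched

    void tick() {
        for (auto& c : active)
            c->update();           // only creatures that can act pay the cost
        // dormant creatures cost nothing per frame
    }

    // Moving a creature between lists happens only on the rare cage event.
    void cage(std::size_t i) {
        active[i]->caged = true;
        dormant.push_back(std::move(active[i]));
        active.erase(active.begin() + static_cast<std::ptrdiff_t>(i));
    }
};

int main() {
    World w;
    for (int i = 0; i < 30; ++i)
        w.active.push_back(std::make_unique<Creature>());   // thirty dogs
    w.cage(0);    // one trip to a cage, then it never costs anything again
    w.tick();     // the per-frame loop only walks the 29 still-active creatures
}
```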
A good test case for this is to gen a world, copy the save, and load up an embark with just seven dwarves, then do the same in the copy at the exact same location (so no terrain difference affects the numbers) with seven dwarves and thirty dogs. The second is slower by a measurable margin, about 10 fps on my system, just because it takes extra time per frame to check each of those creatures; tossing them immediately into a cage only gives me back 3 fps.
Playing with the gfps tells me something similar is done by the display code too, since dropping the gfps to 1 gives me huge fps increases at the cost of easily tracking what's going on visually. I keep gfps at 13 explicitly to have a reasonably smooth picture while still getting better fps than the default gfps of 50 allows me.
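My mental model of the main loop, and it's only a guess at what DF does, looks something like the sketch below: the simulation ticks as fast as it can, while the draw pass (which also seems to walk game objects) only fires when the gfps interval has elapsed, which is why lowering the cap hands time straight back to the simulation.

```cpp
#include <chrono>

// Stubs standing in for the real work; both are hypothetical.
void simulate_tick() { /* creature updates, jobs, flows, ... */ }
void draw_frame()    { /* display pass that also touches game objects */ }

int main() {
    using clock = std::chrono::steady_clock;
    const int g_fps_cap = 13;                             // my preferred value
    const auto frame_interval = std::chrono::milliseconds(1000 / g_fps_cap);
    auto last_draw = clock::now();

    for (int tick = 0; tick < 1000; ++tick) {             // stand-in for "forever"
        simulate_tick();                                  // runs as fast as it can
        if (clock::now() - last_draw >= frame_interval) {
            draw_frame();                                 // only ~13 times a second
            last_draw = clock::now();
        }
    }
}
```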
As ZCM noted, using a prioritized queue to determine which objects need updating is massively faster than checking all items.
I fully agree, but that is one of many fairly minor-to-implement tweaks that would help, as are many of the various ideas thrown around in this thread (devek's 48x48 instead of 16x16, as an example).
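For anyone following along who hasn't used one, here's a minimal sketch of the prioritized-queue idea ZCM described, with entirely made-up names and numbers: objects sit in a min-heap keyed by the tick at which they next need attention, so idle objects cost nothing per tick no matter how many of them exist.

```cpp
#include <cstdint>
#include <iostream>
#include <queue>
#include <vector>

struct Scheduled {
    std::uint64_t wake_tick;   // when this object next needs an update
    int object_id;             // index into whatever object table exists
};

struct Later {
    bool operator()(const Scheduled& a, const Scheduled& b) const {
        return a.wake_tick > b.wake_tick;   // makes the priority queue a min-heap
    }
};

int main() {
    std::priority_queue<Scheduled, std::vector<Scheduled>, Later> queue;
    queue.push({5, 1});
    queue.push({2, 2});
    queue.push({9, 3});

    for (std::uint64_t tick = 0; tick < 10; ++tick) {
        // Only objects whose time has come are touched; everything else costs
        // nothing this tick, no matter how many objects are in the world.
        while (!queue.empty() && queue.top().wake_tick <= tick) {
            Scheduled s = queue.top();
            queue.pop();
            std::cout << "tick " << tick << ": update object " << s.object_id << "\n";
            queue.push({tick + 4, s.object_id});   // reschedule after its update
        }
    }
}
```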
My rule for when to optimize is always two levels. If a class has two child classes, or any function has two levels of calls above it, then it should be considered a lower-level item and should be optimized. I will bend that while writing the particulars, but once a specific algorithm is determined and that lower-level item is clearly defined, I make sure it is pretty close to optimal before moving on.
For something the size of DF, waiting until the other 69% of the features are written to do optimizations would be idiotic. It would also make the game nearly unplayable in the meantime. Consider, if you will, that in 8 years 31% of the game has been written. Toady is writing it faster than when he started, but it will likely be at least another 5 years until 100%. If some good, easy optimizations are found, tried, and learned from now, then Toady is only undoing the mistakes in the first 31% of the game, instead of having to fix the same mistake three times as often.
My experience says it is always easier and less expensive to do something right the first time than it is to do it, take it apart, and do it again. This is one of the few things that seems to be true for all fields of human endeavor and study.
That right there is a big issue, tying closely into G-Flex's immediately preceding post. The September 2008 to April 2010 dev cycle from 0.28.* to 0.31.*, with the stuff that was rewritten and now works in a very different way (largely combat related), made it obvious that Toady wasn't really working from a perspective of figuring out how something should work and then implementing that from the start, so much as "kludge now and fix it when I have to." The latter, while a valid method, isn't a good way to work on a project of this game's massive scope, because you later have to change all sorts of very early code that may or may not still have valid comments. That said, it is a methodology common to many programmers until they take on a really big, complex project that ultimately shows them its very narrow working limits. The real elephant here, much as it may upset the end user to contemplate, is that the longer the project goes without optimization, the more newly implemented systems rely on the old code, and the harder optimization becomes without breaking the other things that depend on how the old implementation works.