[I wrote this earlier, before being pestered with 'real' work, but had to reboot before I could post it. With some minor edits I thought I'd post it anyway, despite being a bit ninja'd by Sir Monko's post, and maybe by what has appeared since.]
How much space should a cache use up? Should it store cell-by-cell movement, or abstract across unarguably open areas? And what search routine might be employed to hunt down which cached paths are affected by a change in the linkages at one particular point on the map? (i.e. find the paths that are now useless, junk them, and immediately recalculate routing for any agents that are affected. The converse, some way for a newly opened route to be recognised as useful to an agent currently going round the previous, longer route, would be an almost miraculous addition, short of complete repathing every time something changes.)
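As a sketch of that "hunt down the affected paths" search, one cheap answer is an inverted index from map cell to the cached paths that cross it, so a change at one cell touches only the paths actually passing through it. Everything below (class name, method names) is hypothetical Python illustration, not any real engine's API:

```python
from collections import defaultdict

class PathCache:
    """Cache of computed paths, indexed by the cells they cross so a
    map edit can invalidate only the paths that actually touch it."""

    def __init__(self):
        self.paths = {}                     # path_id -> list of (x, y) cells
        self.cell_index = defaultdict(set)  # (x, y) -> set of path_ids

    def store(self, path_id, cells):
        self.paths[path_id] = cells
        for cell in cells:
            self.cell_index[cell].add(path_id)

    def invalidate_cell(self, cell):
        """A wall appeared (or a bridge was raised) at `cell`: junk every
        cached path through it and return their ids so the owning agents
        can be told to repath."""
        affected = self.cell_index.pop(cell, set())
        for pid in affected:
            for c in self.paths[pid]:       # unhook the dead path everywhere
                if c != cell:
                    self.cell_index[c].discard(pid)
            del self.paths[pid]
        return affected
```

Note this only answers the easy direction (closures); the "miraculous" converse, noticing that an opened cell improves some agent's current route, has no such index to consult.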
Path caches with dynamic awareness could of course carry multiple branchings in their definitions, keyed on whether certain features are open, plus a flag to alert an agent already on the path ("if (not yet past <feature>) && (past <lastbranchpoint>) then retrace to <branchpoint> and try <otherbranch>"), with various alternate handlers for generating brand-new paths, or for re-using/reversing various parts and swapping in other alternatives (with or without testing for newly opened 'alternate alternatives').
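That flag-handler could be sketched as below, with indices along the cached path standing in for the <feature> and <lastbranchpoint> markers. A minimal illustration, with made-up names; the "generate a brand-new path" handlers are just labelled outcomes here:

```python
def react_to_closure(agent_pos_index, path, feature_index, branch_point_index):
    """Decide what an agent on cached `path` (a list of cells) should do
    when the cell at `feature_index` becomes impassable. Indices are
    positions along the path, with branch_point_index < feature_index."""
    if agent_pos_index >= feature_index:
        # Already past the closed feature: the rest of the path is fine.
        return ("continue", None)
    if agent_pos_index > branch_point_index:
        # Past the last branch point: retrace back to it, then take the
        # alternate branch stored alongside the main one.
        retrace = path[branch_point_index:agent_pos_index + 1][::-1]
        return ("retrace_then_branch", retrace)
    # Not yet at the branch point: just take the other branch on arrival.
    return ("take_branch_early", None)
```

The returned retrace list runs from the agent's current cell back to the branch point, ready to be walked before the alternate branch is appended.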
And how about optimising the cached paths (for memory, if not for calculation overhead)? Compare each newly generated path with pre-existing ones and check for shared elements; as soon as shared partial paths can be identified, store them as 'sub-solution' paths. These could drastically shorten future path-searches, or at least establish that the shortest paths across one particular tree of travel are longer than those along another (or are only valid solutions for when the bridge is raised, the door is locked, or an invading army is imposing its presence on the alternate route).
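A minimal sketch of that shared-element comparison, leaning on the fact (noted further down) that a single route never repeats a cell, so each cell's position in a path is unique. The function name and the O(len_a × len_b) scan are my own illustration:

```python
def shared_subpaths(path_a, path_b, min_len=3):
    """Find maximal runs of consecutive cells shared by two cached paths
    (same cells, same order): candidates for storage as reusable
    'sub-solution' segments. min_len drops trivial one- or two-cell
    overlaps. Assumes no cell repeats within a single path."""
    pos_b = {cell: i for i, cell in enumerate(path_b)}
    runs, i = [], 0
    while i < len(path_a):
        j = pos_b.get(path_a[i])
        if j is None:
            i += 1
            continue
        # Extend the match as far as both paths keep agreeing.
        k = 0
        while (i + k < len(path_a) and j + k < len(path_b)
               and path_a[i + k] == path_b[j + k]):
            k += 1
        if k >= min_len:
            runs.append(path_a[i:i + k])
        i += k
    return runs
```

Reversed shared runs (the two paths crossing the same corridor in opposite directions) would need a second pass against the reversed path; this sketch only catches same-direction overlap.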
Some of my initial thoughts this morning (I'm not awake yet[1]) were that something LZW-like would be simple to implement: 'stream' each newly established route through to build a look-up table, with later routes then discovering the shared lookups. But it wouldn't work by going "we already know this much, let's see how much more we know" at the encoding end as we stream the points through (after all, there will be no repetition of points within a single route, unless there's a 'mesh' of possible routes in a dynamics-aware multi-route version that isn't internally optimised); it would be more about taking each past route into account and identifying shared routing. That's a particular strength in tunnels (although I think it'd be a mistake to make tunnels the end-game target).
(The main strength of LZW, that the encoder need not pass the encoding table to the decoder because each can build it dynamically while sending/receiving the stream, is also moot here, since we aren't trying to reduce 'transmission overhead'. So we would probably fall back on standard 'pre-processed' compression techniques as inspiration: search the raw information for commonalities that are frequent or extrapolatable, and build our 'sub-solution' library from those, mixing this in with the actual routefinding. While this is not, in itself, part of the routefinding algorithm, a routefinder that has knowledge of the common paths can avoid some degree of repetitious calculation on novel whole-routes that closely match a previously understood one, e.g. workshop-to-stockpile carrying, for each stockpile tile and possibly a grouping of similar workshops nearby, or just progressive retrieval of stones from a mined-out area. But done, of course, so as to avoid local-minima situations.)
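One way to read that "pre-processed compression as inspiration" idea is an offline pass over a batch of cached routes: count shared fixed-length windows of cells and keep any window seen in more than one route as a sub-solution, keyed by its endpoints so a later search can splice it in. A hedged sketch under that assumption (a real pass would merge overlapping windows into maximal shared runs rather than using a fixed length):

```python
from collections import Counter

def build_subsolution_library(routes, seg_len=4):
    """Post-process cached `routes` (lists of cells), keeping every
    length-`seg_len` window that appears in at least two routes as a
    'sub-solution' keyed by its (start, end) cells. Since a route never
    repeats a cell, a window can occur at most once per route."""
    seen = Counter()
    for route in routes:
        for i in range(len(route) - seg_len + 1):
            seen[tuple(route[i:i + seg_len])] += 1
    library = {}
    for seg, count in seen.items():
        if count >= 2:                       # shared by at least two routes
            library[(seg[0], seg[-1])] = list(seg)
    return library
```

A routefinder holding this library can then treat each stored segment's endpoints as a single pre-solved hop, which is exactly where the repetitious workshop-to-stockpile calculations should collapse.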
[1] Now it's past dinnertime, but I still don't think my mental reserves are up to strength.
justify all that effort for this engine. (It also couldn't handle reverse-matches.)