Yeah. Given the small city size, readily available route capacity & traffic data, and (usual-case) relatively sparse intersections, it's the sort of problem a CS student halfway through their first year could solve for homework. DF's graph is relatively open and relatively dense (paths traverse hundreds to thousands of intersections); it's actually a much harder case because of that. While SC has one or two orders of magnitude more agents, it's actually incredibly easy in comparison.
Compare:
DF:
100 agents
5x5 embark = 57600 tiles per layer, with a large number of layers resulting in the actual number being on the order of 200k+; each the equivalent of an intersection
obstacles/path heuristics can change once per frame
SC:
200k agents (let's say it's this; due to aforementioned worker vs population shenanigans, it's much lower, but let's assume they didn't bullshit things)
Assuming a grid like that in the all-residential video, approximately 8x11 intersections, or around 100 for a fully built city, maybe 200 max without intentionally screwing with pathfinding calculations.
obstacles/path heuristics can change once per several seconds (time to get from one intersection to the next)
The other thing which helps with SC pathfinding is that high volume. Pathfinding for high-volume, low-density traffic is much easier than for low-volume, high-density traffic. While the latter can mostly only be optimized with tricky pathfinding algorithm modifications, the former can be handled very easily through a combination of precomputing and batching together agents. A simple analysis of data they already have (after all, they even display the data for the player) can figure out large-scale traffic patterns, adjusting road preference values for every agent at once based on traffic concerns; the cost of this depends only on the number of intersections, not the number of agents. With around 100-200 intersections, this can be done in negligible time.
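A rough sketch of that precomputation, with made-up road names and a simple hypothetical weighting formula (the actual game's heuristic is obviously unknown):

```python
# Hypothetical sketch: recompute one preference weight per road segment
# from observed traffic, once per update tick. The work scales with the
# number of road segments (~hundreds), not the number of agents (~200k).

def update_road_weights(roads):
    """roads: dict of road id -> {'capacity': int, 'traffic': int}.
    Returns dict of road id -> preference weight in (0, 1]."""
    weights = {}
    for road_id, road in roads.items():
        # Higher traffic-to-capacity ratio -> lower preference weight.
        load = road['traffic'] / max(road['capacity'], 1)
        weights[road_id] = 1.0 / (1.0 + load)
    return weights

roads = {
    'main_st':  {'capacity': 100, 'traffic': 300},  # congested
    'side_ave': {'capacity': 50,  'traffic': 10},   # nearly empty
}
weights = update_road_weights(roads)
# The near-empty side_ave ends up with a higher preference weight
# than the congested main_st.
```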
Secondly, all traffic travelling from intersection A to intersection B, with options to take intersections C, D, or E to get there, can be processed at once. A value for each road can be computed just once and shared by every agent making that trip. Using this on high-volume roads would very quickly cover a big chunk of the necessary traffic data. Beyond this, there's a variety of low-hanging fruit for figuring out the value of roads for the agents travelling them.
Third, on a large-scale simulation like this, stochastic models come into their own. Just not in their current model, apparently.
Basically, a stochastic model takes those relative path values calculated before and applies them as the agents decide which intersections to go through, but does so in a random manner. The current system is clearly deterministic: if (value1 > value2), take route1. A stochastic method instead randomly selects which route to take, weighted by the value of the routes. If route1 has a value of 6 and route2 has a value of 4, then 60% of the time agents take route1 and 40% of the time they take route2. These values are themselves updated every once in a while with changes to roads (deletion/creation of roads) as well as traffic (high traffic : capacity ratio = lower value).
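The weighted selection described above is a one-liner in most languages; a minimal sketch using the same 6/4 example:

```python
import random

# Hypothetical sketch: pick a route with probability proportional to
# its value, instead of always taking the highest-valued route.

def pick_route(routes):
    """routes: dict of route name -> value. Weighted random choice."""
    names = list(routes)
    return random.choices(names, weights=[routes[n] for n in names])[0]

routes = {'route1': 6, 'route2': 4}
counts = {'route1': 0, 'route2': 0}
for _ in range(100_000):
    counts[pick_route(routes)] += 1
# Over many draws, roughly 60% of agents end up on route1 and
# 40% on route2, spreading load instead of dogpiling one road.
```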
In essence, most of the traffic should act like water flowing through pipes, rather than direct-route pathfinding.
*always rambles more