I doubt any threading, hyper or otherwise, is likely in the near term. Retrofitting complex single-threaded code to work with multiple threads is VERY difficult.
No it's not. A for loop doing independent operations is not difficult to port to multiple threads. Such as, say, the pathfinding? Run DF with a sampling profiler: the pathfinding is eating most of the resources.
Sure, it's not optimal, but it's rather quick to do, and could save a lot of time.
Then you should mention that in the Pathfinding thread, as you may have come up with a breakthrough. We've had dozens, if not hundreds, of multithreading threads that, to be honest, led nowhere in particular, because most forumites came to the agreement that it was ridiculously hard for not enough gain, and Toady had more important stuff to do.
And I suggest you elaborate for added credibility.
If you have a compiler that supports, say, OpenMP, it's just a matter of adding a preprocessor directive before the loop.
#pragma omp parallel for default(shared) private(i, j) or somesuch
That would, however, only work if the pathfinding calls are independent from each other. I don't know how his code works, so yeah.
But that was just an example: any for loop with independent operations can be optimised that way.
I had some benchmarks with matrix operations (for work): it doesn't go twice as fast, far from it (edit: never mind, it does), but there is still a notable increase in speed (and that's with 2 cores). The increase gets better as the loop gets bigger. For very small loops, multithreading actually wastes time.
The graph is for 10*x X 100 matrix transpositions; the speedup there isn't very noticeable.
If you take the square root of each element of a matrix, however, multithreading it makes it run almost twice as fast.
Multiplication is also almost twice as fast for big matrices (can't find the graph).
Multithreading addition actually slows down the program.
So if he uses Euclidean distance, well, that part could go far faster.