To clarify, I wasn't saying that multithreading doesn't help programs run faster. What I was saying is that just slapping threads into a program and splitting up tasks, without some very careful planning about memory access, ends up not working. Saying what to do about it is impossible without a complete picture of which algorithms are being used. Getting real, noticeable improvements out of multithreading is
hard. Those improvements can be amazing, but they're capped by the number of cores available, so even if Toady did some insane genius programming that carefully respected caching issues, an ~8x speedup on some futuristic eight-core machine is the best you could hope for from multithreading alone. If Toady instead focused on general algorithmic improvements, a similar speedup might be possible.
Not to mention that traditional multithreading isn't the only new technology that might be helpful. Mathematical computing packages that exploit the architecture of modern GPUs can produce some pretty amazing results, IF you have the right kind of problem to solve.
Anyway, I'm just saying that it isn't as cut-and-dried as everyone seems to make it out to be. Sure, running on multiple cores can bring some great improvements, but actually getting a program to that point is very difficult to do well, and making that kind of time commitment at this point in DF's lifecycle would be, in my opinion, more trouble than it's worth. Multithreaded programs are great when done well, but they're hard to code.
Edit: If the OP would like me to move this conversation, I understand.
Yeah, we are pretty far off topic here.