Toady One isn't into multithreading the game either.
If I recall correctly, it isn't that he's against multithreading as such; it's that he doesn't know enough about multithreaded coding to do it without it turning into massive work on his part, better spent on other stuff. (?)
And the graphics stuff was done by Baughn :>
Edit: Not that your phrasing is necessarily incompatible with the real picture, but other people could get the idea that Toady just doesn't like multithreading for no reason.
He said in one of the DF Talks that he feels there would be little to gain for the amount of work it would take to multithread Dwarf Fortress.
I am disinclined to accept that thinking; DF most certainly is computationally heavy and memory bound, but not all multiprocessing architectures are created equal. Specifically, there are major differences in how memory is handled between processors in SMP, NUMA, and CUDA-style architectures.
Intel and AMD consumer processors use SMP, which can become memory bound over the shared memory bus; NUMA and CUDA architectures are significantly less prone to this. Many of DF's computations appear to be purely mathematical, in particular the weather, heat, and fluid computations, so making use of the recent trend toward GPU computing could very well give a tremendous payoff.

(And no, saying that there is too much disparity between GPUs on the market for proper targeting is a canard. There are only two major types, and both major x86 chip makers are now embedding such silicon on the CPU die itself. By the time Toady gets around to trying something like that, on-die GPUs will have been a standard feature for years. Saying that an on-die GPU won't give the same performance boost to a memory-bound application as a discrete one is at least a proper argument, but it discounts abusing the on-die GPU in clever ways: it can serve both as a processor for the math functions and as a rather large pool of what would essentially be extra CPU registers sitting on the same die as the CPU's execution units, meaning some of his computations need never touch outside memory at all, reducing the memory-bound nature of the problem. That pressure can be further reduced by the increased cache available to the program when the work is divided over multiple discrete cores, or rather by the increased efficiency of cache use. The repetitious nature of DF's hard math computations would lend itself quite well to heavy cache munching like that.)
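To make the heat/fluid point concrete, here is a minimal sketch of the kind of kernel I mean: one Jacobi-style diffusion step over a 2D tile grid, written in CUDA. The grid size, the 0.25 weighting, and every name in it are my own made-up illustration, not anything from DF's actual code:

```cuda
// Hedged sketch, not DF code: one Jacobi-style heat-diffusion step over a
// 2D tile grid. Each thread owns one tile; it reads last tick's buffer and
// writes this tick's, so no locks or atomics are needed anywhere.
#include <cstdio>
#include <utility>
#include <vector>
#include <cuda_runtime.h>

constexpr int W = 256;  // map width in tiles (made-up size)
constexpr int H = 256;  // map height in tiles (made-up size)

__global__ void diffuse_step(const float* in, float* out, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    int i = y * w + x;
    if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {
        out[i] = in[i];  // keep the map edge fixed
        return;
    }
    // New value is the average of the four neighbours from the old buffer.
    out[i] = 0.25f * (in[i - 1] + in[i + 1] + in[i - w] + in[i + w]);
}

int main()
{
    const size_t bytes = W * H * sizeof(float);
    std::vector<float> host(W * H, 0.0f);
    host[(H / 2) * W + W / 2] = 1000.0f;  // one very hot tile in the middle

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, host.data(), bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    for (int step = 0; step < 100; ++step) {
        diffuse_step<<<grid, block>>>(d_in, d_out, W, H);
        std::swap(d_in, d_out);  // ping-pong the two buffers between steps
    }

    cudaMemcpy(host.data(), d_in, bytes, cudaMemcpyDeviceToHost);
    std::printf("centre tile after 100 steps: %f\n", host[(H / 2) * W + W / 2]);
    cudaFree(d_in);
    cudaFree(d_out);
}
```

The key property is that each tile depends only on last tick's buffer, so every thread runs without locks; that is exactly the shape the weather, heat, and fluid passes would need to have.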
Would it be an epic shitton of work? Certainly. Is there any real payoff to putting it off until later? NO. (1)
Is there a serious disadvantage to putting that kind of work off until later? YES. (1)
Will Intel and AMD return to the "Faster silicon!" marketing direction? NO. (2)
Will Toady eventually have to switch to SMP? YES (2), if he wants DF to run at faster than 3 FPS on a gaming rig.
(1) As the complexity of his logic increases, finding ways to salvage the work already done when porting it to SMP-type architectures becomes increasingly difficult. If it is hard now, it will only be at least geometrically harder later. That means he is not saving himself any effort by avoiding SMP now, at the "early" side of development (given his ambitious goals). He might think he is, but he isn't.
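To show what "salvageable" structure looks like, here is a hedged sketch (the Tile struct and the update rule are invented, nothing from DF) of a tick written as a pure function of the previous tick's state. Code already factored this way costs almost nothing to hand to N threads later:

```cpp
// Hedged sketch: one simulation pass split across worker threads with
// std::thread. Each band reads only last tick's buffer and writes only its
// own rows, so there is no shared mutable state and no locking.
#include <cstdio>
#include <thread>
#include <vector>

struct Tile { float heat; };  // stand-in for whatever per-tile state DF keeps

// Update rows [y0, y1): reads only `prev`, writes only its own rows of `next`.
void update_band(const std::vector<Tile>& prev, std::vector<Tile>& next,
                 int w, int y0, int y1)
{
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < w; ++x) {
            int i = y * w + x;
            next[i].heat = prev[i].heat * 0.99f;  // toy cooling rule
        }
}

int main()
{
    const int w = 512, h = 512;
    const unsigned n = std::thread::hardware_concurrency()
                           ? std::thread::hardware_concurrency() : 1;
    std::vector<Tile> prev(w * h, Tile{100.0f}), next(w * h);

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t)
        workers.emplace_back([&, t] {
            update_band(prev, next, w, h * t / n, h * (t + 1) / n);
        });
    for (auto& th : workers) th.join();

    std::printf("one tick computed across %u threads\n", n);
}
```

The design choice doing the work here is the double buffer: because `update_band` only reads `prev` and only writes its own rows of `next`, the single-threaded and multi-threaded versions are the same function.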
(2) The whole reason we are getting multi-core CPUs, instead of the ever-faster silicon we got in the 90s, is that we are reaching the thermal and electrical limits of silicon. Unless some new wonder material actually manages to take real market share, we will NOT return to faster silicon as the status quo in processor revisions. Since all the contenders to replace silicon rely either on radically expensive manufacturing processes (monoatomic sheets of tin, for instance) or on highly exotic materials (like synthetic diamond or graphene), that is VERY, VERY unlikely to happen within the next 10 to 20 years. Later than that? Possible, but not in the foreseeable future.

Further, there is already a trend among motherboard makers who specialize in SOHO server boards toward hybrid SMP/NUMA boards: multiple CPU sockets, each intended for a multi-core CPU, where each physical CPU has preferential access to a chunk of the memory bus in a NUMA-like configuration and each chip then acts as SMP over its chunk. As the demand for better memory access increases, given the new direction chip makers have been forced to take, board makers will likely bring these currently server-class ideas down to commodity boards. Making DF SMP-aware now would reap the benefits of that hybrid NUMA architecture relaxing some of the memory-bound I/O issues when that eventuality comes to pass in the hardware market.
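To show what the hybrid NUMA layout buys a program that is already SMP-aware, here is a hedged sketch of "first touch" page placement, assuming the Linux default NUMA policy (pages land on the node of the CPU that first writes them). The sizes and the four-node count are made up, and real code would also pin each thread to a socket (e.g. with pthread_setaffinity_np), which I've left out for brevity:

```cpp
// Hedged sketch of first-touch NUMA placement: each worker initialises the
// same band of tiles it will later keep updating, so under a first-touch
// policy that band's pages end up on the worker's local memory node.
#include <cstdio>
#include <cstring>
#include <thread>
#include <vector>

int main()
{
    const size_t tiles = size_t(1) << 24;  // ~16M tiles of state (made-up size)
    const unsigned nodes = 4;              // pretend: a 4-socket hybrid board
    float* heat = new float[tiles];        // nothing touched yet, no pages placed

    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nodes; ++t)
        workers.emplace_back([=] {
            size_t lo = tiles * t / nodes, hi = tiles * (t + 1) / nodes;
            // First touch: under the default policy, this write is what
            // decides which NUMA node physically holds these pages.
            std::memset(heat + lo, 0, (hi - lo) * sizeof(float));
        });
    for (auto& th : workers) th.join();

    std::printf("placed %zu tiles across %u first-touch bands\n", tiles, nodes);
    delete[] heat;
}
```

After this, a worker that keeps updating the band it initialised will mostly hit memory attached to its own socket instead of fighting over one shared bus.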
Again, I can't really tell Toady his business; I just don't buy that line of reasoning, since I know better.