- Memory is a shared resource with a single memory bus, so only one primary memory access can be in flight at a time. (I haven't kept up to date with hardware development, so the design may have improved to keep multiple accesses in progress simultaneously, each in a different stage. Even so, the basic principle stands: it's not a case of "nobody else is accessing this bank of memory, so I can access it in parallel", because there isn't a separate memory bus from each bank to each core. Spreading data over multiple banks can speed some things up because the banks themselves are slower than the bus, so data you've already requested from one bank can flow while you issue a request to the next bank.) However, modern computers have multiple levels of caches, where at least the outermost level is shared between cores and at least the innermost one is "private" to each core. Caches are much smaller than primary memory, the inner ones in particular, so you can't expect much of your data to be cached.
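To make the cache point concrete, here's a minimal sketch (all names are mine, not from any real codebase) comparing two ways of walking the same flat array. The row-major walk touches memory sequentially, so consecutive accesses land on the same cache line; the column-major walk jumps `N` elements each step and tends to miss. In Python the interpreter overhead mostly drowns out the effect, but in a language like C the sequential walk is typically several times faster for the same work:

```python
N = 1000
# N x N matrix stored as one flat list in row-major order (like a C array)
matrix = list(range(N * N))

def sum_row_major(m, n):
    # Sequential walk: consecutive accesses stay within the same cache lines
    total = 0
    for i in range(n):
        for j in range(n):
            total += m[i * n + j]
    return total

def sum_column_major(m, n):
    # Strided walk: each access jumps n elements ahead, so cache reuse is poor
    total = 0
    for j in range(n):
        for i in range(n):
            total += m[i * n + j]
    return total

# Both traversals compute the same sum; only the memory access pattern differs
assert sum_row_major(matrix, N) == sum_column_major(matrix, N)
```

The point isn't the arithmetic, it's that which of two otherwise identical loops runs faster is decided by the cache hierarchy, not by the CPU core doing the adds.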
- The only thing the OS can do is determine which pieces of information to keep in which caches (and, to some extent, prioritize which processes get to access memory first). You'd probably need optical computers to send multiple pieces of data over the same bus concurrently (by using different wavelengths, as is done today in optical fibers; it might be possible to use different polarizations as well, but I'm not sure about that).
- Moving path finding to a different core will not reduce the CPU consumption of the DF core, but assuming it works, it would allow the thread that's already at 100% CPU to produce a higher FPS (only very early fortresses are held back by the FPS cap; after that DF can't keep the game running at full speed anyway, so any savings would immediately translate into higher speed). That would still be quite useful, however.
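The offloading idea above can be sketched as a producer/consumer pair: the main simulation loop posts path requests to a queue and keeps ticking, while a worker thread answers them. Everything here is hypothetical (`find_path` is a trivial straight-line stand-in, not DF's actual path finder), and note that in CPython the GIL prevents true parallelism for CPU-bound work, so a real version would use a separate process or a native thread in C/C++; the sketch only shows the hand-off pattern:

```python
import queue
import threading

def find_path(start, goal):
    # Placeholder path finder: walks one diagonal/straight step at a time
    # toward the goal (illustrative only, not a real algorithm like A*)
    (x1, y1) = goal
    x, y = start
    path = [start]
    while (x, y) != goal:
        x += (x1 > x) - (x1 < x)
        y += (y1 > y) - (y1 < y)
        path.append((x, y))
    return path

requests = queue.Queue()   # main loop -> worker
results = queue.Queue()    # worker -> main loop

def path_worker():
    # Runs on its own thread; the main loop never blocks on path finding
    while True:
        job = requests.get()
        if job is None:            # shutdown sentinel
            break
        unit_id, start, goal = job
        results.put((unit_id, find_path(start, goal)))

worker = threading.Thread(target=path_worker)
worker.start()

# Main-loop side: submit a request, keep simulating, collect the answer later
requests.put((7, (0, 0), (3, 2)))
unit_id, path = results.get()
requests.put(None)
worker.join()
```

The payoff is exactly the one described above: the simulation thread's time per tick shrinks by whatever path finding used to cost it, which shows up directly as FPS once the game is below the cap.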
Even if the investigation doesn't succeed in improving things, it might still produce interesting results and insights, so it isn't necessarily useless.