How in the hell do you plan on starting or dispatching a thread by switching to it?
Scheduling was actually included in that benchmark: the test was comparing a few Linux kernel optimizations, and Windows was in there just for comparison's sake; it was not a strict hardware test. Another benchmark I've seen shows waits, the slowest method, eating up only ~1.6 microseconds per call (200,000,000 WaitOnes took ~325 seconds). And that was on a 2004-era processor; it should be down into nanosecond territory these days. Sleeping was about 3x faster...
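For anyone who wants to reproduce that kind of number, here's a minimal sketch of the measurement using the Win32 analogue of WaitOne: WaitForSingleObject on an always-signaled event, so every call is a pure kernel round-trip with no actual blocking. The iteration count is scaled down from the quoted 200,000,000; this is my own illustration, not the benchmark that produced the ~325-second figure.

```cpp
// Rough micro-benchmark of kernel-transition cost: wait on an event that is
// always signaled, so WaitForSingleObject returns immediately but still
// crosses into the kernel on every call. Illustration only.
#include <windows.h>
#include <cstdio>

int main() {
    // Manual-reset event created in the signaled state: waits never block.
    HANDLE ev = CreateEvent(nullptr, TRUE, TRUE, nullptr);
    if (!ev) return 1;

    const int kIters = 10'000'000;  // scaled down from the quoted 200,000,000

    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    for (int i = 0; i < kIters; ++i)
        WaitForSingleObject(ev, INFINITE);  // one kernel round-trip per call

    QueryPerformanceCounter(&end);

    double seconds = double(end.QuadPart - start.QuadPart) / double(freq.QuadPart);
    printf("%d waits in %.3f s => %.1f ns per call\n",
           kIters, seconds, seconds / kIters * 1e9);

    CloseHandle(ev);
    return 0;
}
```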
Kernel mode transitions are slow; 1.6 microseconds is a big chunk of time in processor land. But their cost is a drop in the bucket compared to real work costs: each one uses up only 0.00016% of a core's time each second. How many of these threads do you realistically think we're going to be popping off over the course of a second? You severely overestimate thread scheduling costs relative to work costs, and you offer no evidence to back up your claim.
I understand to some degree why you feel the way you do. At some point someone told you, or you read, that context switches, kernel mode transitions, etc. were bad and slow, and you took that to heart. It's true in a lot of situations. Networking, for example: when you've got a set of sockets each receiving a packet from a user every 2 ms, and 1,000 users, then you're looking at 500 * 1000 * 0.00016% = 80% of your CPU being eaten up just in thread costs. That's clearly unacceptable; in that case the overhead is way too costly and a better way should be found.
But when you're talking about a beyond-worst-case scenario of 200 paths needed EVERY frame at 50 FPS, with EVERY one stupidly in its own thread, it's still a far more reasonable 200 * 50 * 0.00016% = 1.6% CPU utilization. But if you can write an algorithm that will calculate 10,000 paths faster than that, then I withdraw my argument.
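To spell the arithmetic out, here's the same back-of-the-envelope math for both scenarios in one place. The ~1.6 µs per-wake figure is the one from the benchmark above; the workload numbers are the ones from this argument, nothing more.

```cpp
// Back-of-the-envelope overhead math for the two scenarios above.
// Assumes ~1.6 microseconds of kernel-transition cost per thread wake-up.
#include <cstdio>

int main() {
    const double kWakeCostSec = 1.6e-6;  // ~1.6 us per wait/wake, per the benchmark

    // Networking scenario: 1,000 users, one packet every 2 ms each
    // => 500 wake-ups per user per second.
    double netWakesPerSec = 500.0 * 1000.0;
    printf("networking:  %.0f wakes/s => %.0f%% of one core\n",
           netWakesPerSec, netWakesPerSec * kWakeCostSec * 100.0);   // 80%

    // Pathfinding scenario: 200 paths per frame at 50 FPS,
    // each (wastefully) on its own thread.
    double pathWakesPerSec = 200.0 * 50.0;
    printf("pathfinding: %.0f wakes/s => %.1f%% of one core\n",
           pathWakesPerSec, pathWakesPerSec * kWakeCostSec * 100.0); // 1.6%
    return 0;
}
```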
What? Lol. A strawman lock is a joke.
Heh
It is indeed a joke... one that went way over your head.
Once your thread enters a wait condition (on Windows/Linux 1:1 threading implementations), it requires the operating system to make it runnable again. Fact.
No one is arguing against this. The disagreement is about whether the cost of doing that is greater or lesser than the cost of doing a unit (or multiple units) of pathfinding work.
Bound threads do not exist in Windows or Linux.
SetThreadAffinityMask called and would like to have a word...
Now, whether this would actually be a good case for that is highly debatable. It depends on how many CPUs you've got, the level of background processes, how often pathfinding is going to run, cache available vs. cache required, etc. I suspect it's probably a slight performance boost on some machines and a drag on others, and not worth doing overall.
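For the record, binding a thread on Windows looks roughly like this. This is just a minimal sketch of the SetThreadAffinityMask API; the worker body and the choice of core are placeholders, not a recommendation to actually do this for pathfinding.

```cpp
// Minimal sketch: pin a worker thread to a single logical core with
// SetThreadAffinityMask. Core index and worker body are placeholders.
#include <windows.h>
#include <cstdio>

DWORD WINAPI Worker(LPVOID) {
    // ... pathfinding (or whatever) would run here ...
    printf("worker running on a pinned core\n");
    return 0;
}

int main() {
    // Create suspended so the affinity is set before the thread ever runs.
    HANDLE thread = CreateThread(nullptr, 0, Worker, nullptr,
                                 CREATE_SUSPENDED, nullptr);
    if (!thread) return 1;

    // Bind the thread to logical core 1 (bit 1 of the mask). A zero return
    // means the call failed, e.g. the mask names no valid processors.
    DWORD_PTR previous = SetThreadAffinityMask(thread, DWORD_PTR(1) << 1);
    if (previous == 0)
        printf("SetThreadAffinityMask failed: %lu\n", GetLastError());

    ResumeThread(thread);                   // let it run, now bound
    WaitForSingleObject(thread, INFINITE);  // join
    CloseHandle(thread);
    return 0;
}
```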
I'm second tier. I wish I had been able to afford to come to America and go to Berkeley or something... oh well. Maybe someday...
Being a decent programmer has little to do with higher education. The classes themselves often border on useless and certainly don't contain anything you couldn't learn on your own. The one truly beneficial thing it does provide, though, is an environment of like-minded people hard at work, and it is through interactions with them that the real learning happens. But that scenario is far from exclusive to big-name American universities.