Yeah, but for a game like DF, having predictable pathing is a great boon for things like goblin computers. For most games it probably isn't necessary, but in some games it is.
I don't know what designs you've seen, but pretty much every goblin computer I've seen runs on straight-line paths only, precisely to avoid issues like this: if there is only one possible path, then the creatures behave identically with and without a nonadmissible heuristic. And if you wanted to claim that even 10% of players would take the ability to build goblin computers over the ability to have 2-3x as many creatures on the map at the same time, then I would certainly have to disagree with you most strongly. Sure, there may be some sort of "factory line" game or something similar that genuinely requires 100% dependable pathfinding (in which case you would use an admissible heuristic), but my main point here is that you, as a developer, need to be aware of the tradeoff you are making when you decide that "perfect pathfinding" is a requirement rather than just "unnoticeably close to perfect pathfinding": it is going to cost you a factor of 2-3x in the maximum number of pathfinding agents you can support.
In truth, off the top of my head, there are only three real scenarios I can see that would not gain at least some benefit from a heuristic like that in graph traversal:
1) Having the absolute shortest path actually is an inflexible requirement. For example, I'm trying to prove a mathematical theorem, or I really want my missile to have the absolute minimum chance of being shot down. (Note: honestly, the only games I can think of where this holds true are "production line"-esque games, and most of those just default to preset paths laid down by the player instead of pathfinding, to give the player exact control without any game assumptions at all.)
2) Other hard constraints limit me to a scenario where the time saved is not necessary. For example, if I know there will never be more than 10 units on the field at once, and the game is never going to be expanded, then I can save my company money by implementing a simpler algorithm that takes less developer time.
3) Prototyping/plan-once-execute-often code. If you're planning routes or similar during program startup and then caching them for later use, spending a little extra time up front can pay off big, because those shortest paths get used again and again. For example, if I were planning the various transport routes between a company's factories, then I would obviously be willing to pay the extra computation time now to have perfect, usable routes for as long as they exist.
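The plan-once-execute-often case can be sketched with an all-pairs precomputation: pay for Floyd-Warshall once at startup, then every later route query is just a table walk. The factory-graph numbers here are made up for illustration.

```python
def floyd_warshall(n, edges):
    """Precompute all-pairs shortest paths once (O(n^3)).
    edges: iterable of (u, v, weight) directed edges over nodes 0..n-1.
    Returns (dist, nxt): dist[u][v] is the shortest cost, and nxt[u][v]
    is the next hop from u toward v, for O(path length) reconstruction."""
    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    nxt = [[None] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = w
        nxt[u][v] = v
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def route(nxt, u, v):
    """Cheap per-query lookup: walk the precomputed next-hop table."""
    if nxt[u][v] is None:
        return None
    path = [u]
    while u != v:
        u = nxt[u][v]
        path.append(u)
    return path
```

The expensive work happens exactly once, which is the whole argument for this scenario: an admissible, exhaustive search is fine when you amortize it over thousands of later lookups.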
People often talk about not spending time optimizing until you need it, but pathfinding and other computationally expensive search problems fall into the category where you should think at least a little about optimization right from the start (unless you have a specific reason not to), because they can very, very quickly balloon outwards to consume your whole program's processing power later.