I read your post again and understand it better now. The problem with your solution is that it'll still cause crashes; it's just that now you're accessing a null pointer instead of a deallocated one.
Some variation on "if (referer.pointer == nullptr) { cleanup(); return; }" would be the optimal way of handling things if you nullify. (Also suggested: changing the pointer to a global 'safe object' that itself safely absorbs all potentially failing calls. Perhaps you also arrange to send a signal back to the callers registered in referers[], if you're in a multithreading situation where that matters.)
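A rough sketch of that null-check-plus-"safe object" idea, with all names invented for illustration (nothing here is DF's actual code):

```cpp
// Hypothetical names throughout; a sketch, not DF's real structures.
struct TrapComponent {
    virtual void apply_temperature(double /*t*/) { /* normal heat/wear logic */ }
    virtual ~TrapComponent() = default;
};

// Global "safe object": every call on it is a harmless no-op, so any referer
// that slipped through the nullification net lands somewhere inert.
struct NullTrapComponent : TrapComponent {
    void apply_temperature(double) override {}  // deliberately does nothing
};
NullTrapComponent safe_component;

// Caller-side guard, per the "if null then cleanup and return" variation.
void temperature_tick(TrapComponent* comp) {
    if (comp == nullptr) {
        // cleanup();   // whatever local bookkeeping this caller needs
        return;
    }
    comp->apply_temperature(10000.0);
}

// Alternative to nullifying: retarget the referer at the sentinel instead.
void on_component_destroyed(TrapComponent*& referer) {
    referer = &safe_component;
}
```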
The reason the crashes happen at all is that when the object representing the trap component is destroyed, other functions are still using it.
If the garbage collection won't come to the memory model, the memory model must go to the garbage collection! (Go to == check, naturally, but I've never let a half-decent paraphrase get in the way of accuracy.) I'd seriously write it all from the ground up in assembler if the compiler doesn't insert basic sanity checks into realloc and free operations. It sounded like the problem was more that random pointers are spread all around the code: not in current use, under the one-or-maybe-two-threaded standard DF operation, but liable to be checked any arbitrarily small number of op-cycles later, as the faux-multitasking switches onto the temperature-check code or whatever. So have everything liable to be temperature-checked know where such requests come from (likely the master 3D grid array, as with other global checks), and at the moment of wear-check loss (perhaps from an all_weapons array, fed to the wearing-down function) make sure all other references are neutered in a future-check-friendly way, as in the sketch below.
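A minimal sketch of that neutering-via-backpointers idea, assuming invented type and member names (DF's real layout will differ):

```cpp
#include <algorithm>
#include <vector>

// Illustrative only -- not DF's actual data structures.
struct TrapComponent {
    int wear = 0;

    // Backpointer array: the address of every pointer slot elsewhere in the
    // code (map tiles, all_weapons entries, ...) that points at this object.
    // Its size doubles as an intrinsic reference count.
    std::vector<TrapComponent**> referers;

    // Slots must live at a stable address for as long as they are registered.
    void register_referer(TrapComponent** slot) { referers.push_back(slot); }

    void unregister_referer(TrapComponent** slot) {
        referers.erase(std::remove(referers.begin(), referers.end(), slot),
                       referers.end());
    }

    ~TrapComponent() {
        // At the moment of destruction (e.g. wear-out), neuter every
        // outstanding reference so a later temperature check sees nullptr
        // rather than freed memory.
        for (TrapComponent** slot : referers) *slot = nullptr;
    }
};

// Usage sketch: a map tile registers its own slot when it takes the pointer.
struct MapTile {
    TrapComponent* trap = nullptr;
    void place(TrapComponent* c) { trap = c; c->register_referer(&trap); }
};
```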
For the parallel rendering process (does that even need pointers to individual trap components?), fire off a signal to alert it to the change (it won't be the first thing it needs to be signalled about!).
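One cheap way to "fire off a signal" without knowing anything about the real renderer interface is a shared change counter the render thread polls. Purely illustrative, assumed names:

```cpp
#include <atomic>

// Lock-free change counter bumped by the simulation thread.
std::atomic<unsigned> trap_change_epoch{0};

// Simulation side: call whenever a trap component is destroyed or altered.
void notify_trap_changed() {
    trap_change_epoch.fetch_add(1, std::memory_order_release);
}

// Render side: compare against the last epoch it drew, and rebuild its own
// snapshot (rather than chasing pointers into simulation data) if it differs.
bool renderer_needs_refresh(unsigned& last_seen) {
    unsigned now = trap_change_epoch.load(std::memory_order_acquire);
    if (now == last_seen) return false;
    last_seen = now;
    return true;
}
```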
The solution would be to hold off destroying the object until all references to it are gone, which is what shared_ptr does.
shared_ptr is a wrapper around a normal pointer and is designed to behave (externally) like one. Each managed object gets a small control block holding its reference count, and every shared_ptr copy pointing at that object shares the same control block: copying a shared_ptr increases the count by one, destroying one decreases it by one, and when the count reaches zero the object is deallocated. (Note that it isn't a global map from raw pointers to counts: constructing two independent shared_ptrs from the same raw pointer gives each its own count, and you get a double free. Create one shared_ptr per object and copy it from there.)
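A toy example of that behaviour (the type name is made up, but use_count and reset are real shared_ptr members):

```cpp
#include <iostream>
#include <memory>

struct TrapComponent {
    ~TrapComponent() { std::cout << "component actually freed\n"; }
};

int main() {
    // make_shared allocates the object and its control block (which holds
    // the reference count) together.
    std::shared_ptr<TrapComponent> in_map = std::make_shared<TrapComponent>();

    {
        std::shared_ptr<TrapComponent> in_temperature_list = in_map;  // count: 2
        std::cout << in_map.use_count() << '\n';                      // prints 2
    }   // copy destroyed, count back to 1; object still alive

    in_map.reset();  // last reference gone -> destructor runs here
}
```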
I still say that a backpointer array (itself an intrinsic reference count, but with added functionality to justify the memory cost) would work best. It's slightly heavier on memory, but so long as you know it's there, and use it meticulously, it solves a great many problems in exactly the way you define.
But this is just how I'd do it. I'm a low-level type of coder, and at the same time I have no idea of the entire and fully pre-knotted complexity already invested in DF. It could be that reimplementing things my way would be effectively a whole Development Arc in itself, more comparable to the mythical full multithreading implementation than even the 64bit advances... Still, throwing it out there. You can probably freely criticise some stupid coding error/misstep/assumption of my own when I eventually identifiably unleash my own personal project(s) upon the world. I have several of those in the offing, unencumbered by the team-collaboration and peer review reports that my more professional outputs have gone through...