I can't tell from the code you've given, especially with made-up values standing in for ones that are "several thousands, anyway" adrift, but it sounds like a classic underflow/overflow error.
If not, could it instead be some obscure type conversion that's (say) taking a value in 0..65535 and mapping it 1:1 onto -32768..32767? Either 0 becomes -32768 all the way up to 65535 becoming 32767 (or vice versa), or the bits of a signed (two's complement) value are being reinterpreted as unsigned (or, again, vice versa[1]).
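To see what that reinterpretation looks like in practice, here's a minimal sketch (in Python, purely for illustration, since I don't know what language you're using) of the low 16 bits of an unsigned value being read back as a signed two's-complement value:

```python
import struct

def as_signed16(u):
    """Reinterpret the low 16 bits of an unsigned value as a
    signed 16-bit (two's complement) integer."""
    return struct.unpack("<h", struct.pack("<H", u & 0xFFFF))[0]

print(as_signed16(40000))  # -> -25536 (top half of the range flips sign)
print(as_signed16(65535))  # -> -1
print(as_signed16(1234))   # -> 1234  (bottom half is unaffected)
```

Note that only values with the top bit set (32768 and up) change; anything below 32768 passes through untouched, which is exactly the kind of "only sometimes wrong" behaviour you're describing.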
Either way, perhaps there's some way you can work out what value should be seen, and check whether it's off by a power of two (maybe plus or minus one) from the value you actually get. Otherwise, put some debug statements in that check (and display/log, somewhere handy) the declared types of the possibly erroneous variables at various stages.
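That "power of two, give or take one" check is easy to automate once you have an expected value to compare against. A quick sketch (again in Python, just as an illustration; the helper name is my own invention):

```python
def power_of_two_offset(expected, observed):
    """If |observed - expected| is a power of two, or one away from
    one, return a note saying so -- a common signature of an
    overflow or sign-reinterpretation bug at that bit width."""
    delta = abs(observed - expected)
    if delta == 0:
        return None
    for adjust, label in ((0, ""), (1, " - 1"), (-1, " + 1")):
        d = delta + adjust
        if d > 0 and d & (d - 1) == 0:  # d is a power of two
            return f"|delta| = {delta} = 2^{d.bit_length() - 1}{label}"
    return None

# A value that wrapped past a 16-bit boundary:
print(power_of_two_offset(1234, 1234 - 65536))  # -> |delta| = 65536 = 2^16
```

If the report says 2^15 or 2^16 (possibly minus one), that's a strong hint about which width and signedness is being mangled.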
There are some other possibilities, and it's strange that it doesn't always error, so the problem might be in some conditional code[2], but I'd rule the above out first.
[1] That'd only affect the top half of the value range, though, and I can't see raw coordinate values going that high, unless your ambitions (or implementation) give you such huge array index values...
[2] I ran into something like this in some "wrap-around" code I wrote once. When on one edge (e.g. having a coordinate of zero in one or more dimensions), it was supposed to translate a "-1" offset to the "max" value, "-2" to "max-1", and so on. I doubt you're doing exactly that, but you may still have something of a similar ilk, where the yet-to-be-identified error lies mostly dormant, except when it isn't.
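For what it's worth, the correct version of that edge translation is just a modulo on the sum, so the offset wraps round the grid size in both directions. A sketch (Python for illustration; names are mine, not from your code):

```python
def wrap_offset(coord, offset, size):
    """Toroidal wrap: at coord 0, an offset of -1 lands on size-1,
    -2 on size-2, etc.; stepping off the far edge wraps back to 0.
    Relies on Python's % always returning a non-negative result
    for a positive divisor."""
    return (coord + offset) % size

print(wrap_offset(0, -1, 100))  # -> 99
print(wrap_offset(99, 1, 100))  # -> 0
```

The classic bug is doing the wrap only in a special-cased branch ("if coord == 0: ..."), which works on the edge it was written for and silently fails on the paths nobody tested, i.e. it lies dormant except when it doesn't.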