And? Yes, it's huge. Modern computers have at least a gig of RAM. A 6x6 embark is going to be around (48*48)*(6*6)*(100) tiles. That's 8,294,400 tiles. Whoopee, big! Each one has... okay, let's say two bytes for material type, two bytes for temperature, one byte for natural shape (unmined, partially mined, up-ramp, fortification, etc. etc.)... aw hell, let's just say 20 bytes. The dfhack people would know. Constructions, flows, etc. are not stored in this table (well, maybe flows are). Which gives us... oh, about 166 MB of RAM.
So?
If you implement RLE or something like it, you are going to totally, utterly cripple the FPS whenever that compressed area needs to be referenced. You'll also need a second table to keep track of what's compressed and what isn't. And really, taking up more RAM doesn't make the game slower (the working set is way beyond fitting in the hardware cache anyway). Having a couple hundred megs lying around in a big array isn't too bad, all things considered.
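To make the cost concrete, here's a toy run-length encoding of a row of material IDs (names made up for illustration). Every random access now pays a binary search over the run table instead of one array index -- that's the FPS hit:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// A run: every tile from `start` up to the next run's start shares one material.
struct Run {
    std::uint32_t start;     // index of the first tile in the run
    std::uint16_t material;  // material ID repeated through the run
};

// Collapse a flat row of material IDs into runs.
std::vector<Run> compress(const std::vector<std::uint16_t>& tiles) {
    std::vector<Run> runs;
    for (std::uint32_t i = 0; i < tiles.size(); ++i)
        if (runs.empty() || runs.back().material != tiles[i])
            runs.push_back({i, tiles[i]});
    return runs;
}

// O(log n) per lookup, versus O(1) for a plain array.
std::uint16_t materialAt(const std::vector<Run>& runs, std::uint32_t idx) {
    auto it = std::upper_bound(runs.begin(), runs.end(), idx,
        [](std::uint32_t v, const Run& r) { return v < r.start; });
    return std::prev(it)->material;  // last run starting at or before idx
}
```

And that's the cheap version -- real map data with mixed materials compresses into lots of short runs, so you pay the lookup overhead without even saving much.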
They're not objects in a table, anyway. They're an array of structs (semantics...).
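Something along these lines -- a hypothetical layout matching the byte guesses above; the real one lives in Toady's code and the dfhack headers:

```cpp
#include <cstdint>

// Hypothetical tile struct -- field sizes are the guesses from the post.
struct Tile {
    std::uint16_t material;     // two bytes for material type
    std::uint16_t temperature;  // two bytes for temperature
    std::uint8_t  shape;        // unmined, partially mined, up-ramp, fortification...
    std::uint8_t  occupancy;    // flags: creature/ghost/item on this tile
    std::uint8_t  padding[14];  // hand-wave up to the ~20-byte ballpark
};

static_assert(sizeof(Tile) == 20, "matches the 20-byte ballpark");

// The map is then just a flat array indexed by (x, y, z):
//   Tile map[WIDTH * HEIGHT * DEPTH];
```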
Also, what happens when new stuff needs to be stored for unmined stone, en masse? Like, what if it suddenly started tracking material stress for cave-ins? Or, let's say... occupancy flags. Ghosts can pass through walls, right? I bet they set the occupancy flags no matter where they are, so that part of the struct needs random access all the time.
I guess you could just separate out the stone type: pull it out of that data structure and leave the rest of the struct intact. You'd be pulling out two bytes per tile--saving about 16 megs on a 6x6 embark--and storing it in, well, something else. Worth it? Nah... especially since you'd need to decompress a chunk of that data just about every single time the screen refreshes.
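That split would look something like this (a struct-of-arrays sketch, names invented for illustration): the material layer becomes a compression candidate while everything else stays flat and random-access.

```cpp
#include <cstdint>
#include <vector>

// Everything except material stays in a flat, random-access array --
// so ghosts flipping occupancy flags are still O(1).
struct TileRest {
    std::uint16_t temperature;
    std::uint8_t  shape;
    std::uint8_t  occupancy;
};

// Hypothetical split map: material pulled out into its own layer.
struct TileMap {
    std::vector<TileRest>      rest;      // flat, always uncompressed
    std::vector<std::uint16_t> material;  // candidate for RLE/compression
};

// The maximum possible win: 2 bytes * 8,294,400 tiles, i.e. ~16 MB --
// and the renderer still touches the material layer every refresh.
constexpr long long kSavedBytes = 2LL * 8294400;
```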
So... I'm thinking the gain would be unnoticeable, the losses in FPS and extensibility would be large, and overall it would be more trouble than it's worth.