Nope! GPUs are very parallel beasts, and for something like Dwarf Fortress the number of tiles to be displayed doesn't actually matter much - it's basically the same amount of work whether the playfield is 80x25 tiles or 100x50 tiles, because the GPU doesn't know about tiles at all - it just calculates pixels; it's the data it gets sent that effectively makes the output tile-based.
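To make that concrete, here's a rough CPU-side caricature of per-pixel tile rendering (made-up names, nothing like DF's actual renderer): the cost of the loops depends only on the pixel resolution, not on how many grid cells those pixels happen to fall into.

    #include <cstdint>
    #include <vector>

    struct Atlas {
        int width = 0, height = 0;     // atlas image size in pixels
        int tile_w = 0, tile_h = 0;    // size of one tile
        int tiles_per_row = 0;         // how many tiles fit across the image
        std::vector<uint32_t> pixels;  // RGBA texels
    };

    // Paint the whole screen by sampling the atlas once per output pixel.
    // Doubling the playfield from 80x25 to 160x50 cells changes nothing here
    // unless the pixel resolution itself changes.
    void draw_tiles(const std::vector<int>& grid, int grid_w,
                    const Atlas& atlas,
                    std::vector<uint32_t>& screen, int screen_w, int screen_h)
    {
        for (int y = 0; y < screen_h; ++y) {
            for (int x = 0; x < screen_w; ++x) {
                int cell  = grid[(y / atlas.tile_h) * grid_w + (x / atlas.tile_w)];
                int src_x = (cell % atlas.tiles_per_row) * atlas.tile_w + x % atlas.tile_w;
                int src_y = (cell / atlas.tiles_per_row) * atlas.tile_h + y % atlas.tile_h;
                screen[y * screen_w + x] = atlas.pixels[src_y * atlas.width + src_x];
            }
        }
    }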
The number of different tiles does matter, however, because of the limits the GPU has on the size of a single texture (more tiles means more space needed for all of them, and that puts you that much closer to the texture size limit).
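That limit is easy enough to check at runtime, for what it's worth; with a GL context current, the standard query looks something like this (just a sketch, not anything from DF's code):

    #include <GL/gl.h>

    // Requires a current OpenGL context to return anything meaningful;
    // typical results are 8192 or 16384 on modern cards, 2048-4096 on old ones.
    GLint max_single_texture_size()
    {
        GLint max_size = 0;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size);
        return max_size;  // a tileset with 16px tiles fits max_size/16 tiles per side
    }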
No, I mean that a larger viewing area makes it more likely that a wider variety of tiles is on screen, if you're talking about the CPU having to send textures every frame for some reason. TWBT rendering multiple floors functionally multiplies the likelihood of different tiles being visible on a single screen. (For that matter, TWBT also loads a second tileset for text, and usually has creature graphics enabled, all of which expands the number of tiles the GPU is being sent - and none of it causes enough of a slowdown that players drop TWBT for performance reasons alone.)
If you're saying it all needs to be on one tileset image for some reason, rather than multiple images (I would expect them to be diced CPU-side, but whatever), then all that requires is code to allow more than 16 rows in the image, so that users can make an extensible tileset.
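Something like this, as a sketch (names are hypothetical, assuming the tileset stays one image): derive the column count from the image dimensions instead of hard-coding a 16x16 grid, and the row count stops being a hard limit at all.

    struct TileUV { float u0, v0, u1, v1; };

    // Texture coordinates for a tile index in an atlas of arbitrary size,
    // instead of assuming a fixed 16x16 grid of tiles.
    TileUV tile_uv(int tile_index, int image_w, int image_h, int tile_w, int tile_h)
    {
        int cols = image_w / tile_w;   // however many columns the image actually has
        int col  = tile_index % cols;
        int row  = tile_index / cols;  // rows bounded only by image_h / tile_h
        return {
            float(col * tile_w)       / image_w, float(row * tile_h)       / image_h,
            float((col + 1) * tile_w) / image_w, float((row + 1) * tile_h) / image_h
        };
    }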
And anyway, the onboard memory of even an outdated GPU should be far more than enough to keep them all resident on the GPU. A 10-year-old GPU has 512 MB of VRAM, and as Vattic said, you're talking about at most 100 MB or so at the furthest possible extreme. More modern cards have gigabytes of onboard memory.
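(Back-of-the-envelope: a classic 16x16-tile tileset at 32x32 pixels per tile is a 512x512 RGBA image, i.e. 512 * 512 * 4 bytes = 1 MB, give or take. Even a huge 4096x4096 RGBA atlas is only 4096 * 4096 * 4 bytes = 64 MB or so.)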
And again, even beyond all of this, we're still talking about a linear increase in what is already a trivial portion of CPU time. I still can't see how any of this adds up to anything more than negligible overhead.