Wormslayer asked me this in a PM, but I'll reply here:
When displaying tiles, they are currently resized to the on-screen tile size automatically, using the high-quality Lanczos algorithm; this makes them look far better than OpenGL's linear interpolation would, and saves texture memory. Due to technical constraints, all tiles in one texture array have to be the same size in memory.
This does not easily allow multi-tile creatures, however. So I've come up with an alternative, extending the tileset descriptor syntax a little (and yes, you'll have to write it this way now):
An example tileset title
# Lines starting with # are ignored
array base 1.0
base.png 16 16
# As are empty lines
array large 2.0
large.png 2 4
more_large.png 3 7
array variant 8.0 mipmap
barbarbaz.png 8 8
The basic idea is that you get to specify multiple texture arrays, and explicitly specify a tile size for each one. "array foo 1.0" creates a texture array in which each texture (tile) is the same size as a standard on-screen tile; 2.0 makes them twice as large in each dimension; and if the size you'll render them at varies, you can additionally specify "mipmap" to, well, mip-map the tiles. Doing so doubles their size, mind you.
Inside your shaders, the array name names the sampler; for "array base 1.0", you'll also need a "uniform sampler2DArray base;" in your fragment (and possibly vertex, but probably not) shader.
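For example, a minimal fragment shader that samples the "base" array could look like the sketch below; uv and layer are just illustrative names for whatever your vertex shader passes along:

#version 130
// sampler2DArray is core in GLSL 1.30 (or via EXT_texture_array before that)
uniform sampler2DArray base; // matches "array base 1.0" above

in vec2 uv;          // tile-local texture coordinates (illustrative)
flat in float layer; // tile number within this array (illustrative)
out vec4 color;

void main() {
    // The third coordinate selects the layer, i.e. the tile number.
    color = texture(base, vec3(uv, layer));
}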
You can still specify as many PNGs per texture array as you want; their tiles are just concatenated, same as before. (So if large.png holds eight tiles, the first tile of more_large.png would be number 8.)
One caveat: OpenGL only guarantees you access to two simultaneously bound textures, arrays included, and one of those is already taken by the standard tile array. I will of course test for this, and abort if necessary. That said, my teeny little 8600M allows 32 of them.
To be clear: the first tiles in base.png and large.png are both numbered 0, but they live in different texture arrays.
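On the shader side that plays out like this sketch, with both arrays bound at once; the which selector is made up purely for illustration:

#version 130
// Each array sits on its own texture unit, both bound simultaneously.
uniform sampler2DArray base;  // "array base 1.0"
uniform sampler2DArray large; // "array large 2.0"

in vec2 uv;
flat in float layer; // tile number *within its own array*
flat in int which;   // 0 = base, 1 = large (made up for this example)
out vec4 color;

void main() {
    // Layer 0 of "base" and layer 0 of "large" are different tiles;
    // the numbering restarts in every array.
    color = (which == 0) ? texture(base,  vec3(uv, layer))
                         : texture(large, vec3(uv, layer));
}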