They're arguing over pinch and zoom, JFC.
On a non-vectorized (raster) image, the "lossless" way of zooming is to scale each pixel up into an NxN square with the same color value as the original pixel, i.e. nearest-neighbor scaling. This essentially gives you larger pixels, and it doesn't alter the image. If the original has poor resolution, the zoomed-in version still has poor resolution, just with bigger pixels.
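Something like this rough numpy sketch (toy values only):

```python
import numpy as np

def nearest_neighbor_zoom(img: np.ndarray, n: int) -> np.ndarray:
    """Blow each pixel up into an n x n block; no new color values are invented."""
    return np.repeat(np.repeat(img, n, axis=0), n, axis=1)

# Toy 2x2 grayscale "image"
img = np.array([[ 10, 200],
                [ 60, 120]], dtype=np.uint8)

zoomed = nearest_neighbor_zoom(img, 3)  # 6x6 result, still only four distinct values
print(zoomed)
```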
The "lossy" way, however, is to use some form of interpolation or machine learning to fill in those NxN squares with some suitable color values. This does alter the image. Sure, zooming in on a large-resolution image won't be too much of an issue as long as the details you're zooming in on aren't too small. But zooming in on a poor resolution image, e.g. something happening far away from the camera, could lead to the interpolation or machine learning algorithm introducing details that aren't there or are misleading.
Example: take any image and downscale it until it's just a few pixels across. Think about how much data was lost in that step, and now imagine that the zoom algorithm is supposed to reconstruct the original from the downscaled version. Neither the lossless method nor the lossy method can be expected to produce a faithful replica of the original.
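You can run that experiment yourself in a few lines (same caveats as above, the file name is a placeholder):

```python
from PIL import Image

original = Image.open("photo.jpg")       # any image you have lying around
tiny = original.resize((8, 8))           # throw away almost all of the data

# Ask both methods to bring it back to the original size
back_nearest = tiny.resize(original.size, resample=Image.NEAREST)  # big blocky pixels
back_bicubic = tiny.resize(original.size, resample=Image.BICUBIC)  # smooth guesses

back_nearest.save("recovered_nearest.png")
back_bicubic.save("recovered_bicubic.png")
# Neither output resembles the original: the detail is gone, not hidden.
```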
In technical terms, you are limited by the sampling frequency: detail finer than the Nyquist limit was never captured in the first place, so upscaling can't recreate those higher frequencies, only guess at them.
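A one-dimensional illustration of that limit (the numbers are made up): once a frequency is above half the sampling rate, its samples are indistinguishable from a lower frequency's, so no upscaler can know which one was actually there.

```python
import numpy as np

fs = 16                             # 16 samples per second -> Nyquist limit of 8 Hz
t = np.arange(0, 1, 1 / fs)

high = np.sin(2 * np.pi * 13 * t)   # a 13 Hz sine, above the Nyquist limit
alias = -np.sin(2 * np.pi * 3 * t)  # its 3 Hz alias

print(np.allclose(high, alias))     # True: the sampled values are identical
```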
Therefore, great care should be taken when presenting upscaled images to the jury. If the original's resolution is too low, the upscaled image can be unduly prejudicial.
EDIT: Another way to put it is to compare it to using a magnifying glass on an analog image. Zoom in far enough and eventually you'll start to see the printed halftone dots or the grain of the photographic emulsion; magnifying further doesn't reveal detail the medium never recorded.