Misusing command-line text processors to databend images.
This is the source image. I currently know of
uwuify and rot13 (from hxtools), but I'm sure you could do something in grep or sed or whatever's cool with the *nix kids nowadays.
And with strategic use of pipes, you can automate this process all the way down to a single command. Look at this one that uwuifies the raw data of an image:
ffmpeg -i motherboard.jpg -s 1620x1080 -f rawvideo -pix_fmt gbrp - | uwuify | ffmpeg -f rawvideo -pix_fmt gbrp -s 1620x1080 -i - -f image2 motherboard-1080p-planar-uwu.png
Use FFmpeg to convert motherboard.jpg to a raw planar GBR file, rescaling to 1620x1080, and pipe the output.
uwuify receives that output, performs uwuification on it, and sends its result through another pipe.
FFmpeg receives the piped input and is told to interpret it as an image with the same parameters as at the start. It then outputs a PNG-encoded file of its interpretation.
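A side note of mine (not part of the original description): rawvideo has no header, so on the way back in FFmpeg simply reads width × height × 3 bytes per gbrp frame. Since uwuify's substitutions change the byte count, everything after its first edit presumably shifts around inside that fixed-size frame, which would explain the smearing:

```shell
# One gbrp frame at 1620x1080: three 8-bit planes, no header, no padding.
WIDTH=1620
HEIGHT=1080
echo "$((WIDTH * HEIGHT * 3)) bytes per frame"    # 5248800 bytes per frame
```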
That mess generates:
For some reason, that is a distinct effect from downscaling and saving as planar RAW in Irfanview, then running
cat motherboard-1080p-planar.raw | uwuify > motherboard-1080p-planar-uwu.raw and opening the result back in Irfanview. My brain tells me the two should be identical (bar some hue-shifting), but no, they're different enough that I need to point it out. I don't know why. That process yields:
Don't ask me why. I'm baffled at how these similar-looking processes can yield wildly different results.
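Speaking of sed: it genuinely works as the corruptor stage too. What follows is a hypothetical stage of my own, not something from the commands above; it relies on GNU sed's \xHH escapes, and keep in mind sed is line-oriented, so it chews the raw data in chunks split at 0x0A bytes:

```shell
# Hypothetical sed corruptor: double every 0xAA byte, inflating the
# stream and shearing everything after each match (GNU sed only).
corrupt() { sed 's/\xAA/\xAA\xAA/g'; }

# Tiny demo so the inflation is visible ('\252' is octal for 0xAA):
printf 'a\252b' | corrupt | od -An -tx1    # 61 aa aa 62
```

Slot corrupt into the pipeline where uwuify sits and you get yet another flavour of smear.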
The ROT13 versions really only produce great results with Base64 encoding first (ROT13 only remaps the ASCII letters, so run directly on raw bytes it leaves most of the image untouched), so the command needs to be extended to:
ffmpeg -i motherboard.jpg -s 1620x1080 -f rawvideo -pix_fmt gbrp - | base64 | rot13 | base64 -i -d | ffmpeg -f rawvideo -pix_fmt gbrp -s 1620x1080 -i - -f image2 motherboard-1080p-planar-base64-rot13-base64.png
Use ffmpeg to convert motherboard.jpg to a raw planar GBR file at 1620x1080, and pipe the output.
base64 receives FFmpeg's output and encodes it to Base64, sending the result through another pipe.
rot13 takes base64's output and applies ROT13 to it, then sends its result through yet another pipe.
base64 takes that and, with the -d switch, decodes it back to binary; the -i flag tells it to ignore any garbage characters. It then sends its output through the final pipe.
ffmpeg receives the piped input and is told to interpret it as an image with the same parameters as at the start. It then outputs a PNG-encoded file of its interpretation.
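My guess at why the Base64 sandwich matters, sketched with tr standing in for hxtools' rot13 (both perform the same fixed letter rotation): ROT13 maps letters onto letters and leaves everything else alone, so ROT13'd Base64 is still syntactically valid Base64 and decodes cleanly, yet nearly every byte of the image ends up scrambled instead of only the bytes that happen to land on ASCII letter codes.

```shell
# tr as a stand-in for hxtools' rot13: rotate A-Z and a-z by 13 places.
rot13() { tr 'A-Za-z' 'N-ZA-Mn-za-m'; }

printf 'hello' | base64             # aGVsbG8=
printf 'hello' | base64 | rot13     # nTIfoT8=  (still valid Base64)
printf 'hello' | base64 | rot13 | base64 -d | od -An -tx1    # five scrambled bytes
```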
Your base64 implementation may differ depending on your particular system, and you may need to change the switches a bit; consult your documentation for that. This time, the Irfanview and automated methods more-or-less agree, so I'll put them next to each other.
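One concrete example of that switch drift (my note, not part of the original write-up): the pipelines above assume GNU coreutils, where decode is -d and ignore-garbage is -i, while the BSD/macOS base64 has historically spelled decode as -D.

```shell
# GNU coreutils base64, as assumed by the pipelines above:
printf 'hi' | base64         # aGk=
printf 'aGk=' | base64 -d    # hi
# On BSD/macOS the decode switch was historically -D instead:
#   printf 'aGk=' | base64 -D
```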