...so if you dither audio that's been (dynamic-range) compressed with the mu-law algorithm, then expand it, it ends up with more noise, not less. Dither was invented to lower the perceived noise floor caused by reducing bit depth, so the fact that it's actually making the noise worse is mind-blowing to me. I guess most dither algos assume a linear response curve, so feeding them a non-linear one causes Problems™.
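To make the non-linearity concrete, here's a minimal sketch of mu-law companding (assuming the standard mu = 255 curve; this is my own illustration, not the exact code I used). The quantization steps are uniform in the companded domain, which means they're anything but uniform back in the linear domain where the dither lives:

```python
import numpy as np

MU = 255.0  # standard mu-law constant for 8-bit telephony

def mulaw_compress(x):
    """Map linear samples in [-1, 1] into the companded domain [-1, 1]."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mulaw_expand(y):
    """Inverse of mulaw_compress."""
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

# One 8-bit step in the companded domain maps to wildly different
# linear step sizes near silence vs. near full scale:
step = 1.0 / 128.0
print(mulaw_expand(step) - mulaw_expand(0.0))        # ~0.00017 (tiny step near silence)
print(mulaw_expand(1.0) - mulaw_expand(1.0 - step))  # ~0.043   (huge step near full scale)
```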
I'll demonstrate with a sine wave, just because it's the easiest way to show it visually. Here's a 440 Hz sine wave, along with three versions subjected to different dithering methods: "no dither", "triangular dither", and "Shibata noise-shaping dither". Each was then compressed with mu-law, reduced to 8-bit, and expanded.
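For reference, here's roughly the chain I mean, as a sketch under my own assumptions (TPDF dither added in the linear domain before encoding, a simplified mid-tread 8-bit quantizer rather than exact G.711, and no attempt to reimplement Shibata noise shaping):

```python
import numpy as np

FS = 44100
MU = 255.0
HALF_LEVELS = 127  # simplified 8-bit mid-tread quantizer (255 levels total)

def mulaw_compress(x):
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mulaw_expand(y):
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

def tpdf_dither(n, lsb, rng=np.random.default_rng(0)):
    # Triangular-PDF dither: sum of two uniform noises, +/- 1 LSB peak.
    return (rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)) * lsb

t = np.arange(FS) / FS
sine = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # the 440 Hz test tone
lsb = 2.0 / 256.0                            # one 8-bit step, sized for a *linear* quantizer

def process(x, dither):
    if dither:
        x = x + tpdf_dither(len(x), lsb)     # dither added in the linear domain
    y = mulaw_compress(x)                    # compand
    y = np.round(y * HALF_LEVELS) / HALF_LEVELS  # quantize to 8 bits (uniform in companded domain)
    return mulaw_expand(y)                   # expand back to linear

no_dither = process(sine, dither=False)
tri_dither = process(sine, dither=True)
```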
I'll then use a notch filter to remove the 440 Hz tone, leaving me with only noise.
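Something like scipy's second-order IIR notch would do it (the Q and sample rate here are my guesses, not necessarily what I actually used):

```python
from scipy.signal import iirnotch, filtfilt

def remove_tone(x, freq=440.0, fs=44100, q=30.0):
    # Second-order notch centred on the test tone; zero-phase filtering
    # so what's left is (mostly) just the noise.
    b, a = iirnotch(freq, q, fs=fs)
    return filtfilt(b, a, x)
```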
I'd show the dB readings for the noise, but I feel like I've already wasted too much of y'all's bandwidth, so I'll summarize (there's a rough measurement sketch after the list):
Original: below -90 dBFS.
No dither: -33 dB.
Triangular dither: floating between -27 dB and -24 dB.
Shibata dither: -6 dB.
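If you want to reproduce numbers in that ballpark, here's a rough way to measure the residual (my own assumptions, reusing `remove_tone` and the processed signals from the earlier sketches, and not necessarily how the figures above were taken):

```python
import numpy as np

def rms_dbfs(x, full_scale=1.0):
    # RMS level of the residual relative to digital full scale.
    rms = np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms / full_scale + 1e-12)  # epsilon avoids log(0)

# e.g.:
# print(rms_dbfs(remove_tone(no_dither)))
# print(rms_dbfs(remove_tone(tri_dither)))
```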
It all falls to shit the moment you start using dither, for some reason. The overall effect of this noise is that sharp, piano-like tones tend to sound slightly crunchy, as if they're clipping. But they aren't; it's just noise. I went a bit mad trying to work out whether it was my settings, but no, it's just the noise inherent in the system. I should copy Dolby B noise reduction next, considering the patents on that have long since expired. Using a system intended for telephony was a bit of a mistake.