[...] So it's a disgusting soup of distortion. [...]
But is this distortion due to the compression (which does not have to add harmonic distortion or similar artifacts), or is it because that's the "sound" the producer is deliberately looking for? For example, the use of Auto-Tune as an effect (and in general) adds artifacts that really annoy me, but they are not the same thing as overload-related distortion. I would appreciate seeing an analysis of modern recordings -- I don't know the answer here for sure.
Compression is non-linear by definition, and brick-wall limiting can make itself really obvious.
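To make that concrete, here is a minimal sketch (Python with numpy, values purely illustrative) of the extreme case of brick-wall limiting, a hard clip: a pure sine tone pushed past full scale comes back with odd harmonics that were never in the original signal, which no linear process could add.

    import numpy as np

    fs = 48000                                 # sample rate in Hz
    t = np.arange(fs) / fs                     # one second of audio
    tone = 1.2 * np.sin(2 * np.pi * 1000 * t)  # 1 kHz sine, pushed 20% past full scale

    clipped = np.clip(tone, -1.0, 1.0)         # brick-wall "limiter" in its extreme form

    spectrum = np.abs(np.fft.rfft(clipped)) / len(clipped)
    freqs = np.fft.rfftfreq(len(clipped), 1.0 / fs)

    # Odd harmonics (3 kHz, 5 kHz, ...) appear that were not in the input tone.
    for f in (1000, 3000, 5000):
        level = 20 * np.log10(spectrum[np.argmin(np.abs(freqs - f))] + 1e-12)
        print(f"{f:5d} Hz: {level:6.1f} dB")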
The main problem is that they want it loud. So they push the levels to the limit, so far that the signal even overloads the D/A converter.
Of course distortion is a legitimate effect to use in music creation. It's often applied on purpose, ranging from subtle (microphones and preamplifiers with valve stages) to obvious (electric guitars and synthesizers). But the aim of the loudness war is not to offer a better sound, or an intentionally distorted one; the aim is simply to be louder.
A chilling "technical achievement" I heard is one of the latest releases by the Red Hot Chilli Peppers (monarchy of Roses). It seems to be mastered in a way that is intended to be played with the D/A distortion caused by overloading the D/A. Try reducing the volume digitally (ie, for example lowering the volume on iTunes) so that it doesn't overload the D/A and behold the result.
I found it interesting that "underload" was also a problem with early CD digitization, which added to the dynamic range complaints. With vinyl, very small signals would gently fade into the ever-present background noise (or hiss, with tapes). With CDs, as the signal became weaker, the effective number of bits (ENOB) would diminish, adding harmonic distortion to these weak signals. At the lowest levels, you would have only one or two bits, essentially turning a sine wave into a square wave. Once this issue was recognized, they started adding carefully shaped digital pseudonoise to the digitizer, which decorrelated the artifacts from the signal and shifted them up in frequency, where they are far less audible or can be filtered out. We've discussed a similar concept when analyzing the performance of low-resolution SDRs, where the normal atmospheric and thermal noise create the same improvement in ENOB. Modern CDs and other digital formats sound better because of this.
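As a rough illustration of that effect (a sketch in Python/numpy with made-up levels, not a model of any particular converter): a sine wave sitting only a few LSBs above silence quantizes into a stair-stepped, nearly square wave without dither, and that harmonic distortion is traded for benign broadband noise once TPDF dither is added before the quantizer.

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs
    quiet = 1.5e-4 * np.sin(2 * np.pi * 440 * t)   # ~5 LSBs peak at 16 bits (about -76 dBFS)

    lsb = 1.0 / (2 ** 15)                          # one 16-bit LSB with full scale = +/-1.0

    def quantize(x):
        return np.round(x / lsb) * lsb             # ideal 16-bit quantizer

    def level_db(x, f):
        spec = np.abs(np.fft.rfft(x)) / len(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        return 20 * np.log10(spec[np.argmin(np.abs(freqs - f))] + 1e-12)

    undithered = quantize(quiet)

    # TPDF dither: two uniform random values summed, 2 LSB peak to peak.
    tpdf = (np.random.uniform(-0.5, 0.5, fs) + np.random.uniform(-0.5, 0.5, fs)) * lsb
    dithered = quantize(quiet + tpdf)

    # 3rd harmonic of the 440 Hz tone: clearly present without dither
    # (the "square wave" effect), reduced to the noise floor with dither.
    for name, y in (("no dither  ", undithered), ("TPDF dither", dithered)):
        print(name, f"3rd harmonic: {level_db(y, 3 * 440):6.1f} dB")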
There is no free lunch, that's a universal law!
Dithering is a very good technique, but of course you add a bit of noise: noise that you can shape so that it's really hard to hear. So at the end of the day, most of the noise you hear when playing music is not caused by the dithering process but by the analog stages themselves (microphones, preamps, mixers, equalizers, and your own playback equipment).
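For the curious, the shaping part can be sketched in a few lines (Python/numpy again; this is first-order error feedback only, whereas a real mastering chain combines it with dither and a psychoacoustically tuned filter): the total error power stays about the same, it just gets pushed toward high frequencies where the ear is least sensitive.

    import numpy as np

    fs = 44100
    lsb = 1.0 / (2 ** 15)
    x = 1e-3 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # quiet test tone

    shaped = np.empty_like(x)
    err = 0.0
    for n in range(len(x)):
        v = x[n] + err                        # add back the previous quantization error
        shaped[n] = np.round(v / lsb) * lsb   # 16-bit quantizer
        err = v - shaped[n]                   # error fed forward: first-order noise shaping

    # The error spectrum now rises with frequency (roughly 6 dB/octave):
    # quiet where hearing is most sensitive, louder up near fs/2.
    e = shaped - x
    power = np.abs(np.fft.rfft(e)) ** 2
    freqs = np.fft.rfftfreq(len(e), 1.0 / fs)
    for f in (100, 1000, 10000, 20000):
        band = (freqs > 0.9 * f) & (freqs < 1.1 * f)
        print(f"around {f:5d} Hz: {10 * np.log10(power[band].mean() + 1e-24):6.1f} dB (relative)")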
Anyway, all of this technical progress has been thrown down the drain by the loudness war.
And no, nowadays producers don't want recordings to sound good on every device. They just want it loud!
But of course you can't generalize. There are properly mastered classical and jazz recordings out there. And many listeners will complain that the volume is incredibly low.