* If you're dealing with 32-bit samples, a true 32-bit DAC is likely going to yield a better SNR than a 24-bit DAC fed with samples down-quantized from 32-bit to 24-bit;
Do you have any examples where that is the case and it is not just marketing? TI's (Burr-Brown) highest SNR audio ADCs and DACs are 24 bits and have a higher claimed SNR than the "32-bit" converters from AKM.
To make sure my point is fully understood:
1. I am assuming a 32-bit DAC and a 24-bit DAC with the same SNR.
2. I am explicitly talking about dealing with 32-bit samples, which, to be clear, means samples that are natively generated as 32-bit numbers.
3. If you have 32-bit samples and feed them to a 24-bit DAC, you have to down-quantize. Merely truncating creates truncation distortion, which is rather nasty. The usual way of mitigating this is to *dither*, which basically means adding some random noise to the signal before truncating. It's a trade-off: usually less annoying than pure distortion, but added noise nonetheless. The end result will be noisier than the 32-bit samples fed directly to a 32-bit DAC with the same SNR.
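To make the trade-off concrete, here is a minimal sketch (function names are my own) of 32-bit → 24-bit requantization, comparing plain truncation with TPDF dither added at the 24-bit step. Samples are assumed to be signed integers, with the bottom 8 bits dropped to reach 24-bit resolution:

```python
import numpy as np

def truncate_to_24bit(samples_32):
    """Plain truncation: drop the low 8 bits.

    The error is correlated with the signal, which is what produces
    the nasty truncation distortion."""
    return np.asarray(samples_32, dtype=np.int64) >> 8

def dither_to_24bit(samples_32, rng=None):
    """Add triangular-PDF dither (2 LSB peak-to-peak at the 24-bit step),
    then truncate. The error becomes uncorrelated noise instead of
    distortion, at the cost of a slightly higher noise floor."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.asarray(samples_32, dtype=np.int64)
    step = 1 << 8  # one 24-bit LSB, expressed in 32-bit units
    # TPDF dither: sum of two independent uniform variables
    d = rng.integers(0, step, s.shape) + rng.integers(0, step, s.shape) - step
    return (s + d) >> 8
```

With dither, a constant 32-bit input that falls between two 24-bit codes comes out as a mix of neighboring codes whose average approximates the true value, rather than always snapping to the same wrong code.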
4. Of course, if you're dealing with 24-bit samples, using a 32-bit DAC wouldn't make sense. Actually, you'd get worse results too, since you'd have to up-quantize, which would inevitably make the signal noisier, even if only slightly.
5. All in all, it's best to use a DAC with the same resolution as the samples you're feeding it with. Any change of quantization WILL add noise.
6. Some may question the benefits of 32-bit samples to begin with. And if you're considering recorded music played on an audio player, you'd be right: there is no native 32-bit material available. But there still are use cases for this. For instance, digital mixing consoles, or any device that does DSP with a native width larger than 24 bits.
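As a back-of-the-envelope illustration of why a mixing console naturally produces samples wider than 24 bits (the helper name and scope here are my own, purely for illustration): summing 2^k full-scale channels requires k extra bits of headroom on the mix bus.

```python
import math

def required_bus_bits(num_channels, sample_bits=24):
    """Bits needed on a mix bus to sum num_channels full-scale
    signed samples without clipping."""
    return sample_bits + math.ceil(math.log2(num_channels))

# A 256-channel console summing 24-bit inputs already needs a
# 32-bit bus, so its output is natively wider than 24 bits.
```

This is why such devices are a genuine source of native 32-bit material, even though no recorded music is distributed that way.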