From what I understand, the reason for this is that sin(x)/x interpolation will give you aliasing if the original signal contains components above the Nyquist frequency: the higher components fold back into the pass band and are reconstructed as smooth, convincing but false waveforms. Linear interpolation is cruder, but it is less misleading in that it won't produce plausible-looking false peaks at slower sample rates, although it will introduce discontinuities in the gradient.
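Here is a minimal numerical sketch of that failure mode. The parameters are my own illustrative choices (a 10 Hz sample rate with a 9 Hz input, well above the 5 Hz Nyquist frequency), not taken from any particular scope:

```python
import numpy as np

# Hypothetical parameters for illustration: 10 Hz sample rate,
# 9 Hz input -- well above the 5 Hz Nyquist frequency.
fs = 10.0
f_in = 9.0
n = np.arange(32)
t_s = n / fs
x = np.sin(2 * np.pi * f_in * t_s)   # the samples the scope actually captured

# Dense time axis for reconstruction
t = np.linspace(t_s[0], t_s[-1], 2000)

# Whittaker-Shannon (sin(x)/x) reconstruction from the samples
sinc_rec = np.array([np.sum(x * np.sinc(fs * (ti - t_s))) for ti in t])

# Linear interpolation of the same samples, for comparison
lin_rec = np.interp(t, t_s, x)

# The sinc trace is a smooth, convincing sine at the 1 Hz alias
# frequency (with inverted phase here), not the real 9 Hz input;
# the linear trace at least looks jagged enough to arouse suspicion.
```

Plotting `sinc_rec` against `t` shows a clean 1 Hz sine that never existed at the input, which is exactly the false-peak problem described above.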
Sin(x)/x interpolation will also show aliasing caused by nonlinearity in the ADC, even with input signals below the Nyquist frequency: the sampling frequency mixes with the input frequency, and some of the resulting mixing products are likely to land above the Nyquist frequency. This shows up as "wobulation" and may be especially bad in oscilloscopes that use interleaved ADCs.
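The interleaved-ADC case can be sketched with a toy model. All values here are illustrative assumptions (a 100 Hz composite sample rate, a 23 Hz input, and an exaggerated 5% gain mismatch between the two converters), chosen only to make the mixing product easy to see:

```python
import numpy as np

# Toy model of a two-way interleaved ADC: hypothetical 100 Hz composite
# sample rate, 23 Hz input, 5% gain mismatch on the second converter.
fs = 100.0
f_in = 23.0
N = 1000                 # whole number of input cycles, so FFT bins are exact
n = np.arange(N)
x = np.sin(2 * np.pi * f_in * n / fs)
x[1::2] *= 0.95          # ADC "B" takes the odd samples with slightly low gain

# The mismatch modulates the signal at fs/2, mixing the 23 Hz input
# down to a spur at fs/2 - f_in = 27 Hz.
spectrum = np.abs(np.fft.rfft(x)) / (N / 2)
freqs = np.fft.rfftfreq(N, d=1 / fs)
```

The spectrum shows the intended tone at 23 Hz plus a spur at 27 Hz that was never present at the input; with a real signal and real mismatch this is one source of the "wobulation" mentioned above.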