Just some general info about the Rigol DS1000Z series. As far as I know/have experienced, most calculations and decoding are done with the data that's actually visible on the screen, not with the raw captured data, which can be as large as 24 Mpts.
So, what's the difference between the raw waveform data (with sample rates up to 1 GSa/s and a size up to 24 Mpts) and the display data?
The difference is that the display data is always downsampled to 1200 pts (100 pts per horizontal division).
Those 1200 pts are used for serial decoding and FFT, and (I haven't tried it myself yet) probably also for all other measurements.
How is this downsampling done? Only Rigol knows. They claim it's a special (patented/secret?) algorithm.
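
Just to illustrate what a 20000:1 reduction (24 Mpts down to 1200 pts) could look like, here's a naive min/max decimation in Python. To be clear: this is not Rigol's algorithm, which they don't disclose, just a common stand-in for comparison.

```python
# Illustration only: naive min/max decimation of a long capture down to
# 1200 display points. This is NOT Rigol's (undisclosed) algorithm; it's
# just a common approach that keeps glitches visible, unlike plain
# every-Nth-sample decimation.
def decimate_minmax(samples, out_pts=1200):
    bucket = max(1, len(samples) // (out_pts // 2))  # samples per min/max pair
    out = []
    for i in range(0, len(samples), bucket):
        chunk = samples[i:i + bucket]
        out.extend((min(chunk), max(chunk)))  # two points per bucket
    return out[:out_pts]
```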
This is a big limitation, but understandable given the price class of the instrument. Zooming in (horizontally) improves things, but doesn't let you see, for example, a long serial transmission in full.
So, if you want to do analysis with higher resolution, the only solution (with Rigol) is to download all 24 Mpts to a PC and do the analysis offline.
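
For anyone who wants to script that download: here's a minimal sketch using pyvisa and the scope's SCPI waveform commands. The VISA address is a placeholder for your scope's IP, and the 250000-points-per-read chunking is the BYTE-format limit given in the DS1000Z programming guide (adjust if your firmware differs).

```python
# A minimal sketch, assuming pyvisa and a LAN-connected DS1000Z; the VISA
# address below is a placeholder.
import pyvisa

rm = pyvisa.ResourceManager()
scope = rm.open_resource("TCPIP0::192.168.1.100::INSTR")  # your scope's IP here
scope.timeout = 10000  # ms; raw-memory reads can be slow

scope.write(":STOP")            # capture memory is only readable while stopped
scope.write(":WAV:SOUR CHAN1")
scope.write(":WAV:MODE RAW")    # full capture memory, not the 1200-pt screen data
scope.write(":WAV:FORM BYTE")

# Assumes a fixed memory depth is set (e.g. 24000000), not AUTO.
mdepth = int(float(scope.query(":ACQ:MDEP?")))

data = bytearray()
chunk = 250000  # max points per :WAV:DATA? read in BYTE format
for start in range(1, mdepth + 1, chunk):
    scope.write(f":WAV:STAR {start}")
    scope.write(f":WAV:STOP {min(start + chunk - 1, mdepth)}")
    data += scope.query_binary_values(":WAV:DATA?", datatype="B", container=bytearray)

print(f"Read {len(data)} of {mdepth} points")
```

To turn the raw bytes into volts you'd still apply the YINCrement/YORigin/YREFerence values from :WAV:PRE?; I've left that out for brevity.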
And here's the sad thing: there's a "time difference" bug in the Rigol firmware that causes a random time offset of ±100 ns between different channels:
https://www.eevblog.com/forum/testgear/rigol-ds1000z-series-(ds1054z-ds1074z-ds1104z-and-s-models)-bugswish-list/msg862373/#msg862373
It was reported to, and confirmed by, Rigol a long time ago (February 2016), but they haven't fixed it in any of the subsequent firmware updates.