First, acquisition stopped.
1ms/div. 14k and 14M memory (but it uses 28M memory, due to its construction -> note 2GSa/s).
Then zoomed into this 14M (28M) capture so that the separate captured samples are visible.
(Also a .CSV file as proof that it really uses 28Mpoints.)
As you know, stopping and then zooming in or saving memory to a file proves nothing about what the DSO is doing
when it's running. As the Agilent X-Series demonstrates, a DSO can simply fill the entire sample memory when STOP is pressed; what matters (in terms of waveform updates) is the real displayed waveform and how much data it contains.
Between every waveform there is roughly 1ms, and during this 14ms acquisition time it can also do many things.
Let's look at this idea more closely. At ~67.5 wfrm/s, the total time per waveform is ~14.8ms; since acquiring 28Mpoints at 2GSa/s takes 14ms, that leaves ~800us between captured waveforms. That means, with a 28Mpoint sample length, the DSO would have ~28.6ps of processing time for each sample (ignoring the acquisition time itself) - or ~529ps per sample if calculated over the entire waveform period - which is still exceptionally fast. Perhaps the following chart gives a better idea of why I'm curious about what the Siglent is doing:
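For reference, here's how those figures fall out of the arithmetic - a quick sketch using only the numbers quoted above (~67.5 wfrm/s, 28Mpoints, 2GSa/s); nothing here is measured from the scope itself:

```python
# Back-of-the-envelope processing budget per sample for a deep-memory capture.
# Figures from the discussion: ~67.5 waveforms/s, 28 Mpoints at 2 GSa/s.

samples = 28e6          # capture depth (points)
sample_rate = 2e9       # Sa/s
wfm_rate = 67.5         # waveforms per second (from the Trigger Out rate)

period = 1 / wfm_rate               # total time per waveform cycle  (~14.8 ms)
acq_time = samples / sample_rate    # time spent actually acquiring  (14.0 ms)
dead_time = period - acq_time       # time left between captures     (~815 us)

print(f"cycle period : {period * 1e3:.1f} ms")
print(f"acquire time : {acq_time * 1e3:.1f} ms")
print(f"dead time    : {dead_time * 1e6:.0f} us")

# Processing budget per sample, two ways of counting:
print(f"{dead_time / samples * 1e12:.1f} ps/sample (dead time only)")  # ~29 ps
print(f"{period / samples * 1e12:.0f} ps/sample (whole cycle)")        # ~529 ps
```

Either way you slice it, the per-sample budget is well under a nanosecond, which is why the deep-memory update rate is the interesting question.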
Why is it so mysterious if Siglent does this? Perhaps Agilent hasn't even done any work to raise the update rate at slow horizontal speeds, because "up to" rates are what sell....
Not necessarily mysterious - but certainly different enough from other DSOs to make it, IMO, worth wondering about.
One data pair looks like this... so there are around 28M data pairs.
This "proves" that 28Mpoints is true. (Stopping, zooming in, and counting sample points also proves it.)
Again, this alone proves nothing: only that the 28M memory is filled with samples when the DSO is stopped.
With 50ns-wide pulses, it means that what is displayed on the scope screen is an "alias". The sampling period is 1us! The pulse width is 1/20 of the sample period!
I do not fully understand the idea of such a test. Is that the way it looks when the user does not know what he is doing with an oscilloscope?
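Just to put a number on that 1/20 ratio: a quick simulation of sampling a 50ns pulse train with a 1us sample period. The pulse repetition period below is an arbitrary assumption of mine (the post doesn't state one); the point is only that the vast majority of pulses never coincide with a sample instant:

```python
# Sketch: sampling a 50 ns-wide pulse train with a 1 us sample period (1 MSa/s).
# With the sample clock incommensurate with the pulse timing, a sample lands
# inside a pulse only about (pulse_width / pulse_period) of the time, so most
# pulses are never sampled at all -- what ends up on screen is an alias.

pulse_width   = 50e-9    # from the post: 50 ns pulses
pulse_period  = 1.03e-6  # ASSUMED repetition period (not stated in the post)
sample_period = 1e-6     # 1 MSa/s sample rate, as stated above

n_samples = 100_000
hits = 0
for k in range(n_samples):
    t = k * sample_period
    if (t % pulse_period) < pulse_width:   # sample instant landed inside a pulse
        hits += 1

# Roughly pulse_width / pulse_period of the samples see a pulse, i.e. ~5%.
print(f"samples that saw a pulse: {hits}/{n_samples} ({100 * hits / n_samples:.1f}%)")
```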
The whole idea was to see the aliasing - looking for a method of determining how much data the DSO is actually decimating when it's running. As you've posted yourself before, rf-loop, a Trigger Out signal alone
proves nothing - other than that the DSO is putting out a signal @ XX Hz. For example:
Who can prove to me that in all cases, for example, every trig out also means a real displayed waveform, and how much data it contains?
With the Rigol DS2000, I can capture time-stamped segments which indicate the waveform update rate with deep memory depths (no Trigger Out is needed). Is there a method on the Siglent to 'see' the waveform update rate besides the Trigger Out signal?
I can clearly see a fast update rate at smaller memory depths in Herman's posted GIFs; as mentioned, I'm just curious about the deeper memory settings due to the speed of the throughput (as shown in the chart above).
Capturing 28Mpoints at 2GSa/s takes as long as capturing 14kpoints at 1MSa/s: both need 14ms.
A 7kpoint memory at 500kSa/s also takes 14ms.
As already mentioned several times, it's NOT about acquire time: it's about post-processing time; i.e., moving ~1.9GB/s of samples to display memory vs. ~952kB/s. They don't require nearly the same amount of time to post-process to the display.
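To make that throughput gap concrete, here's a rough calculation. I'm assuming one byte per sample; the ~67.5 wfrm/s figure at 28Mpoints comes from the discussion above, while the ~68 wfrm/s at 14kpoints is my own assumption chosen to match the 952kB figure:

```python
# Rough post-processing throughput comparison: deep vs. shallow memory.
# ASSUMPTIONS: 1 byte/sample; ~67.5 wfrm/s at 28 Mpt (from the thread),
# ~68 wfrm/s at 14 kpt (my guess, matching the 952 kB/s figure).

def throughput(points, wfm_per_s, bytes_per_sample=1):
    """Bytes per second that must be pushed into display memory."""
    return points * wfm_per_s * bytes_per_sample

deep    = throughput(28e6, 67.5)   # 1.89e9 B/s  (~1.9 GB/s)
shallow = throughput(14e3, 68)     # 952e3 B/s   (952 kB/s)

print(f"28 Mpt: {deep / 1e9:.2f} GB/s to display memory")
print(f"14 kpt: {shallow / 1e3:.0f} kB/s to display memory")
print(f"ratio : ~{deep / shallow:.0f}x more data per second at deep memory")
```

So the deep-memory setting has on the order of 2000x more data per second to get through, which is exactly why the acquire-time comparison misses the point.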
I wonder why Tinhead doesn't give his opinion here? I remember he had a lot to say before, when discussing the idea of the Rigol capturing 35 wfrm/s with a 56M memory depth.
So what happens when you set the memory depth to 14M @ 50ns/div? What is the Trigger Out frequency then? Still ~135 wfrm/s?
And what are the segmented memory abilities of the DSO? There doesn't appear to be any specific info in the released documents.