It's clear that Sequence mode gives the fastest trigger re-arming, because no data processing and display happens during the acquisition. But is there a way to estimate the waveform update rates which can be obtained without Sequence mode? E.g. in hansibull's present application, if we also select 1 kPts per capture, and of course switch off any math or decoders -- should the scope be able to keep up with the 48 kHz packet rate?
It depends on so many things that I’m not aware of any formula to estimate the maximum trigger rate; there are only rules of thumb.
While irrelevant at slower time bases, sin(x)/x reconstruction or X-interpolation does play a role as soon as the record length approaches the screen width (or gets even shorter). For the highest trigger rates at 50 ns/div and below, dots mode is to be used – which isn’t a bad choice in many cases anyway, since it also avoids any reconstruction errors and always provides a true picture of the waveform – as long as the triggering still works okay, that is. If you’ve studied the bandwidth & aliasing application note I once linked to, you should know this already and have seen plenty of demonstrations of it.
While we can get over 100k waveforms per second at 50 ns/div in dots mode with edge trigger, my test only yielded a ~41k trigger rate when using the pulse trigger under these conditions. So it just isn’t enough for the application presented by hansibull.
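Just to put numbers on it (the 48 kHz packet rate and the ~41k triggers/s are from the posts above, the rest is plain arithmetic):

```python
# Rough budget check: can the scope re-arm for every packet?
packet_rate_hz = 48_000                    # packet rate in hansibull's application
print(f"time between packets: {1e6 / packet_rate_hz:.1f} us")   # ~20.8 us

measured_trigger_rate_hz = 41_000          # pulse trigger, 50 ns/div, dots mode (measured above)
print(f"fraction of packets caught: {measured_trigger_rate_hz / packet_rate_hz:.0%}")  # ~85 %
```

So roughly every seventh packet would be missed without Sequence mode.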
Is there a fixed time overhead for rendering one capture on the display, or an overhead time per acquired point, which could be used to quantitatively estimate the update rate? Beyond serving the display, do any additional overhead times need to be considered (in non-sequenced mode)?
The trigger re-arming is constant and fast for the simple edge trigger, at about 2 µs. For more advanced triggers, and even more so for the Zone trigger, there is additional overhead from the more complex trigger handling.
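If you want a rough mental model anyway, you can think of the per-capture time as a sum of a few contributions. Only the ~2 µs edge-trigger re-arm time above is a real number; the fixed display overhead and the per-sample cost in the sketch below are pure placeholders that would have to be measured per scope, firmware and display mode:

```python
def estimated_update_rate(n_points, time_per_div_s, t_rearm_us=2.0,
                          t_fixed_us=5.0, t_per_point_us=0.001, divisions=10):
    """Crude additive model: capture window + trigger re-arm + a fixed
    per-capture display overhead + a per-sample processing cost.
    Only the ~2 us re-arm time is from the post; the other costs are
    placeholders, not measured Siglent figures."""
    t_window_us = divisions * time_per_div_s * 1e6
    t_total_us = t_window_us + t_rearm_us + t_fixed_us + n_points * t_per_point_us
    return 1e6 / t_total_us

# e.g. 1 kpts at 1 us/div with the placeholder costs above
print(f"{estimated_update_rate(1_000, 1e-6):,.0f} wfm/s")
```

The whole point of the discussion, of course, is that t_fixed_us and t_per_point_us are anything but constant across time bases and display modes.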
On time bases >50 ns/div, very little reconstruction is required, hence the time simply increases with the amount of data, as expected – and in Sequence mode this expectation is indeed roughly met. In normal display mode, we also have to handle intensity grading. This is simple as long as the record length does not exceed the screen width, because we don’t need to maintain counters for every single acquisition. As the amount of data increases, a lot of samples may be mapped to a single dot on the screen, even within a single record. This requires quite some processing power – and time.
We generally tend to forget that all data is mapped to the screen and that the intensity or color grading has to be processed at this point, too. It certainly makes a difference whether we just transfer the data to the screen and forget about it, or have to maintain a bunch of counters instead and then finally calculate the correct intensity for every single dot. Of course, there is HW support for this task, but since there isn’t a dedicated GPU, it still takes a little time.
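For illustration only: this is not Siglent's actual pipeline, just a minimal sketch of what “maintaining a bunch of counters” means once many samples land in the same screen column.

```python
import numpy as np

def grade_to_screen(samples, screen_width=1000, levels=256):
    """Counter-based intensity grading, sketched: every sample is binned
    into its screen column and vertical level, a hit counter per cell is
    incremented, and the counts are normalized into brightness values."""
    n = len(samples)
    cols = (np.arange(n) * screen_width) // n            # column each sample falls into
    rows = np.clip(samples.astype(int), 0, levels - 1)   # vertical position on screen
    hits = np.zeros((screen_width, levels), dtype=np.uint32)
    np.add.at(hits, (cols, rows), 1)                     # the "bunch of counters"
    return hits / max(hits.max(), 1)                     # more hits -> brighter dot

# 1 Mpts onto ~1000 columns: every column has to merge ~1000 samples
intensity = grade_to_screen(np.random.randint(0, 256, 1_000_000))
```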
At time bases 50 ns/div and faster, the rendering in vector mode takes quite some additional time – and the intensity grading has to be calculated for all the additional dots on top of that.
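A sketch of that extra work at the fast end, assuming a ~1000 pixel wide waveform area: when the record is shorter than the screen, every sample pair has to be expanded into a run of interpolated dots before any grading can happen (linear interpolation shown here as a stand-in; sin(x)/x is more expensive still).

```python
import numpy as np

def vector_mode_dots(samples, screen_width=1000):
    """At <=50 ns/div the record is shorter than the screen, so vector
    mode has to synthesize the in-between dots. Linear interpolation is
    used here for illustration; sin(x)/x reconstruction costs more."""
    x_src = np.linspace(0, screen_width - 1, len(samples))  # where real samples sit
    x_dst = np.arange(screen_width)                         # one dot per screen column
    return np.interp(x_dst, x_src, samples)

# e.g. ~100 real samples at 5 ns/div still become ~1000 dots to grade and draw
dots = vector_mode_dots(np.random.rand(100))
```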
What's up with the non-monotonic time dependency in normal mode at the fast time base settings? I would have expected things to get monotonically slower as the time base is slowed down and more data points per capture have to be processed. Is it understood what is driving the observed waveform capture rates, with maxima at 5 ns/div and 50 ns/div?
I hope my explanations above already give you an idea. The sudden need for counters above 50 ns/div to manage intensity grading alone should explain a lot…
A possibly stupid question: What does Sinc interpolation in dot mode mean or do? Your data show that it affects the acquisition rate massively at the fast time bases. I would have expected that Sinc interpolation only comes into play in line mode?!
Sinc reconstruction as well as X-interpolation do nothing with regard to screen rendering in dots mode, but Sinc is still required for determining the exact trigger point. So this is some additional processing time at fast time bases.
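To illustrate what that means (a toy version, not the scope's firmware): the trigger crossing almost never falls exactly on a sample, so the record is reconstructed on a finer grid around the coarse crossing to place the edge with sub-sample accuracy, which is what keeps the display horizontally stable.

```python
import numpy as np

def fine_trigger_time(samples, level, upsample=16):
    """Toy sub-sample trigger interpolation: sinc-reconstruct the record
    on a finer time grid (Whittaker-Shannon) and return the first point
    where it crosses the trigger level, in units of the sample period."""
    n = len(samples)
    t_fine = np.arange(n * upsample) / upsample                # fine time grid
    kernel = np.sinc(t_fine[:, None] - np.arange(n)[None, :])  # shifted sinc kernels
    fine = kernel @ samples                                    # reconstructed waveform
    return t_fine[np.argmax(fine >= level)]                    # first crossing above level

t_trig = fine_trigger_time(np.sin(np.linspace(0, 2 * np.pi, 20)), level=0.5)
```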
I strongly suspect that the numbers acquired by rf-loop might be outdated. Some time ago, there were several bugs in Siglent’s DPO engine, particularly in the SDS2000X Plus. These should be long fixed, and I do not expect to see a difference between Sinc and X in dots mode anymore. Maybe we even get better performance overall today.
I will try to do some measurements with the current firmware, but will publish them in the correct thread – did you notice that this one is for the SDS2000X HD exclusively?
The fact that all modes (dots vs. lines, Sinc vs. linear) achieve the same performance at the slower time bases, from 500 ns/div upwards, is probably because the scope decides not to do any interpolation at all, since the dot density is high enough anyway?
The answer should be obvious now: mostly yes!
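For reference, here is the dot-density argument in numbers, assuming the 2 GSa/s sample rate, 10 horizontal divisions and a ~1000 pixel wide waveform area (the exact pixel count is an assumption; the conclusion doesn't change with a slightly different width):

```python
sample_rate = 2e9        # SDS2000X HD full sample rate
screen_cols = 1000       # assumed width of the waveform area in pixels
for tdiv in (5e-9, 50e-9, 500e-9, 5e-6):
    samples = sample_rate * 10 * tdiv
    print(f"{tdiv * 1e9:7.0f} ns/div: {samples / screen_cols:7.2f} samples per column")
# below ~1 sample per column the scope has to interpolate;
# from 500 ns/div upwards it only has to decimate and grade
```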