Honestly, I don't understand everything you're saying.
I'll be happy to clarify anything that you didn't understand. Please let me know what was unclear in my message and I'll explain further.
But let's say we have an oscilloscope that captures the full 14 Msamples even when you use a 100 ns/div timebase and the sample rate is 1 GSa/s.
After you stop this scope, you can zoom out until the whole captured length fits on the screen, or you can zoom in. In this case you can go from 1 ms/div all the way down to, say, 5 ns/div, in and out as you like, and also scroll to whatever position you want.
Right. 14 Msamples across the scope's 14 divisions means 1 Msample per division when zoomed out as far as you can go and still fill the screen; at 1 GSa/s that's 1 ms per division.
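To make that arithmetic explicit, here's a small Python sketch using just the example figures from this thread (the 14 horizontal divisions are the assumption stated above, not a spec for any particular scope):

```python
# Quick sanity check of the numbers in this example (assumed values).
memory_depth = 14_000_000      # samples captured in one acquisition
sample_rate  = 1e9             # 1 GSa/s
h_divisions  = 14              # horizontal divisions on the display (assumed)

capture_time = memory_depth / sample_rate   # total time in the record
max_zoom_out = capture_time / h_divisions   # timebase that fits the whole record on screen

print(f"capture length  : {capture_time * 1e3:.1f} ms")      # 14.0 ms
print(f"full-record view: {max_zoom_out * 1e3:.1f} ms/div")  # 1.0 ms/div
```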
But while this scope is running, you see only this 100 ns/div window, and the rest of the captured length is outside the display.
Yes, until you zoom out, that is what you see. However, if the waveform you're interested in shows best at 100 ns/div, then the effect is that you immediately see what you're most interested in, and can then zoom out or scroll (or some combination of the two) to see the parts of the waveform that preceded or followed the trigger point.
Then stop this scope while this 100 ns/div screen is visible (1400 ns of screen width in time). Now you can zoom out, up to 1 ms/div, so that the whole 14 ms is visible; just zoom in and out as you like. But while the scope is running you can never see the whole captured length, as long as the capture is longer than the screen width. It is all there at run time, but you cannot see it; you are blind to it.
That's true until you stop the scope, of course, at which point you can correct for that by zooming/scrolling until you see what you're secondarily interested in (one presumes you're triggering on what you're
primarily interested in and have set the scope's timebase to display that in the most suitable way).
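To put a number on how little of the record is on screen at run time in this example (again assuming the 14-division screen from above), a short sketch:

```python
# Fraction of the captured record visible on screen while running, for this example.
timebase     = 100e-9                  # 100 ns/div while running
h_divisions  = 14                      # assumed horizontal divisions
screen_time  = timebase * h_divisions  # 1.4 us visible on screen
capture_time = 14_000_000 / 1e9        # 14 ms captured per acquisition

visible_fraction = screen_time / capture_time
print(f"on screen: {screen_time * 1e6:.1f} us of {capture_time * 1e3:.0f} ms "
      f"({visible_fraction:.3%} of the record)")   # ~0.010%
```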
Then you think there may be something interesting, so you stop the scope, and now you can zoom out or scroll to whatever details you want.
Exactly.
Is it good now? Does it meet what you want? Now you stop and zoom out or in, and at run time there is a lot of blind time.
Well, the blind time and the small history buffer (in terms of waveforms captured) are obviously tradeoffs that you have to be aware of when you're using the largest memory depth available for a single capture.
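Just to illustrate the kind of tradeoff I mean, here's a rough back-of-the-envelope sketch; the waveform update rate is purely an assumed figure for illustration, not a measurement or a spec for any real scope:

```python
# Rough illustration of acquisition time vs. blind (dead) time at full memory depth.
# The waveforms-per-second figure below is an assumption for illustration only.
acquisition_time = 14_000_000 / 1e9     # 14 ms to fill the record at 1 GSa/s
waveforms_per_s  = 50                   # assumed trigger/update rate at this depth

cycle_time = 1.0 / waveforms_per_s           # time per trigger-to-trigger cycle
blind_time = cycle_time - acquisition_time   # time spent not acquiring
print(f"acquiring {acquisition_time / cycle_time:.0%} of the time, "
      f"blind for {blind_time / cycle_time:.0%}")
```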
But let us improve your scope.
Run the scope again with the same settings. Now take this piece of cardboard away. Well, now you have the scope you previously wanted, but with an extra feature: you can also see the whole captured memory length. We have just added a feature to your previous model.
And now it looks like this:
So the Siglent is capable of showing a "zoomed in" portion of the waveform at runtime, so that you can always simultaneously see the entire capture and the portion of interest even while the scope continues to capture waveforms? If that's the case, then the Siglent's implementation is better than I thought, provided that:
- The "timebase" of the zoomed view is shown (that does seem to be the case here -- excellent)
- The zoomed view can be configured to always show the portion of the waveform that triggered the capture.
If those attributes are not present in the Siglent, then what you're showing is still inferior to the mechanism other scopes use
for this particular use case. But if those attributes are present, then the Siglent's approach is as good as other implementations for this use case, and I'd then have to give props to Siglent for that implementation. It requires that one understand how the scope works, but that's true of any scope.