Well you learn something every day
Indeed
Yep, another corner case for nctnico's atypical use case: rejection of zoom mode. You went with the obvious/logical use case!
Well, I start to see why Nico is doing it the way he does; it's probably a reflection of his general approach to a measurement problem. I still remember similar discussions we had about the importance (or not) of Peak Detect, another feature he seems to consider crucial. I guess this is just his way of doing stuff, and if it works for him then I'm not going to tell him to do anything else.
But while I believe I understand (to some extent) why *he* does it this way, I still fail to see any real-world advantage of this method (and even less where the claimed increase in efficiency or time savings is supposed to come from). The fact alone that this only works on some scopes (of which most only do this for a single acquisition), while the standard "capture long and then zoom in" methodology (which deep-memory scopes were designed for) works on every scope, makes it impractical in a professional setting where you have to work with various different scope models.
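To put numbers on why deep memory makes "capture long, then zoom in" work, here's a toy calculation. The 24 Mpts / 1 GSa/s figures are in the ballpark of a DS1000z-class scope, but are only illustrative, not a spec sheet:

```python
# Illustrative deep-memory arithmetic: how much zoom headroom a long
# capture gives you. All figures are examples, not a spec sheet.

MEMORY_DEPTH = 24_000_000      # points (roughly DS1000z-class)
SAMPLE_RATE = 1_000_000_000    # 1 GSa/s

# Total time recorded into memory in one acquisition:
capture_time = MEMORY_DEPTH / SAMPLE_RATE          # 24 ms

# A screen showing 12 divisions at 2 us/div displays only:
screen_time = 12 * 2e-6                            # 24 us

# Zoom headroom: how much signal sits in memory beyond the screen.
headroom = capture_time / screen_time
print(f"captured {capture_time*1e3:.1f} ms, screen shows {screen_time*1e6:.0f} us")
print(f"=> room to pan/zoom over {headroom:.0f}x the visible window")
```

So with these (made-up) settings, one acquisition holds a thousand screens' worth of signal, and the zoom window can be placed anywhere in it, live or stopped.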
Personally, it also doesn't suit me because it sounds a bit arbitrary (like blindly poking around in some circuitry), as I tend to think about what I want to measure and what I expect to see, and then set up the scope accordingly. But that's just me (although most of our engineers are the same "think before you do" types). But hey, whatever fits you best.
The problem I see is that this method is still rather niche, and due to its various limitations compared to the widely used standard "capture long and zoom in" method, it's really not a relevant criterion for a scope unless the user specifically asks for it, and it should not be treated as one.
Zoom mode adds one very important control that is missing when letting the scope expand around the visible window: you gain control of where the trigger is located within the full capture depth.
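To make the trigger-position point concrete, a small sketch with made-up numbers: with the trigger placed 10% into a 24 ms record, you keep 2.4 ms of pre-trigger history and 21.6 ms of post-trigger data, and you can slide that split wherever the problem demands:

```python
# Sketch: how the trigger-position control divides a deep capture into
# pre- and post-trigger history. Numbers are illustrative only.

def trigger_split(capture_time_s: float, trigger_pos: float):
    """Return (pre_trigger_s, post_trigger_s) for a trigger placed at
    trigger_pos (0.0 = start of the record, 1.0 = end of the record)."""
    pre = capture_time_s * trigger_pos
    post = capture_time_s * (1.0 - trigger_pos)
    return pre, post

# 24 ms capture, trigger 10% into the record:
pre, post = trigger_split(0.024, 0.10)
print(f"pre-trigger: {pre*1e3:.1f} ms, post-trigger: {post*1e3:.1f} ms")
```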
Not only that. There is also the problem that the excess data which even some of the scopes do record is not always accessible.
Take the Rigol DS1000z. We've now established that if you set the memory depth to manual, it will always record the full selected memory size, no matter what. Great, right?
But here's the thing: on the DS1000z at least, you can't always access that data. In RUN mode, if you switch to zoom mode, you can still only zoom *in* on the signal on screen, but not zoom *out* to look at off-screen data.
You can, of course, change the timebase, but that just means the screen will show the new timebase length after the next acquisition (i.e. the data from the previous acquisition is gone).
You can, of course, also set up a trigger on a specific event, and then after the capture change the timebase to effectively "zoom out". However, for this to work there must be no subsequent acquisitions, or the data you want to look at gets overwritten (and if that happens after the change of timebase then in effect it's no different from a normal timebase change on any scope).
The test Nico described is essentially this: trigger on a specific event, then stop the trigger, and then "zoom out".
Which, consequently, means that you can *only* look at off-screen data if the scope is no longer acquiring. It doesn't matter here whether this is because it was the last in a series of acquisitions (with no subsequent acquisitions following), or because the scope was in SINGLE mode.
In contrast, the standard method of "capture long then zoom in" works irrespective of whether the scope is halted or still acquiring. With zoom, I can watch live data, jump to a different part of the signal and watch other live data. With Nico's method, I have to acquire, then make sure the scope is not re-triggered, and then "zoom out".
With the standard method, on better scopes, I can even have multiple zoom windows, each showing a different segment of the signal. Single or continuous, it doesn't matter.
Now, let's look at Nico's example of the SPI frame:
He sets the scope to max memory, sets the trigger on the interesting data segment, and sets the timebase so that the data bit fills the screen.
Now if the trigger event is rare or unique (say, the data segment has to reach a specific value), then after capturing, the acquisition will essentially stop (waiting for the next trigger). If it's a common event, then the acquisition must be stopped manually, as otherwise the off-screen data would be inaccessible.
OK, let's say we have looked at the data segment, and now want to see the rest of the frame (which is outside the screen). So we change the timebase setting to the full frame length (whatever that is) and yes, we can see the data that was off-screen before.
But the thing is that, at this point, our scope is now set to *exactly* the same timebase setting that we would have used with the common "capture long and zoom in" method. Except that the latter would also have allowed us to see both the whole frame and a detailed view of the data segment of interest without stopping the scope, i.e. we could have observed live behavior. Something which, at least on the DS1000z, isn't possible with Nico's method.
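That convergence can be spelled out with made-up SPI numbers (a 1 MHz clock, an 8-byte frame, and a 12-division screen are assumptions for illustration, not from Nico's post):

```python
# Sketch: both workflows end at the same main timebase. The SPI figures
# (1 MHz clock, 8-byte frame, 12 divisions) are made up for illustration.

SPI_CLOCK_HZ = 1_000_000
BITS_PER_BYTE = 8
FRAME_BYTES = 8
DIVISIONS = 12                 # horizontal divisions on screen

bit_time = 1 / SPI_CLOCK_HZ                          # 1 us per bit
byte_time = BITS_PER_BYTE * bit_time                 # 8 us per byte
frame_time = FRAME_BYTES * BITS_PER_BYTE * bit_time  # 64 us per frame

# Nico's method: start zoomed in on one byte, then (stopped) widen the
# timebase until the whole frame fits on screen.
start_tb_nico = byte_time / DIVISIONS
final_tb_nico = frame_time / DIVISIONS

# Standard method: capture the whole frame from the start, then zoom
# in on the byte. The main timebase is the frame-length one throughout.
final_tb_standard = frame_time / DIVISIONS

print(f"final timebase, both methods: {final_tb_nico*1e6:.2f} us/div")
```

Both end on the same main-timebase setting; the difference is only whether you had to stop the scope to get there.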
So in the end, Nico's method seems to be a convoluted way of avoiding the scope's zoom function while achieving the same result that the standard method provides.
It's not just the DS1000z. I suspect all Rigol scopes behave the same (i.e. don't let you zoom outside the screen unless the scope is halted). HPAK's MegaZoom-based scopes (InfiniiVision and 546xx) for sure do.
Can the R&S scopes "zoom out" to off-screen data in RUN mode, or do they need to be stopped? I don't know (maybe someone else can try).
In any case, Nico's method sounds more like a pain in the ass than a solution to a problem which actually exists.