That LeCroy document agrees with what I wrote and with the various posts from Wurstunhund, but I wonder if they did it differently in the past. I didn't go into that much detail, but breaking the acquisition record up into cache-sized chunks is an obvious thing to do to maximize processor throughput, which matters a lot when the processor is doing the heavy lifting, as it does in LeCroy's approach.
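Just to illustrate the idea (this is my own sketch, not anything LeCroy has documented): if you run several operations over each block while it's still cache-hot, you touch main memory once per block instead of once per operation. The block size and the two example passes below are assumptions for illustration.

```python
# Illustration only: process a long acquisition record in cache-sized
# blocks. The 256 KiB block size and the two passes (mean + min/max
# envelope) are assumptions for this sketch, not LeCroy's actual design.

CACHE_BLOCK_BYTES = 256 * 1024          # assumed cache-sized chunk
SAMPLE_BYTES = 2                        # assume 16-bit samples
BLOCK_SAMPLES = CACHE_BLOCK_BYTES // SAMPLE_BYTES

def per_block_minmax(record):
    """Run several passes over each block while it is still cache-hot,
    instead of one full-record sweep per operation."""
    results = []
    for start in range(0, len(record), BLOCK_SAMPLES):
        block = record[start:start + BLOCK_SAMPLES]
        mean = sum(block) / len(block)        # pass 1: DC estimate
        lo = min(block)                       # pass 2: envelope
        hi = max(block)
        results.append((mean, lo, hi))
    return results
```

The point is only the access pattern: each block is read from RAM once and then reused for every pass, which is where the throughput win comes from when the CPU does all the processing.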
From what I read it appears the principle is the same for all LeCroy scopes, even the very old ones from the 1980s. They seem to be a bit like a separate acquisition box connected to a PC, i.e. the box does one thing, which is acquiring the signal and writing the sample data into its sample memory, and it's up to the PC to do something with the data. Other scopes, by contrast, appear to manipulate the data in the sample memory itself.
As Wurstunhund has pointed out, the LeCroy way has the advantage of always working with the entire acquisition record, but I'm not convinced the trade-off is worth it.
I don't know. At college we had those Tek scopes (the ones with the built-in spectrum analyzer), and when we searched for problems we switched them into a special persistence mode (FastAcq?) to look for anomalies; when we found something we used the trigger to pin it down, so we could measure it and see when the problem was gone. My first own scope was a DS1054Z and I used it the same way. Then I wanted more bandwidth and bought the LT264, mostly because it was cheap.

At first I couldn't get on with it, so I googled around for a manual, and that's how I found Mr Wurstunhund's messages. He seemed quite adamant that a better way to find problems in a signal than persistence mode is to set up the scope so it 'knows' what your signal is supposed to look like, and have it trigger on the anomalies. That sounded way too difficult, but I tried (I did find a manual eventually), and now most of the time I get by without persistence mode: I just set up the scope with the signal parameters and let it trigger on the anomalies. With sequence mode and history I even get a table with a timestamp for each occurrence. I can also add measurements so I can see immediately what each anomaly looks like. And because it works through the trigger, there's no blind time like in persistence mode.
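The trigger-on-anomaly idea can be sketched in software terms: tell the detector what a 'good' pulse looks like and log a timestamp for everything that falls outside it, much like the timestamp table that sequence mode and history build. This is purely a conceptual sketch; the nominal-width-plus-tolerance criterion is my own assumption, not how any particular scope implements its triggers.

```python
# Conceptual sketch of "trigger on the anomaly" instead of watching
# persistence: the detector knows the nominal pulse width and flags
# every pulse outside tolerance, returning a timestamp table.
# The width/tolerance criterion is an assumption for illustration.

def find_anomalies(samples, sample_period, nominal_width, tol=0.2):
    """Return (timestamp, width) for every high pulse whose width
    deviates from nominal_width by more than tol (fractional)."""
    anomalies = []
    start = None
    for i, level in enumerate(samples):
        if level and start is None:
            start = i                            # rising edge
        elif not level and start is not None:
            width = (i - start) * sample_period  # falling edge: pulse done
            if abs(width - nominal_width) > tol * nominal_width:
                anomalies.append((start * sample_period, width))
            start = None
    return anomalies
```

With, say, 10 µs nominal pulses, a single runt in a long capture comes back with its timestamp immediately; there's no waiting for enough sweeps to accumulate, which is the same advantage the trigger-based method has over persistence mode.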
If I use the LT264 (or my 'new' LT574M) like those Tektronix scopes, then I don't get very far if the anomaly is rare.
If I use it the way Mr W has suggested, I'm pretty sure I could find any anomaly in my signal, no matter what, the first time it occurs.
Whether it's worth it, I don't know. For me it was, because now I can make better use of the functionality in my scopes and get better results faster. But it's only been a few years since I left college, so I've used the standard method for only a modest period of time, which probably makes it easier to switch to a different method.
There's obviously a learning curve. I'm fine with static signals, but I'm still learning how to set up the scope for signals that change; I'm slowly getting there. It's amazing what these old scopes can do.
I also like that I can apply some function and then try another function without the need to reacquire the signal because the original waveform is always in the sample memory.
My 'new' Infiniium 8064 has different acquisition modes, like the Tek scopes from college or my DS1054Z. If I use anything other than normal mode, I don't get the original waveform data in sample memory.
I was referring to the other, non-PC scopes; I forgot they used the MegaZoom branding on the PC ones too.
I don't think they use the same kind of MegaZoom ASIC in the PC-based scopes. The ones without PCs inside (6000, 7000, X2000, X3000, X4000, X6000, etc.), the ones with the super fast update rate, are a highly integrated solution where basically everything is done in a single chip. The ADC feeds the ASIC, which spits the image out to the screen, while the CPU sits on the side doing its thing with the menus and I/O interfaces.
The PC-based scopes, on the other hand, are not all that highly integrated. There are many large chips with heatsinks doing various tasks, and all the sample memory is external in the form of a large bank of dynamic RAM with a really wide bus to the acquisition ASICs. There are also always huge FPGAs to be found on the board that tie all of it together and shovel the data back out to the PC.
From what I've read, the MegaZoom in Infiniium and InfiniiVision(?) scopes is practically the same; the difference is that the Infiniiums have much more memory (up to 128M) compared to the 4M in the InfiniiVision scopes. I think Mr Keysight_DanielBoganoff explained that in another thread, but I can't remember which one.
But based on what I read, in those old Infiniium scopes the waveforms are created by an ASIC and overlaid onto the Windows app. It appears the early scopes used some hardware method, while newer Infiniiums like the 8000 use software overlays.
But it appears that for scopes of this class, waveform update rates are mostly irrelevant, because they have much better tools to get the information you want.