Also, I note that none of the Agilent/Keysight scopes will do Dallas "1-Wire" serial protocol decode. That seems like quite an oversight; I had to get a Picoscope just to do this.
I wonder if they will ever release a tool to let us write our own protocol decoders. I think the HP/Agilent 16700/800/900 series of logic analyzers had this feature.
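For what it's worth, the decode itself is simple enough to do offline on an exported capture. Here is a minimal Python sketch, assuming you can get the waveform out as a list of timestamped edges (the function name and edge-list format are my own, the pulse-width limits are rough standard-speed 1-Wire numbers, and byte/ROM-command assembly is left out):

[code]
# Minimal offline 1-Wire decode sketch (illustrative only).
# Input: list of (time_us, level) edges, level 0/1, from a thresholded capture.

RESET_MIN_US = 480   # master reset pulse is at least ~480 us low
ZERO_MIN_US  = 15    # a low time past ~15 us within a slot reads as '0'

def decode_1wire(edges):
    events = []
    for (t_fall, lvl_f), (t_rise, lvl_r) in zip(edges, edges[1:]):
        if not (lvl_f == 0 and lvl_r == 1):
            continue                      # only classify low pulses
        low_us = t_rise - t_fall
        if low_us >= RESET_MIN_US:
            events.append(('RESET', t_fall))
        elif low_us >= ZERO_MIN_US:
            events.append(('BIT', t_fall, 0))
        else:
            events.append(('BIT', t_fall, 1))
    return events

# reset pulse, then a '0' slot (60 us low) and a '1' slot (5 us low)
print(decode_1wire([(0, 0), (500, 1), (600, 0), (660, 1), (700, 0), (705, 1)]))
# -> [('RESET', 0), ('BIT', 600, 0), ('BIT', 700, 1)]
[/code]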
The serial decoders are built into the ASIC silicon.
They could add it on the CPU code side, I guess; nothing is stopping that, but it won't be quick. Then again, 1-Wire itself isn't quick, so that's likely not a problem.
Nowadays an ASIC or FPGA isn't the only viable solution to get signal processing done quickly. A GPU is very power efficient, dirt cheap and very versatile. I think there are quite a few DSOs out there which leverage GPU processing power, resulting in much better performance across a wider range of functions at lower NRE (development cost). But even then, the protocol decoders are likely running on a dedicated piece of programmable logic inside the Megazoom ASIC. I highly doubt they are 'hard-coded', as that would require dedicated logic for each protocol and leave no way to fix bugs or add features and protocols afterwards.
The GPU idea sounds fun, but practice proves otherwise. A GPU might have the math capability, but it has no input data throughput, at least in many implementations out there. It is powerful, but in the wrong place in the data path. Many CPUs have powerful DSP blocks/instructions, and those are used instead. If someone made an FPGA where the GPU block had the full bandwidth of the memory interface and you could build the pipeline ADC / trigger / decimation block / GPU / CPU with memory at the same level, then it would be interesting. But just as in crypto, where FPGA DSP blocks compete with GPUs, people simply use the DSP blocks inside the FPGA.
Protocol decoders are probably not running fully inside the Megazoom ASIC. They never have, even in MegaZoom IV; there is no need for it. The hardware only did tokenization, and the final decode for the screen was done in software.
But that tokenization accelerates things many-fold (you don't need render-time scanning of the full analog data to depict digital states), and it also was (is?) done at the full sample rate, preserving digital bandwidth even when the analog bandwidth plummeted because of small sample memory. If they are doing the same thing with the new scope, it might be able to do seriously long decode captures, provided they didn't limit the number of packets.
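Roughly, the split works like this. A hedged Python sketch of the idea (the token format and function names are illustrative only, not what the ASIC actually does): one linear pass over the full-rate samples reduces them to (level, run-length) tokens, and everything screen-side then works from that tiny token stream instead of re-scanning the raw record.

[code]
# Illustrative tokenize-then-decode split. One full-rate pass turns raw
# samples into (level, run_length) tokens; later decode/render passes never
# touch the raw record again, so capture length barely matters to them.

def tokenize(samples, threshold=0.5):
    tokens = []
    level, run = samples[0] > threshold, 0
    for s in samples:
        bit = s > threshold
        if bit == level:
            run += 1
        else:
            tokens.append((int(level), run))
            level, run = bit, 1
    tokens.append((int(level), run))
    return tokens

# a 1 MHz square wave sampled at 1 GSa/s for 1 ms: 1,000,000 raw samples
samples = [1.0 if (i // 500) % 2 == 0 else 0.0 for i in range(1_000_000)]
tokens = tokenize(samples)
print(len(samples), "samples ->", len(tokens), "tokens")  # 1000000 samples -> 2000 tokens
[/code]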
It is a brand new platform; it seems to be based on parts of EXR-type software (Fault Finder, for instance), with brand new visuals and architecture. The new hardware architecture is based on a new front-end preamplifier (ASIC?) and a new ADC (a separate chip) derived from their previous 10-bit converters, reconfigured for more bits and a lower sample rate. And there is a logic chip that holds the Megazoom V "technology", which happens to be an ASIC.
So this is not the same as the old Megazoom, which was a single-chip scope to which you only needed to add a human interface. This scope (funny enough) is, by its architecture, chipset based, VERY much like what Rigol did.
And at this point it is also what Siglent does, except the digital ASIC lives in an FPGA and the ADC is off the shelf.
It will be interesting to see, when in a few years we get yet another generation of FPGAs, whether the MZ V ASIC will still be faster for the logic functions...
Like I said, it is all VERY new. And Keysight released VERY early; some basic functions are still missing.
What it really can do, and how well it will do it, we will see in a year...
Yes, I think it will take at least a year to get this new line up to Keysight's own high bar for software/product quality.
And then we will see what these scopes can really do.
As for the old InfiniiVision "everything just works" refinement, it will take years to get there. You cannot skip steps. It takes time.
But if Keysight keeps the platform long term, they will get there, and in the long run it will become a good product.