I am generating an 18 MHz SPI signal (a complete loop from Frame 1 to Frame 8 is 90 µs, so I have 11,111 loops per second):
Frame 1: "0"
Frame 2: "1S"
Frame 3: "2Si"
Frame 4: "3Sig"
Frame 5: "4Sigl"
Frame 6: "5Sigle"
Frame 7: "6Siglen"
Frame 8: "7Siglent"
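For reference, the payloads above follow a simple pattern (the frame index as an ASCII digit followed by a growing prefix of "Siglent"), and the loop math checks out. A quick Python sketch using only the figures already given:

```python
# Reconstruct the 8 payloads: "0", "1S", "2Si", ..., "7Siglent"
frames = [str(i) + "Siglent"[:i] for i in range(8)]

LOOP_PERIOD_S = 90e-6                 # complete loop, Frame 1 to Frame 8
print(f"{1 / LOOP_PERIOD_S:.0f} loops per second")    # ~11111

# Sanity check: 36 payload bytes = 288 bits; at an 18 MHz SCLK the
# clocking alone takes ~16 us, the rest of the 90 us is inter-frame gaps.
total_bits = sum(len(f) for f in frames) * 8
print(f"clocking time per loop: {total_bits / 18e6 * 1e6:.0f} us")
```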
I am triggering on the value 't' (0x74). The Siglent SDS1104X-E is able to trigger, and the displayed decode info refresh rate seems comparable to the Keysight EDUX1002G and the GDS1054B.
The only "issue" for me is still the LAG between the waveform display and the decoded information. The GDS1054B uses the same ZINQ device and is able to update the decoded information very rapidly, I assume Siglent can fix this LAG if they look closely into it.
The Keysight 1000X SPI trigger (and I guess this also applies to the 2000X and 3000X) needs a value for the full SPI frame. As I am triggering on the letter 't' (the last byte of Frame 8, "7Siglent"), and the full frame has 8 bytes, you need to specify values for all 8 bytes. Fortunately you can set all of them to X (don't care) except the last byte, which you set to 0x74.
I have 11,111 loops (Frame 1 to Frame 8) per second, and the letter 't' appears once per loop, so I should have 11,111 triggering events.
The SDS1104X-E can trigger 334 to 1000 times per second, so at best it is finding around 10% of all the 't's in a second (see the quick calculation below). The GDS1054B is similar.
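Putting exact numbers on that 10%, using only the figures above:

```python
expected = 1 / 90e-6                  # ~11,111 't' bytes per second
for observed in (334, 1000):
    print(f"{observed} triggers/s -> {observed / expected:.0%} of 't's caught")
# 334 triggers/s  -> 3% of 't's caught
# 1000 triggers/s -> 9% of 't's caught
```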
The Keysight EDUX1002G can trigger approximately 11,000 times per second (Trigger Out shows a measured frequency of 11.05 kHz), so it is apparently finding every letter 't' in a second. Hardware decoding really makes a difference.
If there is a random byte in the SPI stream, it is probable that the Keysight will be able to trigger on it, but the chances for the SDS1104X-E and GDS1054B to capture it are about 10% of the Keysight's.
My next test will be generating a specific byte like 0xFF at a random interval and seeing which scope can catch it. I think I already have the answer...
Those are good measurements and info.
I'm curious... What happens on the SDS1104X-E and GDS1054B if you set up the SPI trigger but don't enable decode?
Will the trigger rate still be that slow?
What I'm trying to say is that software decoding (if properly implemented) shouldn't have any impact on the trigger rate. Decoding and display should live in a completely separate display loop that skips (decimates) decode data it cannot show in real time, while triggering, waveform display, and saving to history segments keep running without slowdown (see the sketch below).
So if it can't keep up with the screen refresh it doesn't matter, because when you stop acquisition and go into the history buffers, it's all there.
That would be a good compromise for a software-decoding scope: it would guarantee that you won't miss packets that are too fast to see anyway...
And if you are looking at the waveform, you will see that something is changing, so you know you need to investigate through the decode table.
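To make the idea concrete, here is a rough sketch of that loop structure, in Python only for readability; every name in it (acquire_segment, decode, and so on) is a hypothetical placeholder, not any scope's actual firmware API:

```python
import queue

# Hypothetical placeholders throughout -- a sketch of the idea, not firmware.
decode_queue = queue.Queue(maxsize=8)     # tiny: the display can't keep up anyway

def acquisition_loop(acquire_segment, save_to_history):
    """Triggering, waveform capture and history never wait for the decoder."""
    while True:
        seg = acquire_segment()           # runs at the full trigger rate
        save_to_history(seg)              # every segment is kept
        try:
            decode_queue.put_nowait(seg)  # offer the segment to the decoder...
        except queue.Full:
            pass                          # ...but decimate: drop it if busy

def display_loop(decode, show):
    """Decodes only what it can show in real time; everything else is skipped."""
    while True:
        seg = decode_queue.get()
        show(decode(seg))                 # may lag, but never slows triggering

# On STOP you replay the history segments and decode all of them offline,
# so nothing is lost even though the live display decimated.
```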
A randomly sent specific packet will be detected at the same rate under the same settings. If the packets are spaced more than 1 ms apart (which matches the ~1000 triggers/s ceiling measured above), they will be detected 100% of the time; if they are closer than that, only the first one will be detected (triggered on).
But, as I said, I would like to see what the trigger rate is without decode on.
Also it would be nice to try with fast segment mode on, and I will tell you why: on the 3000T I pretty much use segmented mode all the time when I need to capture multiple packets, because of its very small memory. If you don't use segmented memory you can barely capture a few packets.
And on the Keysight, segmented memory behaves the same as fast segments on the Siglent: the screen is blank until it captures all the segments.
All of this is actually my point: you are benchmarking 3 scopes to see how fast they run one specific test. But that particular test has a limited relationship with real-world usage, the same as synthetic benchmarks on a computer.
To summarize so far:
1. You proved that enabling decode on the SDS and GDS slows down the trigger rate. I agree it doesn't have to be implemented that way, even for software decode.
2. You noticed that the SDS has a noticeable pause when displaying decoded data that similar hardware on the GDS doesn't. I agree Siglent could optimise that if GW Instek could.
3. You proved that the hardware-decoding Keysight doesn't slow down its trigger rate when you enable decodes. That agrees with their specs and marketing.
That brings us to these observations:
4. You couldn't visually see any difference on the displays that would show the SDS, GDS, and Keysight differ in refresh rate.
5. You had to measure the Trig Out frequency to actually figure out the trigger rate. The super-fast Keysight looked pretty much the same on the screen.
6. So the only useful info from that measurement is not the decoded data from 11,000 frames, but the trigger frequency alone. That is useful to, for instance, measure how many times a second a sensor sends specific data, in which case you don't need decoding: you set up the SPI trigger, don't decode, and just measure the trigger frequency. Of course, that is assuming the SDS is dropping the trigger rate because of decode and not because the trigger itself is slow.
7. If you wanted to actually capture those 11,000 frames to verify something, you would have to put all of them in segmented mode. In that case a (fully unlocked) Keysight 1000 has a maximum of 50 segments, while the SDS supports up to 80,000 segments, so pretty much no limit (see the quick arithmetic below). The GDS seems not to have segmented memory officially; a hacked one seems to.
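As a back-of-the-envelope check on what those segment counts buy you in this particular test (using only the 11,111 frames/s figure from above):

```python
frame_rate = 11_111                   # 't'-terminated frames per second
for scope, segments in (("Keysight 1000X, unlocked", 50),
                        ("Siglent SDS1104X-E", 80_000)):
    print(f"{scope}: {segments} segments ~= "
          f"{segments / frame_rate * 1e3:.1f} ms of this traffic")
# Keysight 1000X, unlocked: 50 segments ~= 4.5 ms of this traffic
# Siglent SDS1104X-E: 80000 segments ~= 7200.1 ms (about 7.2 s)
```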
So that is actually what I want to say: if you are just looking at the scope's display you won't see a difference (apart from that lag) in screen refresh rate. Your eyes can't tell whether a scope is triggering 1,000 or 10,000 times a second; you won't see individual packets. You have to measure the trigger frequency to know the difference, in which case you can simply disable decodes, because you can't read all 10,000 packets in that second anyway. On all of these scopes you will see just random packets that happen to coincide with the screen refresh, and when you press STOP you will see the last one. On all of them.
With the SDS you have the option to set the trigger with no decodes, capture tens of thousands of packets, stop, enable decodes, and then have all of them decoded, with exact timing information for each one. You would have to do the same on the Keysight, but with a maximum of 50 segments.
Don't get me wrong, 50 segments is plenty for most cases. It's just that when we are comparing different designs, we cannot compare them directly. Every architecture will have its strengths and weaknesses, and different ways to do the same things and extract the maximum from the instrument.
Because of the much bigger memory, on the Siglent you might as well just grab one single huge chunk of capture with hundreds of packets inside and use search to find the packets you want (see the rough numbers below).
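For a rough idea of what one huge chunk holds, a back-of-the-envelope sketch; the 14 Mpts depth and 1 GSa/s rate are my assumptions about the SDS1104X-E's maximum settings, so check your own unit's figures:

```python
MEMORY_PTS  = 14e6        # assumed SDS1104X-E memory depth (check your unit)
SAMPLE_RATE = 1e9         # assumed sample rate in Sa/s (check your unit)

window = MEMORY_PTS / SAMPLE_RATE     # seconds of continuous capture
loops  = window / 90e-6               # complete 8-frame loops inside it
print(f"{window * 1e3:.0f} ms window, ~{loops:.0f} full loops "
      f"(~{loops * 8:.0f} frames) in a single acquisition")
# -> 14 ms window, ~156 full loops (~1244 frames) in a single acquisition
```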
The Keysight has a 2 GSa/s sampling rate (which helps against aliasing), and because of hardware decoding it doesn't slow down when you enable decodes.
Everything else is seriously better on the Siglent: memory, segmented memory (it is much larger than the basic buffers), history buffers that run all the time, FFT, and measurements that can run over the whole memory buffer (Keysight runs over a decimated sub-buffer, even on the 3000T series). The GDS will also be much better in those respects, except it has no history and only a single 1 GSa/s ADC; on the other hand it has some other stuff like digital filters...
So depending on how you decide to use the instrument, one or the other will be better.