There was a neat video showing the serial decoding on a high end Rigol scope vs an Agilent and Tektronix, and the Rigol was missing data every now and then.
I do agree that it was an interesting video. Interesting enough that I decided to go back and take another look. Initially I had been focusing on the price differential between the 3 scopes, which made it seem like a somewhat unfair comparison to me. This time I noticed it was Part 1, so I went looking for Part 2, which turned out to be even more informative. Part 2 included some closeup shots, which made it possible to ramp up to 1080p and see what modes the various scopes were running in. All were set for 100us/div, but that was the only thing they had in common.
It's important to remember that these modern DSOs are powerful and complicated devices, and it's easy to overlook important details that have a large impact on their operation. When that happens, the results obtained, and the conclusions drawn from them, can easily be compromised. Here's how they were configured and operating*:
Scope Model          Sample Rate   Samples Displayed   Relative "Work"
Agilent MSO-X4154A   500MSa/s      500K                5x
Tek MSO3054          100MSa/s      100K                1x
Rigol MSO4054        2000MSa/s     2800K               28x
So not only are the 3 scopes sampling at radically different rates, they're also capturing significantly different amounts of data**. Then they have to process all that data down to create the 600-700 points actually displayed. And finally that data is processed through a decoder to generate the protocol stream. The other 'disadvantage' the Rigol is subject to is that it provides 40% more display area, with 14 horizontal divisions vs. 10 for the other two. Something that would certainly come in handy for displaying protocol streams.
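To make that reduction step concrete, here's a rough sketch of min/max decimation, one common way a DSO boils a deep capture down to a few hundred pixel columns. This is purely illustrative on my part; I'm not claiming any of these 3 scopes does it exactly this way:

    import numpy as np

    def minmax_decimate(samples, display_points=700):
        # Each display column keeps the min and max of its chunk of raw
        # samples, so narrow glitches survive the reduction.
        cols = display_points // 2
        usable = (len(samples) // cols) * cols   # drop any ragged tail
        chunks = samples[:usable].reshape(cols, -1)
        return np.stack([chunks.min(axis=1), chunks.max(axis=1)], axis=1)

    # Same ~700-point display budget, very different workloads:
    rigol = minmax_decimate(np.random.randn(2_800_000))   # 28x the Tek's data
    tek   = minmax_decimate(np.random.randn(100_000))

Either way, the per-update cost scales with the raw sample count, not with what ends up on screen.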
In light of the above information, the fact that the Rigol is processing 28 times as much data as the Tek may help explain its "slowness". And not only does the Agilent have hardware decode that the others lack, it also has another 5.6x advantage over the Rigol, due to the smaller amount of data it's processing for display. Lastly, I found several things in the YouTube review rather odd:
1) he emphasized that the Tek was keeping up with the decoded data, but made no mention that its SCL and SDA analog traces were not collapsing and expanding between Write and Read modes the way the Agilent's and Rigol's were. Basically, both traces were nearly worthless.
2) he stated that the Rigol was not capturing all the data, which was untrue; it was actually only not displaying all the decoded data. Those are two quite different things.
3) after already pointing out how slow the Rigol was, he then went on to comment on how it could be made even slower by turning on an additional decoder channel. I fail to see the point of that, since he didn't turn on additional decoders on either of the other scopes to see what impact that might have had.
4) he finished by pointing out how unresponsive the user menus were while the Rigol was busy processing gigasamples of data. When there's too much going on to keep up with all of it, an implementer has to decide whether to give priority to data acquisition, display, or interactive user feedback (see the sketch after this list). Rigol certainly could have made the menu selection buttery smooth, at the expense of the other functions. Instead, it appears they've tried to strike a balance, which, IMO, is the proper approach.
5) he concluded that the Rigol was underpowered and needed a faster processor. That may be true, but his demo did not validate that claim.
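To illustrate the tradeoff in point 4, here's a hypothetical main loop (my own sketch, not anything from Rigol's actual firmware) that feeds acquisition and display first and gives the UI a small fixed time slice. All the names and the 5ms budget are made up for the example:

    import time

    def main_loop(acquire, render, poll_ui, cycles=100, ui_budget_s=0.005):
        # Acquisition and display get priority; the UI only gets a small
        # fixed slice of each pass, so menus lag when the scope is busy.
        for _ in range(cycles):
            acquire()                     # keep the sample pipeline fed
            render()                      # update traces and decode display
            deadline = time.monotonic() + ui_budget_s
            while time.monotonic() < deadline:
                if not poll_ui():         # stop early if no events pending
                    break

    # Dummy stand-ins, just so the sketch runs:
    main_loop(acquire=lambda: time.sleep(0.002),
              render=lambda: time.sleep(0.001),
              poll_ui=lambda: False)

Shrink the UI budget and the menus get sluggish; grow it and you start dropping acquisitions. That's the balance Rigol appears to have struck.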
Conclusion: don't take at face value everything you see on YouTube.
--
*NB: I was checking this on a small screen; if I had scrolled down, I would have seen in the Comments section that Martin Zuber had already noticed some of the same anomalies, and commented on the sampling rates and extra points the Rigol was processing.

**One other thing to note is that while the Rigol (and the Tek) continuously capture their full sample set while in Run Mode, the Agilent doesn't necessarily do so. This is a clever 'trick' they devised, which actually makes a fair amount of sense. Say the scope has a 4M-sample-deep buffer. Rather than capture all of that on every cycle, it only captures as much data as is being displayed, in this case 500K. Then, when you hit Stop, the scope actually keeps running and captures the remaining 7/8 of its buffer, so it can be examined afterwards.
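Here's that trick as a sketch, based purely on my reading of the behavior described above; adc_read() is a hypothetical stand-in for the acquisition hardware, and the 4M/500K figures are the ones from the example:

    def run_mode_pass(adc_read, displayed=500_000):
        # Fast path while running: only grab what's actually on screen.
        return adc_read(displayed)

    def on_stop(adc_read, shown, buffer_depth=4_000_000):
        # After Stop, quietly keep acquiring to fill the remaining 7/8
        # of the buffer so the user can scroll/zoom through it later.
        return shown + adc_read(buffer_depth - len(shown))

    adc_read = lambda n: [0.0] * n    # hypothetical hardware read
    shown = run_mode_pass(adc_read)
    full = on_stop(adc_read, shown)
    assert len(full) == 4_000_000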