Increasing the memory depth of those Rigol scopes further reduces their acquisition rates, same with the Tektronix examples David is talking about; it's like this with most scopes and well known.
Thanks Captain Obvious, but the point was not whether scopes get slower with larger sample memory (they do) but why. David seems to believe it's because of processing, but in reality this is simply down to basic math.
There is no basic maths you can apply to determine how fast a particular scope will update.
There's basic math which tells you how long it takes to capture a specific segment, a time that no scope, no matter how fast, is able to beat:

T_capture [seconds] = size_memory [Samples] / f_sampling [Samples per second]
Knowing this, it should really be no surprise that a long memory scope will take more time to complete an acquisition cycle than a short memory scope.
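The relationship is trivial to put in code. A quick sketch (function name and example numbers are my own, purely for illustration):

```python
def capture_time(memory_samples, sample_rate):
    """Minimum time [s] to fill the acquisition memory once:
    T_capture = size_memory [Samples] / f_sampling [Samples/s].
    No scope, however fast, can complete an acquisition quicker."""
    return memory_samples / sample_rate

# e.g. 12 MSa of memory at 1 GSa/s needs at least 12 ms per acquisition
print(capture_time(12e6, 1e9))  # 0.012
```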
Let's go back to your point where this started:
They could not even support their longest record length without reducing their display update rate noticeably so they allowed shortening the record length even further.
Please explain how a scope should maintain the same update rate in small memory (say 4k) as in large memory (say 4M) when, by the laws of physics and math, at a given sample rate it takes 1000x as long to fill the large memory as it does the small one. Of course the update rate will drop when using large memory, unless your scope uses HPAK's trick of using only small memory and only making the last acquisition a long one?
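The 4k vs. 4M case can be put in numbers with the same formula (1 GSa/s is assumed here just as a round figure, and the function name is mine):

```python
def max_update_rate(memory_samples, sample_rate=1e9):
    # Ceiling on the waveform update rate: acquisitions can't repeat
    # faster than the memory can be filled at the given sample rate.
    return sample_rate / memory_samples

small = max_update_rate(4e3)  # 4k memory: 250,000 wfm/s ceiling
large = max_update_rate(4e6)  # 4M memory: 250 wfm/s ceiling
print(small / large)  # 1000.0 -- the factor from above
```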
Ideally you wouldn't have to compromise on memory depth; it would always be as deep as possible for the horizontal window. You are limited by sample rate for short captures and by memory depth for long captures, but in the in-between, where neither is limiting, people still choose a shorter memory depth than they could capture because a deeper one slows down aspects of the scope such as the waveform display rate.
Correct, which again should be no surprise to anyone who knows about the mathematical relationship between time per acquisition, sample memory size and sample rate, as demonstrated above.
You can measure this, so I took a Rigol 1054 and did the comparison, setting both scopes to 50 µs per division:
[...]
The theoretical zero-blind-time rate is 2000 wfms/s for the Keysight and 1667 wfms/s for the Rigol (which displays an extra 2 horizontal divisions). Both are maxing out at 1 GS/s for this test, but the Keysight gives you no options to change to other memory depths, while the Rigol, with all its choices, fails to match the realtime performance. It even lets you choose longer depths that are captured outside the display but not shown until you stop and zoom around the capture. At the shorter memory depths the Rigol is dropping its sample rate and not putting 1 GS/s data onto the screen, which is why comparisons need to be made carefully. Processing (and/or memory bandwidth) is limiting the ability to draw more information to the screen, and that is the reason why many scopes offer the choice of shorter memory depths.
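For reference, those zero-blind-time figures follow from the displayed window alone, assuming 10 horizontal divisions on the Keysight and 12 on the Rigol (a sketch, names are mine):

```python
def zero_blind_time_rate(time_per_div, divisions):
    # A scope that never stops acquiring still can't update more
    # than once per displayed time window.
    return 1.0 / (time_per_div * divisions)

print(round(zero_blind_time_rate(50e-6, 10)))  # Keysight, 10 div: 2000 wfm/s
print(round(zero_blind_time_rate(50e-6, 12)))  # Rigol, 12 div: 1667 wfm/s
```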
That is all great, but again you miss the point. This was about long memory being useful or, as David Hess claimed, just being a marketing gimmick. No-one argued that a scope must always be run at max memory settings; if you believe this, then that is just your interpretation. The discussion was whether long memory on a scope is a needed feature, and I argue that in this day and age, yes, it is.
Besides, the point about processing David and I have been discussing earlier was pretty much about what processing was available in comparable scopes back then. Comparing the performance of one of the cheapest bottom-of-the-barrel scopes on the market with a not exactly cheap upper entry-level/lower mid-range DSOX3k with dedicated waveform ASIC (MegaZoom) and then concluding that the Rigol is limited by processing power (what a surprise!) has nothing to do with what David and I were discussing, and is also a bit silly, really.
If you want to capture a specific length of data, sure, it's nice to have the controls available, and I did mention that is one corner case.
Considering that memory controls are standard on the majority of scopes I doubt it's just a "corner case". I use it regularly in a wide range of situations, as do my colleagues.
But in general what people want to capture is a length of time, and they would like to have as much memory and sample rate as possible, but for most scopes that's balanced against the realtime waveform update rate. Or the user needs to capture elements with a particular frequency, so they are constrained in their lowest possible sample rate, and again the tradeoff appears.
Probably right (at least for standard measurement situations), but most newer scopes that offer sample memory controls also offer an automatic mode which uses only as much memory as required to maintain a high waveform rate, so at least with these scopes this isn't really an issue.
Not all do, like the Tek MDO3000, and it can add to this scope's already overall very high frustration factor, but scopes like that are thankfully in the minority.
Or we can take the Keysight X series scopes where they provide no choice, but there are so few cases where you would want shorter memory depths on them that it seems reasonable they left the option out.
You're right, but in cases where the auto selection isn't good enough you'd usually want more memory, not less. I guess the reason the DSO-X, like any InfiniiVision scope right back to the old HP 54542A/D from 1995, lacks memory controls is probably down to their MegaZoom ASIC, which (like the comparatively small memory) wasn't considered a problem for a scope optimized for very high update rates at the cost of pretty much everything else. After all, any other HPAK scope, back then and today, does have sample memory settings, and newer ones also have an automatic mode.
I find this is much easier to work with than memory depth, as I'm generally concerned about the frequencies being captured, not the specific length of memory being used to do this. Again, when you always get as much memory as possible used in the captures, you can forget about that parameter and focus on the ones that matter to your specific situation. Yes, going to a single capture doubles the memory depth in many situations (but not all), but when looking at the signal I can quickly assess if the sample rate is sufficient for the information I want to see and adjust the controls accordingly.
Fair enough, and I can see why not having to care is practical in most standard measurement situations. But again, that is true for most somewhat newer scopes that offer user-controllable memory, as they also have an automatic mode. This isn't an either-or decision; these days you can have both: automatic memory management for everyday use, and manual controls when needed.
Anyway, this discussion wasn't about the DSO-X, which in the context of sample memory still counts as a deep memory scope; it was about whether long memory is a worthwhile feature to have or not.