I don't really grasp what you mean by it being slow. I have a TDS3054 and a TDS784C, and the only time I've ever noticed any kind of slowness in either one is when using deep memory on the TDS784. The TDS3000 feels very snappy to me, so what do I need to do to see this "painfully slow" lag you refer to? I'm genuinely curious and don't know what you're talking about.
It's not about 'lag' or general controls. There isn't any input lag when operating the scope. But unfortunately the user interface isn't everything.
For example, try mask testing on the TDS3000. Or FFT. The TDS3000 is also slow when it comes to waveform rates: in normal mode its trigger rate is some 450 wfms/sec. This rises to around 3k wfms/s in Fast Trigger mode, but then the sample memory (at 10 kpts not exactly large to begin with) is limited to a measly 500 pts. It's not a big problem if you can get by with the available trigger suite (which is quite good if the advanced trigger option is installed), but that doesn't change the fact that the scope *is* slow, and when used in an 'analog scope' manner (like searching for glitches through trace persistence) it will perform poorly.
Slow waveform update rates make it bad, got it...
You clearly didn't 'get' it. I didn't say that the low update rate in normal operation was a problem; I actually said it isn't (and, just for you, I highlighted above where I said so, so you can easily find it).
The point I was making is that even though the scope might feel OK when you twiddle the knobs, it's still a very slow scope. And while the waveform rate isn't really a problem, the slow architecture is a problem for tasks like mask testing, math, or FFT.
It should also be remembered that the TDS3000, while looking a lot like the entry-level scopes of today, wasn't an entry-level or even particularly cheap scope (the 500MHz version without any options ran some $18k+, and even the 100MHz 2ch base model was over $7k!). Back in 1999 its competitors were not common bench scopes like the Agilent 54622A (which was around $4k back then if I remember right) but other expensive scopes like the Agilent Infiniium 54800 Series or the LeCroy WaveRunner LT (and, for the 500MHz models, even the LC Series). Just to put this into some context.
So how is this different from any other high-waveform-rate technology like MegaZoom?
And while your trust in InstaVu is admirable, the reality is that even at 400k wfms/s your scope is still blind >90% of the time! Even scopes like the Keysight DSO-X3000T, which achieve up to 1,030,000 waveforms/s, are blind 89.70% of the time. Which means there is roughly a 9-out-of-10 chance that any given event falls into dead time and your scope misses it.
Which means the *only* way to find rare events (or to make sure there are none!) is to use triggers.
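To put actual numbers on that dead-time argument, here's a minimal back-of-the-envelope sketch in Python. The 100 ns capture window per acquisition is an assumed example value (on a real scope it's record length divided by sample rate); the arithmetic is the point, not the exact figures:

```
# Dead-time arithmetic: fraction of real time a scope is blind.
# The capture window per acquisition is an ASSUMED example value;
# on a real scope it is record_length / sample_rate.

def blind_fraction(wfms_per_sec, capture_window_s):
    """Return the fraction of time spent NOT acquiring."""
    live_time_per_sec = wfms_per_sec * capture_window_s
    return 1.0 - live_time_per_sec

WINDOW = 100e-9  # assumed 100 ns per acquisition, for illustration only

for rate in (450, 400_000, 1_030_000):
    print(f"{rate:>9} wfms/s -> blind {blind_fraction(rate, WINDOW):.2%}")
```

With these assumptions the 1,030,000 wfms/s case comes out at exactly the 89.70% quoted above, and even 400k wfms/s leaves the scope blind 96% of the time.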
Wait, waveform update rates are useless? (others will disagree on this point). Wash my fur but don't get me wet?
I always said that update rates are pretty meaningless, yes.
The reality is that there is a balance: triggers can find some sorts of problems, and realtime viewing others.
Nope. The reality is that glitch finding via persistence mode is a crutch from a time when scopes were so primitive that it literally was the only tool available. Sophisticated triggers as we have them today didn't exist, storage (where it was even available) was utterly poor, and measurement capabilities were nonexistent.
Persistence mode does have its place, but only where it is ensured that the events of interest occur within the actual acquisition phase, which means that some basic understanding of the event must have been established first. Eye diagrams, for example.
It's all application-specific, and neither is better than the other for everything.
Simple math says otherwise. The only way you can be sure that you captured every event within the time period of observation is by using triggers.
Just to be sure, we're talking about "glitch hunting", i.e. finding rare events. Persistence mode of course has some use for other tasks, e.g. mask tests.
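And since "simple math" was mentioned, here's the same idea taken one step further: if events occur at random times, independent of the acquisition cycle, the chance of missing *every* occurrence is just the blind fraction raised to the number of occurrences. A quick sketch, reusing the assumed 89.7% blind fraction from the example above:

```
# Chance that persistence-mode hunting misses a rare event entirely.
# Assumes events occur at random times, independent of the acquisition
# cycle; the blind fraction is the assumed example value from above.

BLIND = 0.897  # assumed blind fraction (~1M wfms/s, 100 ns window)

for n in (1, 5, 10, 50):
    p_miss_all = BLIND ** n  # every single occurrence lands in dead time
    print(f"{n:>3} occurrences -> {p_miss_all:5.1%} chance of seeing none")
```

A triggered scope, by contrast, sits armed the whole time, so a single occurrence is enough to capture it.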
You've been consistently coy about highlighting example applications or methods to enlighten us readers as to specific advantages.
What is there to highlight? With a good scope I can trigger on *any* kind of event, no matter what. Runt? Missing pulses? Too many pulses? Wrong data bits in a serial transmission? Slew rates outside spec? Malformed pulses? Anything else? Doesn't matter, with a good scope I can trigger on it. How depends on the scope (that's where knowing your instrument comes in), but even a TDS3000, if it has the advanced trigger option installed, can go a long way finding stuff with triggers.
So I'm really curious as to what kind of sporadic events you believe can only be found with persistence modes.
Ideally a scope would be capable in both areas; luckily, those exist too.
Sure, for a standard entry-level or low-midrange bench scope (simple scope), but mostly because these scopes are often limited in what triggers they offer (although that is becoming less and less of an issue, as even many cheap scopes offer a surprisingly versatile range of triggers) and because these are the scopes which often fall into the hands of hobbyists and other people who want to treat them like the analog scopes of old (which will continue as long as outdated methodology is still passed on as 'best practice').
For anything above the lower mid-range the focus has always been on triggers and analysis capabilities, and high-end scopes all came with often paltry trigger rates. Which, again, isn't a problem, because no-one pays $20k+ for a scope to start glitch hunting by staring at a persistence screen. Relying on persistence mode to find rare events is also completely useless for qualification, e.g. demonstrating the absence of a specific type of event, or even that the number of events is within a certain range.
Over the years the waveform rates of high-end scopes have improved, but that is mostly a side effect of the need to process ever more data (generated by very fast ADCs often operating at increased resolution, and by the various analysis and processing tools) as quickly as possible. Technical progress is already having the same effect on entry-level scopes, where newer models achieve respectable update rates without relying on special modes or proprietary ASICs, and this will only continue. At the same time, the trigger capabilities of entry-level scopes are constantly improving, which means persistence-mode glitch hunting is becoming as obsolete in this class as it has been for more expensive scopes.