Well, newer units can reach a waveform update rate of about 35 Hz, depending on scope settings, while older units manage about 25 Hz. Do note that the scope is optimized for large memory sizes, so unlike on most scopes, decreasing the amount of memory used will not drastically increase speed.
How fast that is depends on what you want to use the scope for. The story is actually quite complicated, but I'll try to simplify it as much as I can.
A traditional oscilloscope works by connecting the input voltage to the Y axis of the CRT. The X axis is driven by the timebase. It takes some time for the scope to draw the waveform from the left side of the screen to the right side, and that's what you control with the timebase knob. There's also some time needed for the beam to return to the left side of the screen, and during that time you don't see the wave. This is called dead time, if I remember correctly. On analog oscilloscopes it's usually very short, so it isn't much of a problem.
On a traditional digital scope, instead of an electron beam we have an analog-to-digital converter. It takes some time for the ADC to capture data, and that time can (more or less) be thought of as the time during which an analog scope is drawing the wave on the screen. Once the ADC fills the memory, it stops capturing, and the data is processed by the scope. Once the processing is finished, the result is displayed on the screen. That period can be thought of as the time it takes the analog scope to move the beam from the right side of the screen back to the left.
The main difference is that the time it takes a DSO to "re-arm" is much longer than on a CRO. Let's calculate how much time the new SDS7102 would cover. I mentioned that it manages around 35 waveform updates per second. I'm using 10 Msamples of memory and a timebase of 500 microseconds. I believe this gives me the greatest time coverage at the highest sample rate. When I press the single-shot button, I see that the scope has 10 ms worth of data in memory. So if it updates 35 times per second, and each update captures 10 ms, we have coverage of 350 ms out of each second. That means that in each second there are 650 ms during which the scope is effectively blind.
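The coverage arithmetic above can be sketched as a few lines of Python (the 35 updates/s and 10 ms capture window are the example figures from this post; your actual numbers will vary with settings):

```python
# Example figures from the text: ~35 waveform updates per second,
# each capture holding 10 ms of data (10 Msamples at this timebase).
updates_per_second = 35
capture_window_s = 0.010  # 10 ms of signal per capture

covered_per_second = updates_per_second * capture_window_s
blind_per_second = 1.0 - covered_per_second

print(f"Covered: {covered_per_second * 1000:.0f} ms per second")  # 350 ms
print(f"Blind:   {blind_per_second * 1000:.0f} ms per second")    # 650 ms
```

So the scope is watching the signal only about 35% of the time at these settings.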
The result is that if the effect you're looking for happens during those 650 ms, you won't see it on your scope at all. It may take quite some time to actually catch the abnormality you're looking for, especially if it only lasts a short amount of time.
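To get a feel for how long the hunt can take, here's a rough model: if the scope is live 35% of the time (per the example above) and a brief glitch occurs at random moments, each occurrence is caught with probability about 0.35. The once-a-minute glitch interval below is purely an assumed illustration:

```python
# Rough model: scope is "live" ~35% of each second, so a brief glitch
# landing at a random moment is caught with probability ~0.35.
coverage = 0.35
expected_occurrences = 1 / coverage  # mean of a geometric distribution

glitch_interval_s = 60.0  # assumption: the glitch fires about once a minute
expected_wait_s = expected_occurrences * glitch_interval_s

print(f"Occurrences needed on average: {expected_occurrences:.1f}")   # ~2.9
print(f"Expected wait: {expected_wait_s / 60:.1f} minutes")           # ~2.9
```

For a glitch that fires only a few times a day, the same math quickly turns into many hours of waiting.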
What you can do to mitigate the issue is, first, to set up triggering so that the scope triggers on the abnormality and not on something else. It may, however, be very difficult to set the trigger in such a way that you catch the glitch, especially if you don't know whether the glitch exists in the first place or what it looks like. Even if you know the glitch is there, depending on its nature, it may take hours or maybe even days until the scope catches it.
A second option may be useful if you're looking for slow changes in the signal. You can lower the sample rate so that there is more time between individual samples. This way you're changing the coverage of time: instead of short groups of tightly packed samples, you now have longer records, but with extra time between each individual sample. For example, using a sample rate of 5 Msamples per second and 10 Msamples of memory, I managed to capture two seconds worth of data on my scope. When I "zoom in", that is to say increase the timebase while looking at the captured data, the scope stops interpolating at a timebase of 200 ns, and then I can see that there are exactly 200 ns between each sample. If a glitch happens somewhere inside one of those gaps, you won't see it.
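The slow-capture numbers work out directly from the sample spacing read off the scope; here's the same calculation as a sketch (the 200 ns spacing and 10 Msample memory are the figures from this post):

```python
# Slow-capture mode: 10 Msamples of memory with 200 ns between samples,
# as read off the scope's display after zooming in.
memory_samples = 10_000_000
sample_interval_s = 200e-9  # 200 ns per sample

record_length_s = memory_samples * sample_interval_s
implied_rate_hz = 1 / sample_interval_s

print(f"Record length: {record_length_s:.0f} s")              # 2 s
print(f"Implied rate:  {implied_rate_hz / 1e6:.0f} MS/s")     # 5 MS/s
```

Note the trade-off: the record now spans two whole seconds, but anything shorter than 200 ns that lands between two samples is simply never digitized.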
The above is the reason why today we have scopes that cost as much as a new truck or tractor. High-performance scopes have many more updates per second, and they have advanced display modes that do statistical analysis of the data and present the results on screen, since it's more or less impossible to actually update the screen as quickly as the scope captures data. They still have some dead time, which is basically inherent to digital scopes, but the effect is greatly reduced.
This all leaves us with the question of what you want to use the scope for. If you need to catch elusive, short glitches, then this is not the scope for you. If you don't, then I don't know what to say except that you'll need to help us help you.
I myself haven't had any problems with the dead time of this scope so far, but on the other hand, I wasn't chasing any strange glitches. My main use of the scope is debugging digital logic. The FFT feature has also proved very useful for tuning radio scales.