Until recently I was totally unaware of what DSO blind time and waveform update rate are.
So I've done some research, read some papers and this forum.
But one thing from the Rohde & Schwarz paper is puzzling:
"It may not be obvious at this point, but increasing the time base can
indeed result in a shorter blind time ratio. Unfortunately, the longer record length
results in a reduced acquisition rate and a much slower waveform update rate"
(Yes, it is not obvious, and if you mention something in your paper that you consider not to be obvious, the readers might very much like some further explanation of it.)
http://cdn.rohde-schwarz.com/pws/dl_downloads/dl_application/application_notes/1er02/1ER02_1e.pdf

Now, with bigger timebases the waveform update rate is slower; it can be as low as 1 update per second.
If I understand correctly, the blind ratio goes down simply because the blind time is some fixed X seconds: if you have timebases T1 < T2, then
X/T1 > X/T2, so your blind time percentage goes down. This holds only if the blind time X is equal for both timebases.
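To check my own reasoning, I put together a quick numeric sketch in Python. The numbers are made up for illustration (they are not from the paper), and I'm assuming the blind time is a fixed X per acquisition, which is a simplification, since in reality the processing time can grow with the record length. I compute the ratio as X/(T+X), i.e. blind time over the full acquisition cycle, which behaves the same way as the X/T approximation above:

# Rough model: each acquisition cycle = capture window + blind (dead) time.
# All numbers are assumed for illustration, not taken from the R&S paper.

def blind_ratio(timebase_s, blind_s):
    """Fraction of each cycle the scope is blind, assuming a fixed blind time."""
    return blind_s / (timebase_s + blind_s)

def update_rate(timebase_s, blind_s):
    """Waveform updates per second in the same simple model."""
    return 1.0 / (timebase_s + blind_s)

X = 0.001  # hypothetical fixed blind time: 1 ms per acquisition

for T in (0.0001, 0.001, 0.01, 0.1, 1.0):  # capture window lengths in seconds
    print(f"T = {T:8.4f} s   blind ratio = {blind_ratio(T, X):6.1%}   "
          f"update rate = {update_rate(T, X):8.2f} wfm/s")

With these made-up numbers the blind percentage drops from about 91% at T = 0.1 ms to about 0.1% at T = 1 s, while the update rate falls from roughly 909 wfm/s to under 1 wfm/s, which seems to match both halves of the quoted sentence.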
My first question is: am I understanding correctly what the R&S paper is saying?
And second: what's the use of the longer timebase in this case if you get just 1 waveform per second?
Also, the phrase "shorter blind time ratio" doesn't make sense to me. 'Shorter' is not an adjective to use with 'ratio'.
A ratio can be smaller or bigger compared to something, but not 'shorter'...
I generally understand what the people have written in this paper, but this particular sentence is quite ambiguous to me...