Author Topic: DSO Blind time  (Read 3913 times)


Offline pplaninsky (Topic starter)

  • Regular Contributor
  • *
  • Posts: 83
  • Country: de
DSO Blind time
« on: September 20, 2015, 02:04:42 pm »
Until recently, I was totally unaware of what DSO blind time and waveform update rate are.
So, I've done some research, read papers and the forum.

But, one thing from the Rohde & Schwarz paper is puzzling:

"It may not be obvious at this point, but increasing the time base can
indeed result in a shorter blind time ratio. Unfortunately, the longer record length
results in a reduced acquisition rate and a much slower waveform update rate"

(Yes, it is not obvious; and yes, if you mention something in your paper that you consider not to be obvious, the readers might very much like some further explanation of it :) )

http://cdn.rohde-schwarz.com/pws/dl_downloads/dl_application/application_notes/1er02/1ER02_1e.pdf

Now, with longer timebases the waveform update rate is slower; it can be as low as 1 update/second.
If I understand correctly, the blind time ratio goes down simply because the blind time is X seconds, and if you have timebases T1 < T2, then
X/T1 > X/T2, so your blind time percentage goes down. This is true if the blind time X is equal for both timebases.
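That inequality can be sanity-checked in a few lines; the numbers below are made up, and the second ratio uses the paper's definition of blind time over the whole acquisition cycle:

```python
# Check: with a fixed blind time X, a longer timebase T gives a smaller
# blind fraction, whether computed as X/T (as in the post) or as
# X/(T + X) (blind time over the full acquisition cycle). Values are
# illustrative, not from any real scope.

X = 0.001               # assumed fixed blind time: 1 ms
T1, T2 = 0.001, 0.010   # two timebases, T1 < T2 (made-up values)

assert X / T1 > X / T2                  # the simplified ratio from the post
assert X / (T1 + X) > X / (T2 + X)      # the cycle-based blind time ratio
print(X / (T1 + X), X / (T2 + X))       # 0.5 vs ~0.091
```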

My first question is: am I understanding correctly what the R&S paper is saying?
And second: what's the use of the longer timebase in this case if you get just 1 waveform per second?

Also, saying "shorter blind time ratio" doesn't make sense to me. 'Shorter' is not an adjective to be used with 'ratio'.
A ratio can be smaller or bigger than something else, but 'shorter'...

I generally understand what the authors have written in this paper, but this sentence is quite ambiguous to me...
 

Offline jitter

  • Frequent Contributor
  • **
  • Posts: 809
  • Country: nl
Re: DSO Blind time
« Reply #1 on: September 20, 2015, 02:28:35 pm »
Perhaps the writer mixed up "shorter blind time" and "smaller blind time ratio".

Would it make sense if "shorter blind time" were written instead? And would that not imply a smaller blind time ratio?

Perhaps it's interesting to watch EEVblog #797; there's a bit on blind time in it. A one-in-a-million runt pulse was not detected in a "sequence" mode with 300k waveform updates per second (thanks to a huge blind time), but it was no problem in a normal mode with a far lower update rate (10k). Not sure if it's completely relevant to your question, though.
« Last Edit: September 20, 2015, 02:58:09 pm by jitter »
 

Offline MatthewEveritt

  • Supporter
  • ****
  • Posts: 136
  • Country: gb
Re: DSO Blind time
« Reply #2 on: September 20, 2015, 03:29:18 pm »
It's not clear, but I think what they're saying is that the blind time can increase faster than the sample length as the timebase is changed. This makes sense if the scope is doing something computationally intensive, where extra data points can make it work much harder.

Basically any calculation that's non-linear in the number of data points (FFT for example) will do this.
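A quick sketch of that effect, using FFT's n·log n cost as a stand-in for the processing work (units and numbers are arbitrary, for illustration only):

```python
# If post-processing cost grows faster than linearly with record length
# (e.g. FFT at n*log n), doubling the record more than doubles the
# variable part of the blind time. Arbitrary units, illustrative only.
import math

def fft_cost(n):
    return n * math.log2(n)   # proportional to FFT work for n points

ratio = fft_cost(2_000_000) / fft_cost(1_000_000)
print(f"2x the points -> {ratio:.2f}x the processing work")  # ~2.10x
```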
 

Offline jitter

  • Frequent Contributor
  • **
  • Posts: 809
  • Country: nl
Re: DSO Blind time
« Reply #3 on: September 20, 2015, 08:50:31 pm »
Just glanced through the document.

On page 4 there's the definition which describes the acquisition cycle time as being made up of active acquisition, fixed blind time, and variable blind time. The latter represents the time needed for processing and depends on settings like the number of waveform samples and other post-processing options.
Since longer timebases result in slower update rates, that must mean the acquisition cycle time is getting longer.

Just to see if I interpret the blind time ratio the same way as you, the equation again:
blind time ratio = blind time / acquisition cycle time.
This equation implies that a shorter blind time leads to a smaller blind time ratio, but also that a longer acquisition cycle time will result in a smaller ratio.

Page 16 gives a table with several sample rates at a fixed record length.
Even though 10 GS/s and a 10 ns/div timebase lead to over 1M updates/s, the blind time is 90%, while at the other end, 10 MS/s and 10 µs/div (not even 10k updates/s), it's only 5%.
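Those two table rows are self-consistent if you assume a 10-division screen, so that acquisition time = 10 × time/div and the rest of the trigger-to-trigger cycle is blind. A quick sketch (the 10-division screen is my assumption, not stated in the paper):

```python
# Derive the blind time ratio from timebase and update rate, assuming
# a 10-division screen (acquisition time = 10 * time/div).

def blind_ratio_pct(time_per_div_s, updates_per_s, divisions=10):
    acq_time = divisions * time_per_div_s   # time actually spent acquiring
    cycle_time = 1.0 / updates_per_s        # full trigger-to-trigger cycle
    return 100.0 * (cycle_time - acq_time) / cycle_time

# 10 ns/div at 1M updates/s: 1 us cycle, only 100 ns acquiring -> 90% blind
print(blind_ratio_pct(10e-9, 1_000_000))   # 90.0

# 10 us/div at 9.5k updates/s: ~105 us cycle, 100 us acquiring -> 5% blind
print(blind_ratio_pct(10e-6, 9_500))       # 5.0
```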

It looks like I understand the article the same as you. And yes, I would agree that this is pretty counter intuitive.

But at 1 update/s and an infrequently occurring glitch, you would have no chance of capturing it in a reasonable amount of time. For that, a fast update rate is needed (that's where the second equation in the R&S paper comes into play). Here's what Keysight tells you: http://www.keysight.com/main/editorial.jspx?cc=NL&lc=dut&ckey=2342706&nid=-11143.0.00&id=2342706
It's all about chance, according to Keysight, for their 4000-series scopes: at 1M waveform updates/s a 5x-per-second glitch can be caught with 92% probability within 5 s, whereas at 2k-3k updates/s that probability is < 1%.
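Numbers like these follow from the standard at-least-once probability formula. This is my sketch of that model with illustrative inputs, not Keysight's exact calculation:

```python
# Probability of catching at least one occurrence of a rare glitch,
# treating each occurrence as an independent chance equal to the scope's
# "sighted" (non-blind) fraction of time. Illustrative model only, not
# necessarily how Keysight computes its figures.

def capture_probability(sighted_fraction, glitch_rate_hz, observe_time_s):
    occurrences = glitch_rate_hz * observe_time_s
    return 1.0 - (1.0 - sighted_fraction) ** occurrences

# A 5x-per-second glitch watched for 5 s (25 occurrences):
print(capture_probability(0.10, 5, 5))    # ~0.93 with 10% sighted time
print(capture_probability(0.001, 5, 5))   # ~0.02 with 0.1% sighted time
```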
« Last Edit: September 21, 2015, 05:08:29 am by jitter »
 

Offline tautech

  • Super Contributor
  • ***
  • Posts: 29299
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: DSO Blind time
« Reply #4 on: September 20, 2015, 09:15:21 pm »
It looks like I understand the article the same as you. And yes, I would agree that this is pretty counter intuitive.
This is often the case when technical papers are translated for worldwide release.
We shouldn't place an absolute interpretation on some articles, but just take away the general meaning. Some authors are much better at getting a point across, and further study of similar papers often makes understanding clearer, as jitter has done.
Avid Rabid Hobbyist.
Some stuff seen @ Siglent HQ cannot be shared.
 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2320
  • Country: au
Re: DSO Blind time
« Reply #5 on: September 21, 2015, 12:15:44 am »
If the blind time is relatively fixed, then obviously making the acquisition time longer will mean that a greater percentage of the time, the input signal is ending up on-screen. For example, if the acquisition time is 1ms, and the blind time is 1ms, then the oscilloscope is only "sighted" 50% of the time, and there's a 50% chance of catching any particular runt pulse. If you up the acquisition time to 10ms, but leave the number of points identical, you'll probably maintain a blind time of about 1ms -- now the oscilloscope is "sighted" 91% of the time, and there's a 91% chance of catching a runt pulse. This can also be expressed as the blind time ratio dropping from 50% to 9%.

The other way of thinking about it is this: yes, increasing timebase will reduce the acquisitions per second, but each acquisition contains many more periods of the waveform. Overall, you catch more periods of the waveform, more chances to catch that runt pulse. As the doc explains, the downside is that you have to peer into a busy screen to find the (now horizontally shrunken) runt pulse, and the scope generally feels laggier. But the bottom line is, your chance of catching any given runt pulse does technically increase.

Perhaps it's interesting to watch EEVblog #797. There is a bit on blind time in it. A 1 in a million runt pulse was not detected in a "sequence" mode with 300k waveform updates per second (thanks to a huge blind time), but it was no problem in a normal mode with a far lower update rate (10 k). Not sure if it's completely relevant to your question, though.

Noooo, that's confusing at best. That scope had an uneven distribution of blind time, which meant it could sustain 300k waveforms per second for short bursts (more accurate to say that it can trigger on events 3.33 µs apart, to avoid this confusion), but it simply cannot capture 300k waveforms in one actual second. The actual number of waveforms per actual second was clearly much less than 10k. So that particular feature of that scope is not for catching runt pulses; it's evidently worse than the standard mode for that purpose. The feature is for capturing bursty data: for example, if a device blasts out bursts of 1000 SPI commands at a time with 4 ms between them, the Siglent can capture every one of those just like a genuine 300k-waveforms-per-second Keysight scope.
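To put numbers on the burst-versus-sustained distinction (the 1024-segment and 300k figures are illustrative spec values, not measurements):

```python
# Why "300k waveforms/s" in segmented mode is a trigger-spacing spec,
# not a sustained throughput spec. Illustrative figures: 1024 segments,
# minimum trigger spacing of 1/300k seconds.

min_trigger_spacing_s = 1 / 300_000   # ~3.33 us between triggers
max_segments = 1024

# Longest run of back-to-back triggers the scope can swallow:
burst_duration_s = max_segments * min_trigger_spacing_s
print(f"{burst_duration_s * 1e3:.2f} ms")   # ~3.41 ms, then the buffer is full

# Sustained capture over a full second is then limited by the segment
# count plus the long readout gap, far below 300k waveforms per second.
```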

 

Offline jitter

  • Frequent Contributor
  • **
  • Posts: 809
  • Country: nl
Re: DSO Blind time
« Reply #6 on: September 21, 2015, 05:22:48 am »
Agreed, that's a good explanation.

Page 16 gives a table with several sample rates at a fixed record length.
Even though 10 GS/s and a 10 ns/div timebase lead to over 1M updates/s, the blind time is 90%, while at the other end, 10 MS/s and 10 µs/div (not even 10k updates/s), it's only 5%.

Is this a correct way of looking at it?:
1M updates/s and a 90% blind time gets you 100,000 samples/s in which the glitch could have been captured.
10k updates/s and 5% blind time gets you only 9,500 samples/s in which the glitch could have been captured.

 

Offline rs20

  • Super Contributor
  • ***
  • Posts: 2320
  • Country: au
Re: DSO Blind time
« Reply #7 on: September 21, 2015, 05:33:29 am »
Is this a correct way of looking at it?:
1M updates/s and a 90% blind time gets you 100,000 samples/s in which the glitch could have been captured.
10k updates/s and 5% blind time gets you only 9,500 samples/s in which the glitch could have been captured.
No, not as you've written it there, I suspect you've made a typo. After all, if you have 10k waveform updates per second, and each waveform consists of, say, 14k samples, then a simple multiplication gives you 140M samples/s captured+displayed on screen, not the 9.5k samples/second you're looking at there. One waveform consists of many samples. Blind time doesn't enter this equation at all.
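The multiplication above, spelled out (the 14k record length is the example figure from the post):

```python
# One waveform = many samples: on-screen sample throughput is the
# update rate times the record length; blind time doesn't appear here.

updates_per_s = 10_000
samples_per_waveform = 14_000   # example record length from the post

displayed_samples_per_s = updates_per_s * samples_per_waveform
print(displayed_samples_per_s)  # 140000000, i.e. 140M samples/s
```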
 

Offline rf-loop

  • Super Contributor
  • ***
  • Posts: 4133
  • Country: fi
  • Born in Finland with DLL21 in hand
Re: DSO Blind time
« Reply #8 on: September 21, 2015, 05:48:43 am »


Noooo, that's confusing at best. That scope had an uneven distribution of blind time, which meant it could sustain 300k waveforms per second for short bursts (more accurate to say that it can trigger on events 3.33 µs apart, to avoid this confusion), but it simply cannot capture 300k waveforms in one actual second. The actual number of waveforms per actual second was clearly much less than 10k. So that particular feature of that scope is not for catching runt pulses; it's evidently worse than the standard mode for that purpose. The feature is for capturing bursty data: for example, if a device blasts out bursts of 1000 SPI commands at a time with 4 ms between them, the Siglent can capture every one of those just like a genuine 300k-waveforms-per-second Keysight scope.

This is an interesting case, and I feel it needs to be opened up properly some day.
When using this kind of thing to test something, the user needs to know what he is doing, and he needs to know his tool: what it can do and what it can't. In this particular video about the Siglent SDS1kX, Dave used fast segmented memory acquisition for a very rare glitch. May I say: if someone in my lab tried this and also wanted a salary, I would tell him to go out through the cashier and never come back.
But in this case, Dave had just taken it out of the box without any real knowledge of or experience with the scope, just like a kid playing with his first Nintendo. Of course this is useful and so on, but it is not at all how people work in any serious lab. I'm not saying it's wrong; these kinds of videos are fine and useful.

If the segmented memory acquisition mode were made so that it does not repeat after the selected segments are acquired, he could not have done this example at all. (Btw, I do not know why it repeats automatically. Perhaps it would be good if Siglent added a selection for segmented memory acquisition: one-shot or repeating mode.)
Btw, the ad says it can do 300 kwfm/s for up to 1000 segments (1024 segments is the max).
(The measured maximum is 500 kwfm (segments)/s if all conditions are optimal, so that is not a good value for a real ad, where one needs to be more conservative; better to write 300k...)

Segmented acquisition is not made for maximum continuous wfm/s speed. It is made for situations like this: say there is a 10 ns pulse every second. With segmented acquisition you can take a 1000-second record and look at all 1000 pulses. (And if some peak occurs between pulses, it can catch that too if a trigger is detected, even if it arrives 2 µs after the last trigger.)
It is NOT for finding rare glitches in a continuous signal where the glitch probability is low. In this particular example the glitch ratio was 1:1000000. Who would even seriously try to use segmented memory acquisition for that? No one who knows what he is doing.
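The 10 ns-pulse example works out like this (the 2 µs segment window is an assumed figure for illustration):

```python
# Segmented memory condenses long waits into a short stored record:
# 1000 segments of a pulse that repeats once per second.

pulse_period_s = 1.0
n_segments = 1000
segment_window_s = 2e-6   # assumed capture window around each trigger

total_wall_time_s = n_segments * pulse_period_s    # 1000 s of waiting
stored_signal_s = n_segments * segment_window_s    # only 2 ms stored

print(total_wall_time_s, stored_signal_s * 1e3)    # 1000.0 s, 2.0 ms
```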

And Dave's video teaches this well, so it was also a useful lesson for noobs.
The only thing is that in a teaching video these kinds of things need to be done with more explanation and more slowly, so that viewers understand what is going on.



In the SDS1000X it is just a "poor man's" reduced version of this (Keyshit's more sophisticated version), except that the Siglent has a quite fast trigger re-arm.

Quote
Segment source
Analog channels 1 and 2 (on two-channel DSO models)
+ Analog channels 3 and 4 (on four-channels DSO models)
+ Digital channels D0 to D15 (on MSO models)
+ Serial decode (on models with serial decode options)
Number of segments
1 to 2000 (5000, 6000, and 7000 Series)
1 to 1000 (3000, 4000, and 6000 X-Series)
1 to 250 (2000 X-Series)
Minimum segment size
500 points (+ Sin(x)/x reconstructed points on faster timebase settings)
Re-arm time
(minimum time between trigger events)
5000, 6000, 7000: 6 us
6000 X-Series: 7.5 us
3000 and 4000 X-Series: 1 us
2000 X-Series: 20 us
Time-tag resolution
10 ps or 6 digits (whichever is greater)

The minimum time between triggers is better than in the Keyshit 2000X, 6000X, 5000, 6000, 7000.
Depending on settings, down to about 2 µs.
Keyshit has many more features in this segmented memory acquisition and much better tools for inspecting the data after the acquisition is made. (I hope Siglent adds some more features for this later. And 1 µs resolution in the time stamps is... just :( )

http://www.newark.com/pdfs/techarticles/agilent/OSMA.pdf

Pages 1 and 2.
« Last Edit: September 21, 2015, 06:02:36 am by rf-loop »
EV of course. Cars with smoke exhaust pipes - go to museum. In Finland quite all electric power is made using nuclear, wind, solar and water.

Wises must compel the mad barbarians to stop their crimes against humanity. Where have the (strong)wises gone?
 

Offline jitter

  • Frequent Contributor
  • **
  • Posts: 809
  • Country: nl
Re: DSO Blind time
« Reply #9 on: September 21, 2015, 08:01:47 am »
Is this a correct way of looking at it?:
1M updates/s and a 90% blind time gets you 100,000 samples/s in which the glitch could have been captured.
10k updates/s and 5% blind time gets you only 9,500 samples/s in which the glitch could have been captured.
No, not as you've written it there, I suspect you've made a typo. After all, if you have 10k waveform updates per second, and each waveform consists of, say, 14k samples, then a simple multiplication gives you 140M samples/s captured+displayed on screen, not the 9.5k samples/second you're looking at there. One waveform consists of many samples. Blind time doesn't enter this equation at all.

Thanks for correcting me, you're right. I forgot for a moment that a single waveform consists of multiple samples, not one.


« Last Edit: September 21, 2015, 08:05:30 am by jitter »
 

