Author Topic: Simple Technique to measure Waveform Update Rates: DSOs w/either Edge Triggering  (Read 67002 times)


Offline marmadTopic starter

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
One thing about the WaveJet history mode is that you know it has captured all the waveforms, because you can run through them individually. Though you do have to assume that the time stamp is accurate (as far as I can ascertain, it is very accurate).

You can do the exact same thing on the Rigol - and I assume other scopes with segmented memory as well; e.g. Agilent, Instek, etc.
 

Offline rf-loop

  • Super Contributor
  • ***
  • Posts: 4134
  • Country: fi
  • Born in Finland with DLL21 in hand
Perhaps it needs some magic camera... a normal Canon cannot see them and my eyes cannot see these.
(Generate a fast-changing signal and have a camera take a picture with a long enough shutter time, or use, for example, a very fast video camera, and then count what is really displayed. This is perhaps a good idea for Dave - find some clever method to detect the real displayed wfrm/s and take the mystery out of the "wfrm/s" claims. However, I am afraid that not everyone will like it - I mean, some of the prominent manufacturers.)

You can take even a light-speed camera; the display refreshes at a completely different rate than the data in the sample buffer. Take a TDS700, enable DPO/InstaVu,
and you will see a lot of events persisted, but when you capture them with a camera you will not see 400k wfms/s (even if you can measure such a value on the trigger out).


Yes - (and no).

I am not mixing up TFT refresh time and waveform capture rate.
This Owon example is also simple proof that it can capture new waveforms much faster than the TFT update period.

This is now oversimplified, but:
The scope acquires all the claimed captured waveforms into a "virtual phosphor". They are there.
Then, periodically, they are transferred to the TFT. This TFT image refresh rate has nothing to do with the number of waveforms captured per second.

Now, for example, if 100 waveforms are captured to the "virtual phosphor" memory before a TFT transfer comes, they all appear visible on the TFT - say, 50 times per second. If I take a photograph of this one just-updated TFT screen, these 100 waveforms must be visible (provided there are enough differences between the captures that they use different display pixels). If I keep the shutter open and capture, say, all 50 TFT frames, and the signal changes enough, I should (in theory) find 5000 waveforms in this 1-second image. (The display resolution perhaps cannot show them all, but every 1/50 frame must show what was collected (captured) into the "virtual phosphor" during that refresh period.)
(Of course this 1/50 update rate is just an imagined example.) Perhaps you remember that old test image from a Hantek screen... I have it somewhere but cannot find it now.

Because it is "displayed waveforms per second".
(Of course there can also be a brightness gradient which adds information... just like real phosphor works, but that is another thing.)
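The 100-waveforms-per-frame arithmetic above can be sketched as a toy model (all rates are the illustrative numbers from the post, not any real scope's figures):

```python
# Toy model: the acquisition engine writes every captured waveform into a
# "virtual phosphor" buffer; the TFT is refreshed from that buffer at a
# fixed, much slower rate. Numbers are the imagined ones from the post.

CAPTURE_RATE = 5000   # waveforms captured per second (imagined example)
TFT_REFRESH = 50      # TFT frames per second (imagined example)

# Waveforms merged into each displayed frame
captures_per_frame = CAPTURE_RATE // TFT_REFRESH

# A "long exposure" photograph spanning all 50 frames of one second still
# contains every captured waveform, even though no single frame is shown
# for less than 1/50 s.
waveforms_in_photo = captures_per_frame * TFT_REFRESH

print(captures_per_frame)   # 100
print(waveforms_in_photo)   # 5000
```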

Whether this technique measures the waveform update rate is another question.
For example, if I now calculate a wfrm/s table using these test results, the whole table is total bullshit.
« Last Edit: May 26, 2013, 05:42:46 am by rf-loop »
EV of course. Cars with smoke from exhaust pipes - go to the museum. In Finland nearly all electric power is made using nuclear, wind, solar and water.

The wise must compel the mad barbarians to stop their crimes against humanity. Where have the (strong) wise gone?
 

Offline jahonen

  • Super Contributor
  • ***
  • Posts: 1055
  • Country: fi
Quote
One manufacturer a long time ago said that there is a max of 2000 waveforms per second. I can see barely just over 20. If there were 2000 waveforms captured per second, where are they displayed? Perhaps it needs some magic camera... a normal Canon cannot see them and my eyes cannot see these.

I understand your point, but your figures are missing some information:

Tests with Air Force pilots have shown that they could identify a plane on a picture that was flashed for as little as 1/220th of a second (i.e. make a distinction between a non-airplane shape and an airplane shape) - so there is evidence to support the theory that humans can identify discrete pieces of information in something close to ~1/250th of a second. So this would imply that we could notice a glitch that appears and disappears in the space of what is currently around the fastest refresh rate of an LCD: approximately 240Hz (I don't mean in DSO LCDs yet - I just mean in consumer goods).

More importantly (given current DPO technology), according to research, we can identify ~100 levels of intensity. So the obvious reason to have intensity grading is to increase the amount of information we could perceive on the DSO screen per second by a factor of ~100.

Human vision "speed" is difficult to characterize by a simple figure. If you have a suitable "single pulse generator", you can test it yourself by connecting an LED to the generator output. It is possible to see surprisingly short flashes of an LED. IIRC, when I tested this some time ago, a flash something like a few hundred nanoseconds long was perfectly visible. Of course, the effect is similar to connecting a Jim Williams pulser to a low-bandwidth scope: one gets just a small bump on the trace.

Regards,
Janne
 

Offline jpb

  • Super Contributor
  • ***
  • Posts: 1771
  • Country: gb
Quote
One manufacturer a long time ago said that there is a max of 2000 waveforms per second. I can see barely just over 20. If there were 2000 waveforms captured per second, where are they displayed? Perhaps it needs some magic camera... a normal Canon cannot see them and my eyes cannot see these.

I understand your point, but your figures are missing some information:

Tests with Air Force pilots have shown that they could identify a plane on a picture that was flashed for as little as 1/220th of a second (i.e. make a distinction between a non-airplane shape and an airplane shape) - so there is evidence to support the theory that humans can identify discrete pieces of information in something close to ~1/250th of a second. So this would imply that we could notice a glitch that appears and disappears in the space of what is currently around the fastest refresh rate of an LCD: approximately 240Hz (I don't mean in DSO LCDs yet - I just mean in consumer goods).

More importantly (given current DPO technology), according to research, we can identify ~100 levels of intensity. So the obvious reason to have intensity grading is to increase the amount of information we could perceive on the DSO screen per second by a factor of ~100.

Human vision "speed" is difficult to characterize by a simple figure. If you have a suitable "single pulse generator", you can test it yourself by connecting an LED to the generator output. It is possible to see surprisingly short flashes of an LED. IIRC, when I tested this some time ago, a flash something like a few hundred nanoseconds long was perfectly visible. Of course, the effect is similar to connecting a Jim Williams pulser to a low-bandwidth scope: one gets just a small bump on the trace.

Regards,
Janne
If you apply such a short pulse to an LED, I'd have thought there would be an inductive effect that would spread the light from the LED over a slightly longer time.
 

Offline marmadTopic starter

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Human vision "speed" is difficult to characterize by a simple figure. If you have a suitable "single pulse generator", you can test it yourself by connecting an LED to the generator output. It is possible to see surprisingly short flashes of an LED. IIRC, when I tested this some time ago, a flash something like a few hundred nanoseconds long was perfectly visible.

But the quickest that a trace can first appear and then completely disappear from a DSO screen is limited by the refresh rate of the LCD - it can never be shorter than that. So that is the smallest period of time for which a DSO can simulate a 'flicker'.
 

Offline marmadTopic starter

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Since we've been discussing acquisition cycles and blind times, it occurred to me that I wasn't 100% sure about the precise definition of the active acquisition time (screen time or samples?). I had assumed it was whichever was longer (in time) - but I wanted to check the literature.

After re-reading the two most oft-quoted documents on the subject, this and this, it appears, strangely enough, that Agilent and Rohde & Schwarz define this critical piece of information differently!

Agilent's lit. states:
"A scope’s dead-time percentage is based on the ratio of the scope’s acquisition cycle time minus the on-screen acquisition time, all divided by the scope’s acquisition cycle time."

Rohde & Schwarz's lit. states:
"The acquisition cycle consists of an active acquisition time and a blind time period. During the active acquisition time the oscilloscope acquires the defined number of waveform samples and writes them to the acquisition memory. e.g. 100 ns (1000 Sa, 10 GSa/s). "

So Agilent considers points captured, but not displayed, as part of the blind time - but Rohde & Schwarz doesn't.

So for example, given the following settings:

1GSa/s sample rate
1k sample length (so sample time is 1us)
10ns/div. time base
10 divisions on screen (so onscreen time is 100ns)
100k wfrm/s

Agilent's lit. states:
"% DT = Scope’s dead-time percentage
 = 100 x [(1/U) – W]/(1/U)
 = 100 x (1 – UW)
 where
 U = Scope’s measured update rate
 and
 W = Display acquisition window = Timebase setting x 10"

So according to Agilent's specifications and formula, the blind time is 99%.

R&S's lit. states:
"acquisition rate = 1 / acquisition cycle time
blind time ratio = blind time / acquisition cycle time"

So according to R&S's specifications and formula, the blind time is 90%.
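A quick numeric check of both definitions, using the example settings above (a sketch; U and W as defined in the Agilent quote):

```python
# Example settings from the post: 1 GSa/s, 1k samples (1 us to fill memory),
# 10 ns/div with 10 divisions (100 ns on screen), 100k wfrm/s update rate.

U = 100e3               # measured update rate, wfrm/s
W = 10e-9 * 10          # display acquisition window = timebase x 10 = 100 ns
t_sample = 1000 / 1e9   # time to fill 1k samples at 1 GSa/s = 1 us

# Agilent: anything outside the on-screen window counts as dead time
dt_agilent = 100 * (1 - U * W)

# R&S: the whole memory fill counts as active acquisition time
cycle = 1 / U
dt_rs = 100 * (cycle - t_sample) / cycle

print(round(dt_agilent, 1))   # 99.0
print(round(dt_rs, 1))        # 90.0
```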

Strange.  But perhaps it's just a question of semantics: Agilent feels if you can't see it on the display, it's not relevant - but R&S feels that it's still data that's been captured and can be analyzed if needed?
Ha, ha... just a coincidence?

A week ago (May 24th) I wrote the above post, pointing out the fact that Agilent and R&S have different definitions of dead (blind) time - meaning that their calculations could provide different blind time percentages given the same set of data. I pulled info from the oft-quoted Agilent document, "Evaluating Oscilloscopes for Best Waveform Update Rates", published in 2011, that contains this graphic, illustrating Agilent's definition of dead time (at least their published definition up to a week ago):



Well, exactly 5 days after that post (May 29th), Agilent published an online article called "What is waveform update rate and why does it matter?", which includes the following graphic - which appears to 'refine' their definition of dead time into two sub-sections: "Effective" dead time and "Real" dead time - effectively fixing the disparity between their calculations and R&S's (if you use "Real" dead time for the calculations):




You're welcome, Agilent!  Now where's my free test gear?  ;D 
« Last Edit: May 30, 2013, 11:52:33 pm by marmad »
 

Offline jpb

  • Super Contributor
  • ***
  • Posts: 1771
  • Country: gb
Presumably if you are looking for glitches it is the display window that matters.

The difference becomes significant at slower time bases where the real dead time is trivial compared to the acquisition time.

At slower time bases it doesn't matter much for glitch hunting because the total dead time is only a small percentage, but it does mean that if you're trying to estimate the real dead time from measurements at slower time bases, you need to know how much extra is captured (or buffered).

In trying to translate my waveform update rates into a dead time (real dead time) I've concluded that the WaveJet probably has a buffer of 520 points when it captures 500 points for display. This is a slightly odd number (I expected 512) but it is the number that seems to make sense of the measurements at different time bases.
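One way to formalize that kind of estimate (a sketch only; the update periods below are fabricated to be consistent with a 520-point buffer and a hypothetical fixed overhead, not actual WaveJet measurements): model the update period as a fixed processing overhead plus the buffer fill time, and solve for the buffer length from measurements at two sample rates.

```python
# period = overhead + n_buf / fs, measured at two sample rates
# -> two equations in the two unknowns (n_buf, overhead).

def solve_buffer(period1, fs1, period2, fs2):
    n_buf = (period1 - period2) / (1 / fs1 - 1 / fs2)
    overhead = period1 - n_buf / fs1
    return n_buf, overhead

# Fabricated update periods consistent with a 520-point buffer and a
# hypothetical 40 us fixed overhead:
n, ovh = solve_buffer(40e-6 + 520 / 2e9, 2e9,    # at 2 GSa/s
                      40e-6 + 520 / 1e9, 1e9)    # at 1 GSa/s
print(round(n))   # 520
```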
 

Offline marmadTopic starter

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
Presumably if you are looking for glitches it is the display window that matters.

If by 'looking' you mean literally with your eyes, then yes. But I could capture, for example, 8128 segments of acquisition time (while having a much smaller display window) and analyze them after the fact for glitches.
 

Offline jpb

  • Super Contributor
  • ***
  • Posts: 1771
  • Country: gb
Presumably if you are looking for glitches it is the display window that matters.

If by 'looking' you mean literally with your eyes, then yes. But I could capture, for example, 8128 segments of acquisition time (while having a much smaller display window) and analyze them after the fact for glitches.

That is true of data that is saved, but most scopes save only data that is within the display time frame. I think there is some data that is captured, in the sense of being sampled into a buffer, but is not saved into longer-term memory. This comes back to our previous discussion on the extra 2%. For example, the WaveJet displays 1024 captures at once, which you can go back through (segmented memory). But each of those individual captures contains up to 500 sample points, whilst I think the buffer is more like 520 points. The extra 20 points are captured, but you can't look at them as they are not transferred to main memory.

Given that there must be some variability in the time taken to trigger, and half a screen of pre-trigger data is needed, there must be more in the buffer than is actually saved or displayed. This extra data is captured at the sample rate regardless of how fast the processing is, and that is, I think, why blind times rise at the slower time bases.
 

Offline marmadTopic starter

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
That is true of data that is saved, but most scopes save only data that is within the display time frame. I think there is some data that is captured, in the sense of being sampled into a buffer, but is not saved into longer-term memory.

Longer-term memory? Maybe you're a bit confused by the way your WaveJet does things, but virtually every modern DSO works in pretty much the same way. The acquisition time of a DSO is always either the display window (as Agilent calls it) or the sample length - whichever is longer in real time. For example, when my sample rate is 2GSa/s, my DSO is grabbing a sample every 500ps. If I have the sample length set to 140k, it takes the DSO 70us to fill it. If my time base is set to 10us/div, the display window is 140us (10us x 14), so the DSO halves the sampling rate clock (or, like the Agilent X-Series, throws out every other sample) to match the sample length to the display window time - thus making my acquisition time 140us. But if my time base is set to 10ns/div, the display window is only 140ns, yet the DSO is still capturing 70us - so 70us is my acquisition time. If I stop the DSO at any time, I can 'zoom' out and see (or analyze) all the samples.
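A minimal sketch of that rule (assuming a 2 GSa/s scope with a 14-division screen, as in the example):

```python
# Acquisition time = whichever is longer: the display window or the time
# to fill the sample memory at the maximum sample rate. If the window is
# longer, the scope lowers the effective sample rate (or decimates) so
# the memory spans the whole window.

def acquisition_time(timebase, divisions, sample_len, max_rate):
    window = timebase * divisions    # on-screen time
    fill = sample_len / max_rate     # memory fill time at full rate
    return max(window, fill)

# 140k samples at 2 GSa/s fill in 70 us:
print(acquisition_time(10e-6, 14, 140e3, 2e9))  # 140 us: window wins
print(acquisition_time(10e-9, 14, 140e3, 2e9))  # 70 us: memory fill wins
```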
« Last Edit: May 31, 2013, 02:27:14 pm by marmad »
 

Offline jpb

  • Super Contributor
  • ***
  • Posts: 1771
  • Country: gb
That is true of data that is saved, but most scopes save only data that is within the display time frame. I think there is some data that is captured, in the sense of being sampled into a buffer, but is not saved into longer-term memory.

Longer-term memory? Maybe you're a bit confused by the way your WaveJet does things, but virtually every modern DSO works in pretty much the same way. The acquisition time of a DSO is always either the display window (as Agilent calls it) or the sample length - whichever is longer in real time. For example, when my sample rate is 2GSa/s, my DSO is grabbing a sample every 500ps. If I have the sample length set to 140k, it takes the DSO 70us to fill it. If my time base is set to 10us/div, the display window is 140us (10us x 14), so the DSO halves the sampling rate clock (or, like the Agilent X-Series, throws out every other sample) to match the sample length to the display window time - thus making my acquisition time 140us. But if my time base is set to 10ns/div, the display window is only 140ns, yet the DSO is still capturing 70us - so 70us is my acquisition time. If I stop the DSO at any time, I can 'zoom' out and see (or analyze) all the samples.
Yes, the WaveJet operates differently. If I set 500k memory and a time base of, say, 10ns/div, it will only capture the 200 points (at 2GS/s) and will display up to 1024 such captures; i.e. the memory setting is a maximum allowed - it doesn't carry on beyond a screen full. (Or rather, it does carry on, but in a segmented fashion.)
So it determines the size of the pre-trigger buffer from the time base setting. You can zoom in but not zoom out.

Of course, if you're just capturing data for post analysis then the methods are equivalent - you either set the segment size directly or set it by setting the time base and maximum allowed memory.

If you're trying to see glitches on the screen, the most common way to look for them I would have thought, then you're not going to be zooming out anyway.

 

Offline marmadTopic starter

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
If you're trying to see glitches on the screen, the most common way to look for them I would have thought, then you're not going to be zooming out anyway.

Absolutely - especially if you're not necessarily expecting them - thus the reason for desiring faster waveform update rates: to have better odds of seeing them when they're intermittent or unexpected.

There really isn't much point in using large sample lengths at time base settings below ~1us/div (for a 2GSa/s DSO), unless you have a specific reason for capturing lots of extra samples after (or before) the trigger point. But at time base settings above that, it's a whole different story because of sample rates.
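Rough numbers behind that rule of thumb (a sketch assuming a 14-division screen; the cut-off shifts with screen width and sample rate):

```python
# Samples needed to cover the whole screen at a given timebase: below
# ~1 us/div the full screen needs fewer samples than even a modest memory
# setting, so extra depth only extends the capture past the screen.

def samples_on_screen(timebase, divisions=14, rate=2e9):
    return timebase * divisions * rate

print(round(samples_on_screen(1e-6)))    # 28000 at 1 us/div
print(round(samples_on_screen(10e-9)))   # 280 at 10 ns/div
```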
 

Offline illyesgeza

  • Contributor
  • Posts: 41
How to measure the refresh rate of a DSO.
I tried it on a Siglent 1102:
select 1GSa/s sampling rate (on the Siglent that is 50ns/div)
select trigger on rising edge
select one channel
so you have 40960 samples (0.04096 ms in total)
You have to generate two (and only two!!!) pulses with a controlled amount of time between them.
The two pulses must not be equal in width.
I generate the first at 250ns and the second at 500ns.
While the time between them is less than the refresh time, you'll see the first pulse in the middle of the screen.
When the time is equal to or greater than the refresh time, you'll see the second.
That's why the widths must be different (to tell the first from the second).
I generated the pulses with an Atmel microcontroller running at a 48MHz clock.
The narrowest pulse you can generate is 250ns.
Example:
move portbit,1
move portbit,0   ; first 250ns pulse  _|-|_
call delay(x)    ; this delay must be variable
move portbit,1
move portbit,1
move portbit,0   ; second 500ns pulse _|--|_
; that's all
If you try this with x = 20ms you'll see the first pulse;
if x > 30ms you'll see the second.
So I think that the refresh rate is around 30 - 50Hz.
That's for the 1GSa/s sampling rate;
for slower rates the refresh rate will be lower.
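The logic of the sweep can be sketched as a simulation (idealized; the 25 ms blind time is a made-up value sitting between the 20 ms and 30 ms brackets above):

```python
# After triggering on pulse 1 the scope is blind for `blind` seconds; if
# pulse 2 falls inside that window, only pulse 1 is displayed; otherwise
# the scope has re-armed and pulse 2 (the wide one) is shown.

def displayed_pulse(gap, blind):
    return 1 if gap < blind else 2

BLIND = 25e-3   # hypothetical blind time (~40 update cycles per second)

for x in (20e-3, 30e-3):
    print(x, displayed_pulse(x, BLIND))
# gap 20 ms -> pulse 1 (narrow); gap 30 ms -> pulse 2 (wide):
# the blind time is bracketed between 20 and 30 ms.
```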

 



 

Offline marmadTopic starter

  • Super Contributor
  • ***
  • Posts: 2979
  • Country: aq
    • DaysAlive
How to measure the refresh rate of a DSO.
You have to generate two pulses with a controlled amount of time between them.
The two pulses must not be equal in width.
I generate the first at 250ns and the second at 500ns.
While the time between them is less than the refresh time, you'll see the first.
Yes, this was posted about earlier in this thread.
 

