Author Topic: Waveforms/second in Siglent and other scopes -- limits due to capture handling  (Read 1817 times)


Offline kcbrownTopic starter

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
I've been asked to start a new thread on the topic, since this originated in the Magnova scope thread: https://www.eevblog.com/forum/testgear/magnova-oscilloscope/?all

Not really.  Even with those options, Siglent's DSOs don't behave the way I described as regards a trigger mechanism decoupled from the memory depth.

What you are explaining is exactly a description of search / "whateverscan" as implemented on some scopes.

Which scopes?  Certainly not Siglent, unless it's changed since the 2000X+ series.

On the Siglent, each history item is a complete capture.  There is no overlap between them in terms of time, at least that I've ever seen.  If it worked the way I described (for those who haven't seen it, I described it here: https://www.eevblog.com/forum/testgear/magnova-oscilloscope/msg5503027/#msg5503027), then there would be overlap between the history items, and furthermore it would be possible to zoom out within a history item such that other trigger events would become visible, even if the original time range covered by the history event's capture was smaller than the total size of memory.

Moreover, one could then zoom way in while the scope is running, see the waveforms updating on the screen in the way one would normally expect (with persistence and everything), stop the scope, and then examine the history items, replay them, etc., just as we can now, but with the added advantage of being able to zoom way out from within each history item to see what surrounded it.  There would be no need for a "search", because the trigger events would have already been recorded and become part of the history.

In essence, the entirety of memory becomes your capture buffer, and trigger events are then just pointers to locations within it, and history items are merely trigger events as well.

No scope I know of implements things in quite that fashion.  History items, in particular, seem to be coupled to some predefined time length (whether the result of the timebase selection, or the predefined capture buffer size divided by the sample rate).  The way you tell is that if it implements it as I describe, then the time distance between history items can be smaller than the time width of the screen as it was when the trigger fired.  I know of no scope for which that's true.
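To make the idea concrete, here's a toy Python model of it: one circular sample buffer, with trigger events kept as bare indices into it, and "history items" that are just windows around those indices.  This is entirely my own hypothetical sketch (the class and method names are invented), not any vendor's firmware:

```python
from collections import deque

class CircularCapture:
    """Toy model: one circular sample buffer shared by everything,
    with trigger events stored as bare sample indices into it.
    (Hypothetical sketch, not any scope's actual implementation.)"""

    def __init__(self, depth):
        self.depth = depth          # total capture memory, in samples
        self.samples = [0.0] * depth
        self.head = 0               # absolute index of the next sample
        self.triggers = deque()     # absolute indices of trigger events

    def acquire(self, value, is_trigger=False):
        self.samples[self.head % self.depth] = value
        if is_trigger:
            self.triggers.append(self.head)
        self.head += 1
        # Forget trigger records whose samples have been overwritten.
        while self.triggers and self.triggers[0] < self.head - self.depth:
            self.triggers.popleft()

    def history_view(self, n, half_width):
        """Samples around the n-th surviving trigger.  Views of nearby
        triggers can overlap: they are windows into one buffer, not
        separate captures."""
        center = self.triggers[n]
        lo = max(center - half_width, self.head - self.depth, 0)
        hi = min(center + half_width, self.head)
        return [self.samples[i % self.depth] for i in range(lo, hi)]
```

With two triggers only a couple of samples apart, the two history views share samples, which is exactly the overlap behavior described above and absent from the scopes discussed.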


Quote
But you don't seem to understand how it works. For instance, on Siglent touch scopes, triggering is decoupled from memory capture. It runs at full speed on the incoming buffer, and pretty much simply timestamps where the trigger happened while the scope endlessly fills a circular buffer. A trigger event is, in a way, simply a pointer to a memory location, plus a mark that the scope should process the data.

Then why can't you zoom out from within any history item and see other trigger events from within that zoomed out view of the history item?  At least, I've never been able to do that on any of the scopes I have (including my 2104X+).

Quote
It is not triggering that results in re-trigger blind time. It is the processing of data.

That presumes that processing of data can't happen in parallel with the triggering mechanism.  I'm skeptical.  The main issue I see with this is parallel memory access.

If I'm not mistaken, the trigger re-arm time is always longer than the amount of time represented by the capture buffer.  And if that's so, then that proves my point.  What I describe decouples things such that the trigger re-arm time is essentially a constant: just long enough for the scope to reset the hardware associated with the triggering mechanism (e.g., registers and other things used in the FPGA logic), record the event time and other required items associated with the trigger event, and, if necessary, resume capturing (presuming that capturing isn't independent of trigger re-arm).

Quote
In fact, the frequency counter uses parts of edge trigger mechanism to count frequency...
And that works very fast .....
But it only counts and that is fast to do.

True enough.  I'll have to play with it with very long capture lengths to see how it behaves and how often it updates under those conditions.

« Last Edit: May 23, 2024, 06:00:07 am by kcbrown »
 

Offline kcbrownTopic starter

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
In fact, the frequency counter uses parts of edge trigger mechanism to count frequency...
And that works very fast .....
But it only counts and that is fast to do.

True enough.  I'll have to play with it with very long capture lengths to see how it behaves and how often it updates under those conditions.

Yep, that works just as you described, even with very long captures.  That shows that the approach I described can, at least in principle, work.

It occurs to me that the rate at which the trigger point is recorded doesn't have to keep up with the rate at which the trigger fires.  When necessary, the trigger points can be "backfilled", with the history mechanism built from recorded and backfilled trigger points.  The trigger point recording just has to fire often enough to keep the display processing mechanism happy and reasonably accurate (for things like glitch display and such).
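A backfill pass like that could be as simple as an offline scan over the raw buffer.  This is a hedged illustration only (a real implementation would presumably run in FPGA logic or over decimated data, and would have to honor holdoff and hysteresis):

```python
def find_rising_edges(samples, threshold):
    """Recover every edge-trigger location from raw data after the
    fact, so the real-time tagger only has to record enough events
    to keep the display fed.  (Illustrative sketch only.)"""
    return [i for i in range(1, len(samples))
            if samples[i - 1] < threshold <= samples[i]]
```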

Now, I should note that you obviously want the option of limiting your capture size: not for the purpose of determining how long the trigger delay is (that's a separate setting), but so that if the delay between trigger firings is longer than what you're interested in, the capture memory isn't used up needlessly by waveform data you have no interest in.  So you'll want to be able to define the maximum width of a capture.  For this to make sense, a capture width defined this way would have to be an integer fraction of total capture memory.
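That integer-fraction constraint might look like this (a hypothetical rounding-down policy; a real scope could round differently, or only offer a fixed menu of sizes):

```python
def capture_width(total_depth, requested_width):
    """Round a requested capture width down so that total memory
    splits into an integer number of equal-sized captures.
    (Hypothetical policy for illustration.)"""
    n_captures = max(total_depth // requested_width, 1)
    return total_depth // n_captures
```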

What would the history look like if you use that mechanism?  No different than it would if you used the entirety of memory for a capture, save for one thing: the degree to which you could "zoom out" from a given history event trigger point would be limited by the capture size.  You could still have multiple trigger events per capture, and thus multiple history events per capture, and the history mechanism would still simply switch you between views of a given capture.  The difference is that instead of a single large capture in which all history events are contained, you'd have multiple captures in which those events are contained.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27357
  • Country: nl
    • NCT Developments
One thing you need to keep in mind is that history mode is often of limited use. Doing analysis, decoding, or search through a bunch of acquisitions in the history / segmented buffer is often harder or impossible compared to having a single, long capture. Abilities differ vastly between oscilloscopes and there is little functional overlap amongst manufacturers. For example: Keysight is the only manufacturer (I know of) which supports showing a decode table for all segments in memory (with the ability to hop from one segment to the other by selecting a row in the decode table). GW Instek OTOH supports doing statistical analysis / measurements over segments in memory. In most cases the segments are treated as separate, unrelated acquisitions, which reduces the usefulness. You could get a lot more information if you can do analysis / measurements which span all or a selection of segments.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online TomKatt

  • Frequent Contributor
  • **
  • Posts: 503
  • Country: us
One thing you need to keep in mind is that history mode is often of limited use. Doing analysis, decoding, or search through a bunch of acquisitions in the history / segmented buffer is often harder or impossible compared to having a single, long capture. Abilities differ vastly between oscilloscopes and there is little functional overlap amongst manufacturers. For example: Keysight is the only manufacturer (I know of) which supports showing a decode table for all segments in memory (with the ability to hop from one segment to the other by selecting a row in the decode table). GW Instek OTOH supports doing statistical analysis / measurements over segments in memory. In most cases the segments are treated as separate, unrelated acquisitions, which reduces the usefulness. You could get a lot more information if you can do analysis / measurements which span all or a selection of segments.
This is one reason I think I’d really like to try a Pico scope….  Seems like using the pc for some kind of buffer / storage would allow that kind of flexibility.  There’s probably a limit to how quickly you can go for the timebase, but still…
Several Species of Small Furry Animals Gathered Together in a Cave and Grooving with a PICt
 

Offline kcbrownTopic starter

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
One thing you need to keep in mind is that history mode is often of limited use. Doing analysis, decoding, or search through a bunch of acquisitions in the history / segmented buffer is often harder or impossible compared to having a single, long capture.

That seems to argue in favor of the approach I described, wherein history elements are mere references to trigger locations within captures and not to captures themselves.  What I describe makes history elements a many-to-one mapping onto captures, such that you can have many history elements within a single capture.  What you're talking about here is with respect to captures, and the history mechanism in the case you describe is merely a means of reviewing those captures.  One reason history mode is of limited use is that in all the implementations I've seen, it's mapped one-to-one with captures.  But with a many-to-one mapping, the number of captures you have can be as little as one while the number of history elements will be as many as there are trigger events in the capture.  Multiple captures with that kind of architecture would be useful only if you explicitly need uncaptured dead time between trigger events, which can be the case if the time between events is much larger than the amount of time of interest, or so large that a single capture would exceed the memory capacity of the scope.

Regardless, I agree, you really do want the ability to perform analysis on all selected or stored captures, not just the one being viewed, as you mention.

 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 6992
  • Country: hr
One thing you need to keep in mind is that history mode is often of limited use...

To you buddy. To you.
People don't use scopes only for decoding. In point of fact, many don't use them for decoding at all.

Picoscope can decode across the previous triggers and segments. Also can do DeepMeasure over previous triggers.
Siglent can do statistical analysis across History. And also can search across the History for various analog parameters.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 6992
  • Country: hr

That seems to argue in favor of the approach I described, wherein history elements are mere references to trigger locations within captures and not to captures themselves.  What I describe makes history elements a many-to-one mapping onto captures, such that you can have many history elements within a single capture.  What you're talking about here is with respect to captures, and the history mechanism in the case you describe is merely a means of reviewing those captures.  One reason history mode is of limited use is that in all the implementations I've seen, it's mapped one-to-one with captures.  But with a many-to-one mapping, the number of captures you have can be as little as one while the number of history elements will be as many as there are trigger events in the capture.  Multiple captures with that kind of architecture would be useful only if you explicitly need uncaptured dead time between trigger events, which can be the case if the time between events is much larger than the amount of time of interest, or so large that a single capture would exceed the memory capacity of the scope.

Regardless, I agree, you really do want the ability to perform analysis on all selected or stored captures, not just the one being viewed, as you mention.

You are talking all the time about a single long acquisition into a circular, always-rewriting buffer (basically a FIFO), with a triggering/tagging engine that keeps a list of pointers to "trigger equivalent" places in that buffer.
Ok fine, I get that.
What is the purpose?

If I have such a timebase that I have 20 periods on the screen, the scope will internally have 20 trigger events just for that single screen. Why? I already have them on screen.

What exactly do we show on the screen? If I have a long (100 Mpt) buffer I can have, say, 20,000 "trigger equivalent" points inside that buffer. What do I render for the screen? Where do I start? What do I show?

What do you do with measurements? On what data do you do measurements? How do you handle that?

The whole point of triggering is to wait for an event in order to ignore irrelevant data, or to synchronize the waveform display with the signal so as to ensure a stable, repetitive waveform.

This way I can capture for hours to get 10 events I need.

With your architecture I cannot EVER have any history of events older than what the scope can capture in its buffer at the current sample rate.... Your proposed scope always keeps rewriting anything older than a certain short time.
If you have a 5 GS/s scope with a 250 Mpts buffer you will never, at any time, have more than 50 ms worth of historic data from the moment you stopped the scope.

You always have only what is equivalent to a single long capture. And what you call a "trigger" is simply a search function that finds every "trigger equivalent" place in that capture.
 
The following users thanked this post: rf-loop, Performa01, egonotto

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4677
  • Country: au
    • send complaints here
If I have such a timebase that I have 20 periods on the screen, the scope will internally have 20 trigger events just for that single screen. Why? I already have them on screen.

What exactly do we show on the screen? If I have a long (100 Mpt) buffer I can have, say, 20,000 "trigger equivalent" points inside that buffer. What do I render for the screen? Where do I start? What do I show?
Ends up being like LeCroy WaveScan or other brands' "search". Even the baby Siglents have that sort of capability these days.
 
The following users thanked this post: rf-loop, Performa01, tautech, 2N3055, Martin72

Offline kcbrownTopic starter

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
You are talking all the time about a single long acquisition into a circular, always-rewriting buffer (basically a FIFO), with a triggering/tagging engine that keeps a list of pointers to "trigger equivalent" places in that buffer.
Ok fine, I get that.
What is the purpose?

The purpose of decoupling the trigger events from the buffer like that is performance, flexibility, and (arguably) ease of use.

Look at what happens right now.  Right now, the scope operates roughly this way:
  • It continuously records samples into the capture buffer, where the width of the capture buffer is at a minimum the points width of the screen (time width of the screen multiplied by sample rate), until a trigger event occurs
  • When a trigger event occurs, it continues to record samples until it reaches the capture width
  • Once the capture width is reached, it processes the data, waits until some amount of processing is complete, and then resumes acquisition in a new capture buffer.
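The sequential flow in those three steps can be caricatured in a few lines.  This is a toy timeline with made-up numbers, intended only to show where the blind time comes from:

```python
def sequential_scope(trigger_times, capture_len, process_len):
    """Conventional flow (caricature): after each accepted trigger
    the scope fills a fixed-length capture and then processes it,
    staying blind to any triggers that arrive in the meantime."""
    captured, missed, busy_until = [], [], 0
    for t in trigger_times:              # event times, ascending
        if t >= busy_until:
            captured.append(t)
            busy_until = t + capture_len + process_len
        else:
            missed.append(t)
    return captured, missed
```

With triggers at t = 0, 1, 2 and 10, a 2-unit capture and 3 units of processing, the events at t = 1 and 2 fall inside the blind time and are lost.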

But with what I'm proposing, data is acquired continuously (unless the user has specified that the capture memory is to be split into segments for the purpose of not capturing too much "dead time") until the scope is stopped, and all processing happens in parallel with that.  The trigger exists for the purpose of notifying external hardware of a trigger event and for aligning waveform display operations.

I was going to write this up separately, but in light of the above, I think it's a good response to some of the points you raised:

Thinking about this further, it occurs to me that with a digital trigger system, there are only a few real reasons for any sort of real-time recording of the locations of trigger events within a capture, aside from the one that defines the beginning of the capture (the result of the capture size and the location of the trigger point within it, typically halfway towards the end of the capture): updating statistics, updating the display, driving the "trigger out" signal, and optimizing after-the-fact processing.  But even for those things, the events don't need to be recorded, they only need to be processed (or, in the case of the trigger out signal, propagated).  The display processing, for instance, has to build what amounts to a 2D histogram at a rate of at most 30 times per second.  Other statistics arguably need to be updated at only that rate as well (if that; a third of that rate would be just fine), since ultimately their purpose is to be displayed to the user.

But the implication of that is clear: save for notification of external hardware, the trigger mechanism exists for the purpose of causing the scope to do something only when the amount of time since the last trigger event exceeds the display refresh period.  Everything else can be determined through after-the-fact processing.

This makes the "waveforms per second" metric, when the architecture takes advantage of the above, almost meaningless.  Trigger events could be recorded anyway, but doing so would be solely for the purpose of optimization, and the recording mechanism would in principle need to record at most only one trigger event per time width of the display -- everything else could be derived after the fact if necessary.  "Waveforms per second" then becomes a measure of how many trigger events can be processed per second for display purposes.

Oddly, though, this argues in favor of many of the characteristics of current approaches, more or less, with one caveat: current approaches don't allow for a fast update rate of the display when the display is showing a relatively short period of time when the capture buffer is far larger.  They force you to choose between the two, between a fast display update rate and a large capture buffer.  And that is one of the primary advantages of the approach I'm speaking of here.

To illustrate, suppose your capture buffer allows for a capture of one second's worth of data, but your display is showing one millisecond of time, and the waveform has trigger events within it every, say, microsecond.  The approach taken by scopes currently would result in the display updating once per second.  The approach I'm proposing here would allow the display to update as often as 30 times a second (or even 60, if the processing can support it) despite the fact that the capture buffer is 1 second in length.  The display would simply show an aggregate of 1 millisecond frames within the larger capture (with the usual intensity grading and all that).

How does this differ from simply setting the time width of a capture to 1 millisecond and letting the history store the individual captures that result?  Simple: it allows you to, with the scope stopped, zoom out from the 1 millisecond view all the way to the largest 1 second view, seamlessly.  It combines the advantages of a 1 millisecond capture and a 1 second capture.  It also makes "history" something that can be shown relative to the current display time width in the same capture.

To illustrate what I mean by that, suppose you've stopped the scope and are viewing one millisecond's worth of time.  Flipping to the next history event would then move you to the next millisecond's worth of data in the capture, more or less (this is so because there's a trigger event every microsecond, so there would be 1000 trigger events within the next millisecond's worth of data, one of which will be exactly 1 millisecond after the trigger event that defines the history event you had just been viewing).  Now let's say you zoom in so that the time on the screen is 100 microseconds.  Flipping to the next history event gets you to the next 100 microseconds' worth of data in the capture (or, optionally, to the next trigger location even if it's within the current view, which could be more than 100 microseconds away from the current trigger point or, depending on preferences, present in the current view, thus shifting your view by one microsecond).  Again, this is so because there's a trigger event every microsecond.

History, here, simply becomes a way of paging through the data.  This is different from simply scrolling through the waveform (which you can also do) because it's oriented around trigger events, not just time.

To illustrate the difference, suppose that within the above capture there's a section where no trigger occurred for, say, 10 milliseconds.  In that case, flipping from the last history item that immediately precedes the gap to the next history item would take you to the first trigger event after that gap, while scrolling through the waveform obviously would force you to scroll past the gap manually to get to the next trigger event.
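Paging by trigger events rather than by time could be as simple as the following (hypothetical helper; the name and signature are mine, for illustration):

```python
def next_history_event(trigger_times, current, view_width):
    """Advance at least one view-width past the current position;
    if the next trigger lies beyond a long quiet gap, jump straight
    to it instead of scrolling across the gap manually."""
    for t in trigger_times:          # assumed sorted ascending
        if t >= current + view_width:
            return t
    return None                      # no further trigger events
```

With triggers every microsecond for the first millisecond and then a 10 ms gap, stepping from the last pre-gap event lands directly on the first post-gap trigger.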




Quote
If I have such timebase that I have 20 periods on the screen the scope will internally have 20 trigger events just for that single screen. Why? I already have them on screen..

You might have them on the screen, but that doesn't guarantee that they're obvious just by looking at the waveform.  That depends on the trigger parameters.  Even so, whether and how the trigger points are indicated is something you'd want control over.

Quote
What exactly do we show on the screen? If I have a long (100 Mpt) buffer I can have, say, 20,000 "trigger equivalent" points inside that buffer. What do I render for the screen? Where do I start? What do I show?

How much of the buffer are you showing on the screen?  All of it?  If so, then obviously you could only show a subset of the trigger points.  How many you can show, and how they would be presented, depends on the view you're showing of the capture.


Quote
What do you do with measurements? On what data you do measurements? How do you handle that?

It depends a lot on whether the scope is stopped or is running.  A running scope imposes far more constraints on the measurements that can be performed than a stopped scope.  A stopped scope can update the measurements on the basis of all recorded points, while a running scope almost certainly requires decimation of the points.


Quote
The whole point of triggering is to wait for an event in order to ignore irrelevant data, or to synchronize the waveform display with the signal so as to ensure a stable, repetitive waveform.

I don't see how my proposed approach conflicts with that.

Quote
This way I can capture for hours to get 10 events I need.

That remains the case even with the approach I mentioned, because: "Now, I should note that you obviously want the option of limiting your capture size: not for the purpose of determining how long the trigger delay is (that's a separate setting), but so that if the delay between trigger firings is longer than what you're interested in, the capture memory isn't used up needlessly by waveform data you have no interest in.  So you'll want to be able to define the maximum width of a capture.  For this to make sense, a capture width defined this way would have to be an integer fraction of total capture memory."

But even that might not be sufficiently flexible.  This is so because specifying the capture size in that way means that you're artificially limiting the length of your capture from the time of the first trigger point in the capture, while you might well need to, instead, specify the maximum amount of "dead time" after the last trigger point in the capture.  See below for an example of why you might want to do this.

Rather than breaking up memory into equal-sized segments, it may make more sense to capture sections of trigger activity, even if they're of varying sizes.
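That dead-time termination rule can be sketched as a grouping pass over trigger timestamps (a hypothetical illustration; `pre` and `post` are the padding amounts discussed above, and a real implementation would also have to respect remaining memory):

```python
def segment_by_activity(trigger_times, pre, post):
    """Group triggers into variable-length captures: a capture ends
    once 'post' time elapses with no further trigger, and each
    capture keeps 'pre' of lead-in before its first trigger."""
    captures, start, prev = [], None, None
    for t in trigger_times:              # assumed sorted ascending
        if start is None:
            start, prev = t - pre, t
        elif t - prev > post:
            captures.append((start, prev + post))
            start, prev = t - pre, t
        else:
            prev = t
    if start is not None:
        captures.append((start, prev + post))
    return captures
```

A burst of triggers at t = 100 and 105 followed by an isolated one at t = 500 yields two captures of different lengths, rather than two equal-sized segments.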


Quote
With your architecture I cannot EVER have any history of events older than what the scope can capture in its buffer at the current sample rate....

That's true only if you set the capture buffer to span the entirety of memory.  But as I mentioned, you don't have to do that.  It's clearly advantageous to be able to specify capture buffers that are smaller than the entirety of memory, precisely because you want to account for the possibility of long periods between trigger events.


There are situations in which you want to be able to see, in "real time", a small but continuously updating subset of the entire capture, and you don't know in advance how long your capture needs to be in order to get everything of interest.  A serial decoding session of something that is emitting messages of arbitrary length at arbitrary times might be a good example of this.  I ran into a situation like this when I was examining the SPI bus used by a computer for fetching data from the BIOS.  I was forced to either perform a single capture of a long period of time, or to perform multiple captures where each was of a fixed width, when the nature of the data I wanted to capture was that it had bursts of arbitrary length activity with arbitrary amounts of dead time between each burst.  That presents a situation that no scope I'm aware of can deal with properly.  But the architecture I'm proposing can.

« Last Edit: June 19, 2024, 05:37:18 am by kcbrown »
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 6992
  • Country: hr
..................
There are situations in which you want to be able to see, in "real time", a small but continuously updating subset of the entire capture, and you don't know in advance how long your capture needs to be in order to get everything of interest.  A serial decoding session of something that is emitting messages of arbitrary length at arbitrary times might be a good example of this.  I ran into a situation like this when I was examining the SPI bus used by a computer for fetching data from the BIOS.  I was forced to either perform a single capture of a long period of time, or to perform multiple captures where each was of a fixed width, when the nature of the data I wanted to capture was that it had bursts of arbitrary length activity with arbitrary amounts of dead time between each burst.  That presents a situation that no scope I'm aware of can deal with properly.  But the architecture I'm proposing can.


It is hard for me to understand how you don't see contradictions in your own explanations.

Let me try to explain through practical examples.


1. You have bursts of data 10-35 µs in length coming in every 100ms to 2 sec. You have 100 Mpts scope memory. Scope is sampling at 1GS/s. Make note that per your request, you have no knowledge of data beforehand. How many packets of data you can have in memory at any time?
2. You have bursts of data 10-35 ms in length coming in every 100ms to 2 sec. You have 100 Mpts scope memory. Scope is sampling at 1GS/s. Make note that per your request, you have no knowledge of data beforehand. How many packets of data you can have in memory at any time?

3. You have bursts of data 10-35 µs in length coming in every 100ms to 20 sec. You have 100 Mpts scope memory. Scope is sampling at 1GS/s. You setup scope and let it run. You go to lunch.
Make note that per your request, you have no knowledge of data beforehand. How many packets of data you can find in memory at that time?
4. You have bursts of data 10-35 ms in length coming in every 100ms to 20 sec. You have 100 Mpts scope memory. Scope is sampling at 1GS/s. You setup scope and let it run. You go to lunch.
Make note that per your request, you have no knowledge of data beforehand. How many packets of data you can find in memory at that time?
 
The following users thanked this post: egonotto

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4677
  • Country: au
    • send complaints here
That presents a situation that no scope I'm aware of can deal with properly.  But the architecture I'm proposing can.
Then you'll need to explain it better, instead of a wall of text describing things it will solve but not how it will do that in a new way.

There are already scopes where you can adjust the memory depth of the segments (i.e. change the number of segments/history pages while keeping the total memory use maximised) from 1 to some arbitrary high number.

Maintaining a continuous recording (à la digitiser) falls apart because few scopes have human timescales of recording in memory at the full sample rate. Scopes are triggered for a reason. When sample memory is 10x higher than sample rate, then it might appear as a feature.
 

Offline kcbrownTopic starter

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
..................
There are situations in which you want to be able to see, in "real time", a small but continuously updating subset of the entire capture, and you don't know in advance how long your capture needs to be in order to get everything of interest.  A serial decoding session of something that is emitting messages of arbitrary length at arbitrary times might be a good example of this.  I ran into a situation like this when I was examining the SPI bus used by a computer for fetching data from the BIOS.  I was forced to either perform a single capture of a long period of time, or to perform multiple captures where each was of a fixed width, when the nature of the data I wanted to capture was that it had bursts of arbitrary length activity with arbitrary amounts of dead time between each burst.  That presents a situation that no scope I'm aware of can deal with properly.  But the architecture I'm proposing can.


It is hard for me to understand how you don't see contradictions in your own explanations.

Let me try to explain through practical examples.


1. You have bursts of data 10-35 µs in length coming in every 100ms to 2 sec. You have 100 Mpts scope memory. Scope is sampling at 1GS/s. Make note that per your request, you have no knowledge of data beforehand. How many packets of data you can have in memory at any time?

100 Mpts of scope memory at 1GS/s gets you 100 ms worth of memory at that sample rate.  If you have no options save for performing a single capture then, with the bursts coming in every 100ms to 2 seconds, clearly you'd only get one burst.

But I'm not arguing that the architecture must use the entirety of memory for a single capture.  As I said previously:

That remains the case even with the approach I mentioned, because: "Now, I should note that you obviously want the option of limiting your capture size: not for the purpose of determining how long the trigger delay is (that's a separate setting), but so that if the delay between trigger firings is longer than what you're interested in, the capture memory isn't used up needlessly by waveform data you have no interest in.  So you'll want to be able to define the maximum width of a capture.  For this to make sense, a capture width defined this way would have to be an integer fraction of total capture memory."

But even that might not be sufficiently flexible.  This is so because specifying the capture size in that way means that you're artificially limiting the length of your capture from the time of the first trigger point in the capture, while you might well need to, instead, specify the maximum amount of "dead time" after the last trigger point in the capture.  See below for an example of why you might want to do this.

Rather than breaking up memory into equal-sized segments, it may make more sense to capture sections of trigger activity, even if they're of varying sizes.

So let's suppose that the mechanism I describe above is in play.  How many bursts you can capture depends on the capture termination conditions and the signal characteristics.  Let's suppose that the data is an active high signal with a maximum high time of 1 us, so just to keep things safe you define a capture termination condition of 10 us after the last trigger (meaning: if 10 us elapses without the trigger firing, the capture stops and the scope prepares to gather a new capture in the remaining memory).  You also define the maximum amount of capture prior to the capture's trigger to be 10 us, using whatever mechanisms are appropriate for that.  So the capture will contain 20 us of padding plus whatever the length of the burst is.  Now each capture ranges between 30 us and 55 us in length.  With 100 ms of capture memory, that amounts to a minimum of 1818 captures, and (if all bursts happen to be of 10 us duration) a maximum of 3333.


Quote
2. You have bursts of data 10-35 ms in length coming in every 100 ms to 2 s. You have 100 Mpts of scope memory. The scope is sampling at 1 GS/s. Note that, per your request, you have no knowledge of the data beforehand. How many packets of data can you have in memory at any time?

Now the bursts are 3 orders of magnitude longer.  Let's suppose that the maximum high time is now 1 ms.  Knowing that you have limited memory, you have to compromise on your capture start and termination conditions, so let's suppose you give it 5ms before and 5ms after, for 10 ms worth of padding.  If you can't change your sample rate then that'll give you captures of a duration somewhere between 20 ms and 45 ms, so now you get somewhere between 2 and 5 captures.


Quote
3. You have bursts of data 10-35 µs in length coming in every 100 ms to 20 s. You have 100 Mpts of scope memory. The scope is sampling at 1 GS/s. You set up the scope and let it run. You go to lunch.
Note that, per your request, you have no knowledge of the data beforehand. How many packets of data can you find in memory at that time?

The answer here is the same as the answer to your first question.


Quote
4. You have bursts of data 10-35 ms in length coming in every 100 ms to 20 s. You have 100 Mpts of scope memory. The scope is sampling at 1 GS/s. You set up the scope and let it run. You go to lunch.
Note that, per your request, you have no knowledge of the data beforehand. How many packets of data can you find in memory at that time?

The answer here is the same as the answer to your second question.
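Since all four answers come down to the same arithmetic, here is a short sketch that reproduces the numbers above (the function name and the back-to-back packing assumption are mine; a real scope would lose a little memory to re-arm time and bookkeeping):

```python
def capture_counts(mem_s, burst_min_s, burst_max_s, pre_pad_s, post_pad_s):
    """How many variable-length captures fit into mem_s seconds of sample
    memory, if each capture spans pre-pad + burst + post-pad and captures
    are packed back to back.  Returns (worst case, best case)."""
    longest = burst_max_s + pre_pad_s + post_pad_s    # every burst at maximum length
    shortest = burst_min_s + pre_pad_s + post_pad_s   # every burst at minimum length
    return int(mem_s / longest), int(mem_s / shortest)

# Examples 1 and 3: 10-35 us bursts, 10 us padding either side, 100 ms of memory
print(capture_counts(100e-3, 10e-6, 35e-6, 10e-6, 10e-6))  # (1818, 3333)

# Examples 2 and 4: 10-35 ms bursts, 5 ms padding either side
print(capture_counts(100e-3, 10e-3, 35e-3, 5e-3, 5e-3))    # (2, 5)
```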

« Last Edit: June 20, 2024, 09:39:39 pm by kcbrown »
 

Offline kcbrownTopic starter

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
That presents a situation that no scope I'm aware of can deal with properly.  But the architecture I'm proposing can.
Then you'll need to explain it better, instead of a wall of text describing things it will solve but not how it will do that in a new way.

Perhaps my last reply to 2N3055 will help clarify things somewhat.

Quote
There are already scopes where you can adjust the memory depth of the segments (i.e. change the number of segments/history pages while keeping the total memory use maximised) from 1 to some arbitrary high number.

Yes, there are.  But all of them define the screen refresh period (when the trigger fires often enough, of course) on the basis of the capture size, not the displayed time period.  Some scopes (like most Siglent scopes, save perhaps for the most recent models) define those two things to be the same.


Quote
Maintaining a continuous recording (a la digitiser) falls apart because few scopes have human scales of recording in memory at the full sample rate. Scopes are triggered for a reason. When sample memory is 10x higher than sample rate then it might appear as a feature.

That may be.  But I'm not arguing that we should dispense with segments.  I'm arguing that we should dispense with the 1:1 mapping between captures and trigger events, and that the display update rate should be defined by the time delay between trigger firings or, optionally, the time width of the display (whichever is longer, if the time width of the display is considered), even if the capture width is longer than both.

Perhaps another way of saying it is: the waveform update rate should be independent of the capture size.


Maybe I can illustrate my point with a question: suppose you're looking at an SPI bus signal, you've got the display zoomed in so that it's showing a single decoded value, and your trigger is set up so that it fires for every value.  How often will your display refresh to show a new value, and how many such values will you acquire in a single capture?  What will happen to the waveform data between displayed values?

What you want to see in real time and what you want to capture are not necessarily the same thing.

« Last Edit: June 20, 2024, 10:16:39 pm by kcbrown »
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4677
  • Country: au
    • send complaints here
There are already scopes where you can adjust the memory depth of the segments (i.e. change the number of segments/history pages while keeping the total memory use maximised) from 1 to some arbitrary high number.
Yes, there are.  But all of them define the screen refresh period (when the trigger fires often enough, of course) on the basis of the capture size, not the displayed time period.  Some scopes (like most Siglent scopes, save perhaps for the most recent models) define those two things to be the same.
Perhaps start with getting the terminology correct. Displays/screens refresh at their own rate (generally some video standard like 24/25/30/50/60 Hz) and are basically disconnected entirely from triggering and waveform memory.

Maintaining a continuous recording (a la digitiser) falls apart because few scopes have human scales of recording in memory at the full sample rate. Scopes are triggered for a reason. When sample memory is 10x higher than sample rate then it might appear as a feature.
That may be.  But I'm not arguing that we should dispense with segments.  I'm arguing that we should dispense with the 1:1 mapping between captures and trigger events, and that the display update rate should be defined by the time delay between trigger firings or, optionally, the time width of the display (whichever is longer, if the time width of the display is considered), even if the capture width is longer than both.
Based on your new explanation above this is all getting even more inconsistent:
So let's suppose that the mechanism I describe above is in play.  How many bursts you can capture depends on the capture termination conditions and the signal characteristics.  Let's suppose that the data is an active high signal with a maximum high time of 1 us, so just to keep things safe you define a capture termination condition of 10 us after the last trigger (meaning: if 10 us elapses without the trigger firing, the capture stops and the scope prepares to gather a new capture in the remaining memory).  You also define the maximum amount of capture prior to the capture's trigger to be 10 us, using whatever mechanisms are appropriate for that.  So the capture will contain 20 us of padding plus whatever the length of the burst is.  Now each capture ranges between 30 us and 55 us in length.  With 100 ms of capture memory, that amounts to a minimum of 1818 captures, and (if all bursts happen to be of 10 us duration) a maximum of 3333.
So you say you want the segments independent of triggers, yet they should be only from triggers + some additional padding before ... and after some definition of "idle".

That padding might be some novel feature but it sounds rather complex to implement and produce a useful UI for that isn't confusing.

Why not just have your segments be larger than your expected largest packet? and position the trigger within that to have pre and post padding? We have this right now and you're asking for some minor improvement over that, at great complexity.

Perhaps another way of saying it is: the waveform update rate should be independent of the capture size.
Technically/literally/mathematically/physically impossible.

Maybe I can illustrate my point with a question: suppose you're looking at an SPI bus signal, you've got the display zoomed in so that it's showing a single decoded value, and your trigger is set up so that it fires for every value.  How often will your display refresh to show a new value, and how many decoded values will you acquire in a single capture?
That depends on so many variables it's impossible to answer (many of those are hidden or stochastic, and can only be determined by testing the specific application).

As far as I can tell you're trying to extend the zoom out nonsense with a new dimension of:
"why can't my scope do both at the same time"
Well, that's because it would need to double/duplicate various parts of the system for this imagined need. Just use two scopes if it's that important to have both a high waveform update rate and a continuous/wide acquisition of the same events.
 

Offline kcbrownTopic starter

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
Perhaps start with getting the terminology correct. Displays/screens refresh at their own rate (generally some video standard like 24/25/30/50/60 Hz) and are basically disconnected entirely from triggering and waveform memory.

Apologies.  I meant display update rate, i.e. the rate at which what the display is showing is updated.

Quote
Maintaining a continuous recording (a la digitiser) falls apart because few scopes have human scales of recording in memory at the full sample rate. Scopes are triggered for a reason. When sample memory is 10x higher than sample rate then it might appear as a feature.
That may be.  But I'm not arguing that we should dispense with segments.  I'm arguing that we should dispense with the 1:1 mapping between captures and trigger events, and that the display update rate should be defined by the time delay between trigger firings or, optionally, the time width of the display (whichever is longer, if the time width of the display is considered), even if the capture width is longer than both.
Based on your new explanation above this is all getting even more inconsistent:
So let's suppose that the mechanism I describe above is in play.  How many bursts you can capture depends on the capture termination conditions and the signal characteristics.  Let's suppose that the data is an active high signal with a maximum high time of 1 us, so just to keep things safe you define a capture termination condition of 10 us after the last trigger (meaning: if 10 us elapses without the trigger firing, the capture stops and the scope prepares to gather a new capture in the remaining memory).  You also define the maximum amount of capture prior to the capture's trigger to be 10 us, using whatever mechanisms are appropriate for that.  So the capture will contain 20 us of padding plus whatever the length of the burst is.  Now each capture ranges between 30 us and 55 us in length.  With 100 ms of capture memory, that amounts to a minimum of 1818 captures, and (if all bursts happen to be of 10 us duration) a maximum of 3333.
So you say you want the segments independent of triggers,

I didn't say I want the segments to be independent of triggers, I said I want to remove the 1:1 mapping between the two.  Quite obviously, a segment should contain at least one trigger event and should be generated as a consequence of at least one trigger event.


Quote
yet they should be only from triggers + some additional padding before ... and after some definition of "idle".

The padding before is something we can already define in scopes currently, and usually get by default: it's all the captured points that precede the trigger that the capture is oriented around.

It's the padding afterwards that we can't currently define in the way I described.  Right now that "padding" is defined relative to the first trigger event in the capture, and what I'm suggesting is that it be optionally defined relative to the last trigger event in the capture.


Quote
That padding might be some novel feature but it sounds rather complex to implement and produce a useful UI for that isn't confusing.

Complex to implement?  Perhaps.  It's hard to see how it would be terribly complicated.  It can be as simple as "stop the current capture after X amount of time since the last trigger was seen".

It does mean that captures would then be variable length, whereas right now their length is predefined.  How much of an effect would that have on current implementations?  I simply can't say.
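The termination rule itself ("stop the current capture after X amount of time since the last trigger was seen") can be stated in a few lines of Python operating on trigger timestamps (the function and parameter names are hypothetical, not an existing scope API):

```python
def segment_by_idle(trigger_times, idle_timeout, pre_pad, post_pad):
    """Group trigger timestamps into variable-length captures: a capture
    ends once idle_timeout elapses with no further trigger, and spans
    from pre_pad before its first trigger to post_pad after its last.
    A sketch only; all values share one time unit."""
    captures = []
    start = last = None
    for t in trigger_times:
        if start is None:
            start = last = t                               # first trigger of a capture
        elif t - last > idle_timeout:
            captures.append((start - pre_pad, last + post_pad))
            start = last = t                               # begin a new capture
        else:
            last = t                                       # burst continues
    if start is not None:
        captures.append((start - pre_pad, last + post_pad))
    return captures

# Triggers at 0, 2, 5, then a long gap, then 100 (times in us):
print(segment_by_idle([0, 2, 5, 100], idle_timeout=10, pre_pad=10, post_pad=10))
# [(-10, 15), (90, 110)]  -- two captures of different lengths
```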


Quote
Why not just have your segments be larger than your expected largest packet? and position the trigger within that to have pre and post padding? We have this right now and you're asking for some minor improvement over that, at great complexity.

Because that presumes I know what my largest expected packet will be.  That's not a given, at all, and my original SPI bus example should make that plain.

Moreover, it forces me to sacrifice capture memory, because it forces the capture length of every capture to be the maximum expected length.  And for what?  What benefit do I get from capture memory that contains waveform points that I have no interest in?


Quote
Perhaps another way of saying it is: the waveform update rate should be independent of the capture size.
Technically/literally/mathematically/physically impossible.

Really?

If my display width is 1 millisecond, my capture length is 100 milliseconds, and my trigger fires every microsecond, why can't I update the display every 17 milliseconds by shifting the display's time position within the capture by 17 milliseconds (there's clearly no need to update it every millisecond; 17 milliseconds corresponds to roughly 60 Hz), and have the usual intensity grading on the basis of the prior 16 milliseconds' worth of waveform?   What's technically/literally/mathematically/physically impossible about that?

What's technically impossible about the trigger firing every time its conditions are met (after the minimal re-arm period, of course), irrespective of the capture size or the display width, and having it activate the external trigger-out line each time?   This already effectively happens for the frequency counter, so clearly the fact that the trigger fires doesn't automatically imply that it has to be used for everything it could be used for, right?
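The sliding-window idea above can be sketched with the stated numbers (hypothetical function; intensity grading and re-arm time are ignored):

```python
def display_updates(capture_len_s, window_s, refresh_s):
    """Start positions of the display window inside one long capture when
    the screen is updated every refresh_s seconds by sliding the window
    forward, instead of waiting for the whole capture to complete."""
    positions, t = [], 0.0
    while t + window_s <= capture_len_s:
        positions.append(t)
        t += refresh_s
    return positions

# A 1 ms display window inside a 100 ms capture, refreshed every 17 ms (~60 Hz):
# six on-screen updates during one capture, versus exactly one update with a
# scheme that redraws only once per completed capture.
print(len(display_updates(100e-3, 1e-3, 17e-3)))  # 6
```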

Quote
As far as I can tell you're trying to extend the zoom out nonsense with a new dimension of:
"why can't my scope do both at the same time"

Yes, why can't it?  That question is the fundamental basis of all improvements.  That, of course, doesn't necessarily mean that what I'm suggesting is possible (though I don't see how/why it isn't), but if it truly can't do both at the same time, then it would be helpful to know why.


Quote
Well, that's because it would need to double/duplicate various parts of the system for this imagined need.

What would need to be duplicated in order to accomplish what I describe?   The display is simply showing some part of the already-captured waveform.  That's the case even with current implementations.  The trigger is simply firing on the basis of acquired points.  That's also the case even with current implementations.  All I'm suggesting is a change in how and when the system displays the waveform, and how and when the system begins a new capture in memory.  So far, it seems the most complicated part is the notion that captures could now differ from each other in length.


Quote
Just use two scopes if it's that important to have both high waveform update rate, and a continuous/wide acquisition of the same events.

That's certainly one approach to some of it, but it doesn't address the problem of what to do when you need variable-length captures.  It doesn't address my SPI bus example.

« Last Edit: June 20, 2024, 11:33:20 pm by kcbrown »
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4677
  • Country: au
    • send complaints here
I didn't say I want the segments to be independent of triggers, I said I want to remove the 1:1 mapping between the two.  Quite obviously, a segment should contain at least one trigger event and should be generated as a consequence of at least one trigger event.
Then say that plainly, not in this many-post wall of text. Oh wait, that's how scopes ALREADY work.

That padding might be some novel feature but it sounds rather complex to implement and produce a useful UI for that isn't confusing.
Complex to implement?  Perhaps.  It's hard to see how it would be terribly complicated.  It can be as simple as "stop the current capture after X amount of time since the last trigger was seen".
"how hard can it be?"  :-DD
Design your own scope and get back to us on that.

Why not just have your segments be larger than your expected largest packet? and position the trigger within that to have pre and post padding? We have this right now and you're asking for some minor improvement over that, at great complexity.
Because that presumes I know what my largest expected packet will be.  That's not a given, at all, and my original SPI bus example should make that plain.

Moreover, it forces me to sacrifice capture memory, because it forces the capture length of every capture to be the maximum expected length.  And for what?  What benefit do I get from capture memory that contains waveform points that I have no interest in?
Not adding layers of complexity and work for some corner-case benefit. Development is not free; it either adds cost or takes away from some other area.

Perhaps another way of saying it is: the waveform update rate should be independent of the capture size.
Technically/literally/mathematically/physically impossible.
Really?

[... add specific constraints]
Capture/waveform/display lengths define an upper limit on the waveform update rate. If you want to invent your own terminology and not explain it, then don't get pissy when no-one else understands what you are trying to say. Few devices even approach the theoretical limits, because reaching them is not practically useful enough to be worth investing resources into.

As far as I can tell you're trying to extend the zoom out nonsense with a new dimension of:
"why can't my scope do both at the same time"
Yes, why can't it?
Because like zoom out, it's some corner case with little practical value. If you think it is such a valuable tool, go out and fund its development. If you want some magical for-purpose device then stop complaining and make it happen rather than asking why no-one else is giving it to you for free.

Well, that's because it would need to double/duplicate various parts of the system for this imagined need.
What would need to be duplicated in order to accomplish what I describe?   The display is simply showing some part of the already-captured waveform.  That's the case even with current implementations.  The trigger is simply firing on the basis of acquired points.  That's also the case even with current implementations.  All I'm suggesting is a change in how and when the system displays the waveform, and how and when the system begins a new capture in memory.  So far, it seems the most complicated part is the notion that captures could now differ from each other in length.
Without realising, you've asked for not one but several different things. Pretty much all of them would have some tradeoff in either requiring additional hardware or compromising some other aspect of operation. This is engineering with multiple complex interactions that you simply dismiss in your ignorance of them.

So far this appears like a lot of consumer wishes you see on forums, which boil down to:
"I want other people to spend time aggressively optimising a complex system for my narrow and unusual use case. WHY ISNT IT HAPPENING THIS IS SO SIMPLE EVEN I CAN IMAGINE IT"

I'll just throw out my usual chuckle about scopes that implement high-res/averaging mode at higher bit depths with the same capture length as the normal modes. Do I go around shouting about how scopes are throwing away memory and not using all of it, HOW DARE THEY? No, I don't do that, because it has no purpose/value. These devices are made in small volumes for specialist markets; I'm not surprised they have compromises and aren't ruthlessly optimised to use 100% of everything in all situations.
 

Offline kcbrownTopic starter

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
I didn't say I want the segments to be independent of triggers, I said I want to remove the 1:1 mapping between the two.  Quite obviously, a segment should contain at least one trigger event and should be generated as a consequence of at least one trigger event.
Then say that plainly, not this many post wall of text. Oh wait, that's how scopes ALREADY work.

I did say that plainly:

That may be.  But I'm not arguing that we should dispense with segments.  I'm arguing that we should dispense with the 1:1 mapping between captures and trigger events, and that the display update rate should be defined by the time delay between trigger firings or, optionally, the time width of the display (whichever is longer, if the time width of the display is considered), even if the capture width is longer than both.

But, apparently, not plainly enough.

As a general rule, I presume that if someone doesn't get what I mean, then I'm not saying it properly, and that applies here.  In any case, hopefully you get where I'm coming from now.

As for the claim that it's how scopes already work, if that's truly the case, then explain why it is that, for any scope I'm aware of, the trigger doesn't rearm until after the capture completes, at least for the purpose of the waveform updates and trigger out mechanism.


Quote
"how hard can it be?"  :-DD
Design your own scope and get back to us on that.

I suspected that would be coming next.   :)

I would if I could.  But note that saying "design it yourself" is not the same as saying "here's why it can't be done".  You're asserting it can't be done, and so I ask why that's so.

Now, why hasn't it been done?  That's a different question, and not one I'm asking.  I understand that these things take development work, and may well simply be covering a corner case that isn't worth the R&D to address.  On that, I can't say.  All I can say is that I've run into situations in which the mechanism I describe would be useful, because it's more flexible (near as I can tell, at any rate) than the mechanisms that are currently in use.


Quote
Capture/waveform/display lengths define an upper limit to the waveform update rate,

Yes, they do, with current implementations.  Now why must that be the case?  Explain in detail.  Feel free to point at external documents that answer the question.

I'm not being facetious here.  If there are good engineering reasons for these limits being defined as they are, I'd like to know what they are.  Because as far as I can tell, they're defined that way in large part because the original instruments that digital scopes were modeled after are analog scopes, for which that is true.


Quote
if you want to invent your own terminology and not explain it then don't get pissy when no-one else understands what you are trying to say.

What makes you believe I'm getting "pissy"?  I'm not annoyed or anything of the sort.  I'm simply explaining (or, at least, attempting to -- badly, it seems) an approach to the problem of acquisition and display that occurred to me, that differs from the current approach and which would (it seems, at least on the surface) retain the current capability while addressing others that no implementation I'm aware of addresses.


Quote
Because like zoom out, it's some corner case with little practical value.

Zoom out is a corner case with little practical value???

Nctnico would likely disagree strongly with that.


If it's truly a corner case with little practical value, then explain why most scopes are implemented such that the capture width exceeds the display coverage, i.e. they make limited zooming out possible by default.


Quote
If you think it is such a valuable tool, go out and fund its development. If you want some magical for purpose device then stop complaining and make it happen rather than asking why no-one else is giving it to you for free.

Again, I must reiterate that I am not complaining here!  I'm simply proposing an alternate approach that on its face seems more flexible than the current approach and which, if it has substantial disadvantages, I'm not aware of them.  My lack of awareness of those disadvantages doesn't mean they're not there.  I'm putting this whole thing out there so that I can learn those disadvantages.


Quote
Without realising, you've asked for not one but several different things. Pretty much all of them would have some tradeoff in either requiring additional hardware or compromising some other aspect of operation. This is engineering with multiple complex interactions that you simply dismiss in your ignorance of them.

Curing my ignorance is exactly why I brought this whole thing up in the first place.  I tossed the idea out in order to see what's wrong with it.  So if I'm ignoring something important, then by all means please enlighten me.  I can't learn from statements that say I'm ignorant.  I can learn from statements that show with specificity what I'm ignorant of.


Quote
No, I don't do that because it has no purpose/value. These devices are made in small volumes for specialist markets, I'm not surprised they have compromises and aren't ruthlessly optimised to use 100% of everything in all situations.

I'm not surprised by that either.  That doesn't mean there isn't room for improvement.  Maybe what I suggest addresses only what amounts to a corner case or two.  Maybe it's not so limited as all that.  But I will say this: the oscilloscope is a general-purpose instrument.  It wouldn't have the plethora of capabilities it has otherwise.  Some number of its capabilities are there to address corner cases, no?

I don't mind holes being poked in what I'm suggesting here.  I encourage it.  It's why I raised it to begin with.  And if someone takes it and runs with it and produces something more capable than what we currently have, then we'll all be better off for it.  If not, then so be it.
« Last Edit: June 21, 2024, 02:07:28 am by kcbrown »
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4677
  • Country: au
    • send complaints here
As for the claim that it's how scopes already work, if that's truly the case, then explain why it is that, for any scope I'm aware of, the trigger doesn't rearm until after the capture completes, at least for the purpose of the waveform updates and trigger out mechanism.
Which is inconsistent with what you already stated:
I didn't say I want the segments to be independent of triggers, I said I want to remove the 1:1 mapping between the two.  Quite obviously, a segment should contain at least one trigger event and should be generated as a consequence of at least one trigger event.
A capture can contain more than one trigger event: that was your statement, and it is what the reply was based on. Of course once a capture ends there will be some gap before the next one. What use is retriggering more capture when a capture is already occurring? (That goes back to needing additional hardware if you want to follow that path.) As already shown, it's relatively common to show multiple triggers within a single capture window/record.

You're talking about multiple things (unknowingly?) and trying to impossibly simplify it down.

Capture/wavewform/display lengths define an upper limit to the waveform update rate,
Yes, they do, with current implementations.  Now why must that be the case?  Explain in detail.  Feel free to point at external documents that answer the question.

I'm not being facetious here.
How is twisting well-understood techniques/methods into your own ill-defined and illogical thing helping? An update rate where contiguous sequences of data are painted overlaid on a screen cannot be faster than the data is arriving, and in the context of scopes updates do not happen faster than once per horizontal sweep time (although a pathological implementation could do so, it would be wildly costly). How can you put more information onto the screen than is arriving? There is always an upper limit.

But note that saying "design it yourself" is not the same as saying "here's why it can't be done".  You're asserting it can't be done, and so I ask why that's so.

Curing my ignorance is exactly why I brought this whole thing up in the first place.  I tossed the idea out in order to see what's wrong with it.  So if I'm ignoring something important, then by all means please enlighten me.  I can't learn from statements that say I'm ignorant.  I can learn from statements that show with specificity what I'm ignorant of.
So far you cannot even start to discuss your ideas or explain them coherently. What you are suggesting is unclear and seems to be inconsistent and changing. It's not on me to educate you to the level you desire. I've pointed out the holes in your thinking and you've apparently done nothing to go back and learn on your own.

If you want to challenge what's possible then you need to clearly lay out what you are actually trying to achieve, using the commonplace industry terms. Which you are still failing to do and then extending that into some grand "prove me wrong" bait.

Like the perpetual motion people.....
 

Offline kcbrownTopic starter

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
A capture can contain more than one trigger event: that was your statement, and it is what the reply was based on. Of course once a capture ends there will be some gap before the next one. What use is retriggering more capture when a capture is already occurring? (That goes back to needing additional hardware if you want to follow that path.) As already shown, it's relatively common to show multiple triggers within a single capture window/record.

You're talking about multiple things (unknowingly?) and trying to impossibly simplify it down.

Perhaps so.

It may be that I lack the proper terminology for what I'm trying to explain, or that I'm badly mangling the terminology I'm using.  My apologies for that.

Maybe it's simpler if I start from scratch, define all my terms, and use those terms to describe what I have in mind.  Doing this with sufficient rigor to satisfy you is going to take some time, so stay tuned.


 

Online tautech

  • Super Contributor
  • ***
  • Posts: 28913
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
I believe this is based on your experiences with SDS2000X Plus KC ?
Not all Siglent models manage memory and captures in the same way as they do....but can.

Eg, Zoom out of a capture is not a problem for some models.
Avid Rabid Hobbyist.
Siglent Youtube channel: https://www.youtube.com/@SiglentVideo/videos
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27357
  • Country: nl
    • NCT Developments
One thing you need to keep in mind is that history mode is often of limited use...

To you buddy. To you.
You are stuck in dealing with repetitive signals while overlooking the fact that when dealing with circuits containing a microcontroller or other digital chips, things happen in a sequence which can span many seconds. A very nifty feature of my Tektronix logic analyser is the ability to do segmented recording, but contrary to a DSO, it will show the segments on a single timeline (*) instead of overlapped on top of each other. I don't recall a DSO which can show the recorded segments on a single timeline. This would be extremely handy for dealing with sequential events. Keysight allowing protocol decoding results to be shown across all segments is the feature which comes closest to having segments on a single timeline that I have seen on a DSO. When using a DSO I typically use roll mode for looking at sequential events, but the sample rate becomes very low and details may be lost (which then need to be measured separately, thus taking more time).

* The user has a choice to show the dead-time between the segments or just stick the segments together. However, cursors will still show the correct relative time.
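The single-timeline idea described above can be sketched in a few lines (hypothetical data structures for illustration, not any vendor's API): each segment keeps its real trigger timestamp, the display stitches the segments together with dead time collapsed, and cursor readouts are mapped back to real acquisition time so relative-time measurements stay correct.

```python
# Sketch: segmented captures shown on one stitched timeline with dead time
# collapsed, while cursors still report true relative time.
# Hypothetical structures only, not a real scope API.

from dataclasses import dataclass, field

@dataclass
class Segment:
    t_trigger: float          # absolute time of the segment's first sample (s)
    dt: float                 # sample interval (s)
    samples: list = field(default_factory=list)

def display_to_real_time(segments, display_t):
    """Map a cursor position on the stitched (dead-time-free) timeline
    back to real acquisition time."""
    offset = 0.0
    for seg in segments:
        span = len(seg.samples) * seg.dt
        if display_t < offset + span:
            return seg.t_trigger + (display_t - offset)
        offset += span
    raise ValueError("cursor is beyond the last segment")

# Three 1 ms segments captured 100 ms apart:
segs = [Segment(t_trigger=0.1 * k, dt=1e-6, samples=[0] * 1000) for k in range(3)]

# Cursors placed mid-segment-0 and mid-segment-2 are only 2 ms apart
# on the stitched display, but ~200 ms apart in real acquisition time:
t1 = display_to_real_time(segs, 0.0005)
t2 = display_to_real_time(segs, 0.0025)
print(t2 - t1)  # ~0.2 s
```

The same mapping run in reverse would place the cursors on screen; whether the dead time is drawn or collapsed is then purely a display choice.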
« Last Edit: June 21, 2024, 09:55:56 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 6992
  • Country: hr
One thing you need to keep in mind is that history mode is often of limited use...

To you buddy. To you.
You are stuck in dealing with repetitive signals while overlooking the fact that when dealing with circuits containing a microcontroller or other digital chips, things happen in a sequence which can span many seconds. A very nifty feature of my Tektronix logic analyser is the ability to do segmented recording, but contrary to a DSO, it will show the segments on a single timeline (*) instead of overlaid on top of each other. I don't recall a DSO which can show the recorded segments on a single timeline. This would be extremely handy for dealing with sequential events. Keysight's ability to show protocol decoding results across all segments is the feature that comes closest to having segments on a single timeline that I have seen on a DSO. When using a DSO I typically use roll mode for looking at sequential events, but the sample rate becomes very low and details may be lost (which then need to be measured separately, taking more time).

* The user has a choice to show the dead-time between the segments or just stick the segments together. However, cursors will still show the correct relative time.

Me stuck? LOL, that's rich coming from you...
A scope is not a one-trick pony. It has to be useful for repetitive and non-repetitive signals, slow and fast ones, etc.

Showing segmented captures on a single timeline is just a display mode. If you have short packets coming in slowly, you get a huge blank-to-data ratio and a display that is practically useless. But that kind of display is possible with today's scopes: you show a packet, then a stretch of placeholder space saying "BLANK", then another packet, and so on. It might even be useful as a sort of overview window, like the overview window in zoom.
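The blank-to-data ratio is easy to put numbers on (the figures below are assumed examples, not from the thread): with short, infrequent packets, each packet shrinks to a fraction of a pixel on an uncollapsed single timeline.

```python
# Assumed example: 10 µs packets arriving every 100 ms, viewed on a
# single timeline spanning one full packet period.
packet_len = 10e-6        # seconds of real signal per packet
packet_period = 100e-3    # seconds between packet starts
screen_px = 1920          # horizontal display resolution

duty = packet_len / packet_period
print(f"signal occupies {duty:.4%} of the timeline")   # 0.0100%
print(f"pixels per packet: {screen_px * duty:.3f}")    # 0.192
```

At under a fifth of a pixel per packet, the display is useless unless the dead time is collapsed or the view is treated as an overview window, which is the point being made above.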

Decoding across the segments is something I said was useful before you did (I have both a Picoscope and a Keysight, unlike you), and it is not in the slightest connected with this discussion.

The truth is the OP sees one thing he does and would like a scope specifically tailored for that. Same as you.
So you sympathise with the OP.
Except you are reasonable: for you, a scope that has huge memory which can be fixed separately from the timebase, and that decodes from the whole memory regardless of timebase, is a winning combination.
The fact that I disagree that this is the only way to achieve that goal does not detract from the fact that what you want is not unreasonable; it is achievable with today's technology, and in fact there are scopes out there that can do it.

So I really don't understand what your comment has to do with the OP's "new way that a scope would work".
Read carefully what he proposes and you will see that it is either not a solution to anything or not possible; and as we start asking how to deal with real-life examples, his answers converge on the fact that he would like a scope with unlimited memory, segmentation, and zero retrigger time, that processes 160 GBytes/s in real time for the screen, math, measurements, masks, counter, etc.

And I say to you the same as I said to him: take a pencil and a piece of paper and start sketching HOW you would achieve that. Hint: you won't.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27357
  • Country: nl
    • NCT Developments
And I say to you the same as I said to him: take a pencil and a piece of paper and start sketching HOW you would achieve that. Hint: you won't.
The answer is actually in your face. Look at my avatar picture. That is a USB oscilloscope prototype with 1 Gpts/channel (assuming a big enough memory module is fitted) I designed about 20 years ago. Nowadays the amount of memory can easily be many times that using the same concept.

Once again, I'm not trying to take anything away. Just add more features. I don't understand how more features can ever be bad.
« Last Edit: June 21, 2024, 12:55:46 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online 2N3055

  • Super Contributor
  • ***
  • Posts: 6992
  • Country: hr
And I say to you the same as I said to him: take a pencil and a piece of paper and start sketching HOW you would achieve that. Hint: you won't.
The answer is actually in your face. Look at my avatar picture. That is a USB oscilloscope prototype with 1 Gpts/channel (assuming a big enough memory module is fitted) I designed about 20 years ago. Nowadays the amount of memory can easily be many times that using the same concept.

Once again, I'm not trying to take anything away. Just add more features. I don't understand how more features can ever be bad.

I know what you designed. It has nothing to do with what the OP is talking about. Nobody said your design wasn't good.

I said: try to sketch what HE proposes, not that you cannot design a classic scope of your own.
 

Offline kcbrownTopic starter

  • Frequent Contributor
  • **
  • Posts: 896
  • Country: us
I believe this is based on your experiences with the SDS2000X Plus, KC?
Not all Siglent models manage memory and captures in the same way as it does, but some can.

E.g., zooming out of a capture is not a problem for some models.

Right, I realize that the latest models allow for greater flexibility in the capture definition, thus allowing a zoom out.

Hopefully my next write-up on this will bring clarity to my thinking here.
 

