I didn't say I want the segments to be independent of triggers, I said I want to remove the 1:1 mapping between the two. Quite obviously, a segment should contain at least one trigger event and should be generated as a consequence of at least one trigger event.
Then say that plainly, not in this multi-post wall of text. Oh wait, that's how scopes ALREADY work.
I did say that plainly:
That may be. But I'm not arguing that we should dispense with segments. I'm arguing that we should dispense with the 1:1 mapping between captures and trigger events, and that the display update rate should be defined by the time between trigger firings or, optionally, by the time width of the display (whichever is longer, if the display width is considered at all), even if the capture width is longer than both.
But, apparently, not plainly enough.
As a general rule, I presume that if someone doesn't get what I mean, then I'm not saying it properly, and that applies here. In any case, hopefully you get where I'm coming from now.
As for the claim that it's how scopes already work: if that's truly the case, then explain why, on every scope I'm aware of, the trigger doesn't rearm until after the capture completes, at least for the purposes of waveform updates and the trigger-out mechanism.
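To make the distinction concrete, here's a toy model (with made-up numbers; it describes no real instrument) contrasting the conventional policy, where the trigger stays dead until the full record is acquired, with what I'm proposing, where the display update rate is set by the trigger spacing or the display width, whichever is longer, even when the capture is longer than both:

```python
# Toy model: count display updates over a one-second run for two trigger
# policies. All figures are illustrative, not taken from any real scope.

def count_updates(trigger_period, capture_width, display_width,
                  policy, run_time=1.0):
    """Return how many display updates occur in run_time seconds.

    trigger_period: time between trigger events (a property of the signal)
    capture_width:  length of each acquired record
    display_width:  time span shown on screen
    policy: 'conventional' -> trigger rearms only after the capture completes
            'proposed'     -> update at every trigger, but no faster than the
                              display width allows
    """
    if policy == 'conventional':
        # One update per capture; the trigger is dead for the whole record.
        dead_time = max(trigger_period, capture_width)
    else:
        # Update rate limited by trigger spacing or display width, whichever
        # is longer -- the capture itself can be longer than both.
        dead_time = max(trigger_period, display_width)
    return round(run_time / dead_time)

# 1 ms between triggers, 100 ms capture, 10 ms shown on screen:
conventional = count_updates(1e-3, 100e-3, 10e-3, 'conventional')
proposed = count_updates(1e-3, 100e-3, 10e-3, 'proposed')
print(conventional, proposed)  # -> 10 vs 100 updates per second
```

Under these (invented) numbers, decoupling the update rate from the capture width buys a tenfold faster display, while the long record is still available for later inspection.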
"how hard can it be?"
Design your own scope and get back to us on that.
I suspected that would be coming next.
I would if I could. But note that saying "design it yourself" is not the same as saying "here's why it can't be done". You're asserting it can't be done, and so I ask why that's so.
Now, why hasn't it been done? That's a different question, and not one I'm asking. I understand that these things take development work, and may well simply be covering a corner case that isn't worth the R&D to address. On that, I can't say. All I can say is that I've run into situations in which the mechanism I describe would be useful, because it's more flexible (near as I can tell, at any rate) than the mechanisms that are currently in use.
Capture/waveform/display lengths define an upper limit to the waveform update rate,
Yes, they do, with current implementations. Now why must that be the case? Explain in detail. Feel free to point at external documents that answer the question.
I'm not being facetious here. If there are good engineering reasons for these limits being defined as they are, I'd like to know what they are. Because as far as I can tell, they're defined that way in large part because the original implementations that digital scopes were modeled on are analog scopes, for which that limit genuinely holds.
if you want to invent your own terminology and not explain it then don't get pissy when no-one else understands what you are trying to say.
What makes you believe I'm getting "pissy"? I'm not annoyed or anything of the sort. I'm simply explaining (or at least attempting to explain, badly it seems) an approach to the problem of acquisition and display that occurred to me, one that differs from the current approach and which would, it seems at least on the surface, retain the current capabilities while adding others that no implementation I'm aware of provides.
Because, like zoom out, it's some corner case with little practical value.
Zoom out is a corner case with little practical value???
Nctnico would likely disagree strongly with that.
If it's truly a corner case with little practical value, then explain why most scopes are implemented such that the capture width exceeds the display coverage, i.e. they make limited zooming out possible by default.
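For a sense of the numbers involved, here's some illustrative arithmetic (hypothetical but plausible-looking figures, not any particular scope's spec) showing how much zoom-out headroom falls out of capturing more than the screen shows:

```python
# Illustrative arithmetic: zoom-out headroom when the captured record is
# longer than the displayed span. All numbers below are made up.

sample_rate = 1e9          # 1 GSa/s
memory_depth = 10_000_000  # 10 Mpts of capture memory
divisions = 10             # horizontal divisions on screen
timebase = 10e-6           # 10 us/div

display_span = divisions * timebase        # seconds shown on screen
capture_span = memory_depth / sample_rate  # seconds actually captured
zoom_out_factor = capture_span / display_span

print(f"{display_span*1e6:.0f} us on screen, "
      f"{capture_span*1e3:.0f} ms captured, "
      f"{zoom_out_factor:.0f}x zoom-out headroom")
```

With these assumed figures, the scope is already quietly capturing a hundred times more time than it displays, which is exactly the "limited zoom out by default" being pointed at.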
If you think it is such a valuable tool, go out and fund its development. If you want some magical for purpose device then stop complaining and make it happen rather than asking why no-one else is giving it to you for free.
Again, I must reiterate that I am not complaining here! I'm simply proposing an alternate approach that on its face seems more flexible than the current one, and whose substantial disadvantages, if it has them, I'm not aware of. My lack of awareness of those disadvantages doesn't mean they're not there. I'm putting this whole thing out there precisely so that I can learn those disadvantages.
Without realising, you've asked for not one but several different things. Pretty much all of them would have some tradeoff in either requiring additional hardware or compromising some other aspect of operation. This is engineering with multiple complex interactions that you simply dismiss in your ignorance of them.
Curing my ignorance is exactly why I brought this whole thing up in the first place. I tossed the idea out in order to see what's wrong with it. So if I'm ignoring something important, then by all means please enlighten me. I can't learn from statements that merely say I'm ignorant. I can learn from statements that show, with specificity, what I'm ignorant of.
No, I don't do that because it has no purpose/value. These devices are made in small volumes for specialist markets, I'm not surprised they have compromises and aren't ruthlessly optimised to use 100% of everything in all situations.
I'm not surprised by that either. That doesn't mean there isn't room for improvement. Maybe what I suggest addresses only what amounts to a corner case or two. Maybe it's not so limited as all that. But I will say this: the oscilloscope is a general-purpose instrument. It wouldn't have the plethora of capabilities it has otherwise. Some number of its capabilities are there to address corner cases, no?
I don't mind holes being poked in what I'm suggesting here. I encourage it. It's why I raised it to begin with. And if someone takes it and runs with it and produces something more capable than what we currently have, then we'll all be better off for it. If not, then so be it.