No, as I keep repeating, the scope can already do that in a superior way, with no redundant (overlapping) data.
Sure, it can. But there are downsides: first, the waveform update rate is now limited by the time width of the capture; second, you now need two independent trigger-identification implementations in order to see all of the trigger events, one of which (search) only has access to decimated data and thus could easily fail to see trigger events that were actually present. The need for two completely different trigger-detection implementations is, in part, why search covers only a small subset of the trigger types: it takes that much additional effort to implement. The only real upside to having a trigger-event search is that it lets you look for trigger events whose characteristics differ from those of the trigger setup that produced the capture.
You capture the lot, the full 100 ms of it, and use search to find all of the points that you would usually trigger on.
That's not necessarily good enough. Search only has access to decimated data, while the trigger mechanism has access to the full sample rate of the scope at all times. The two are not comparable except by chance.
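To make the decimation point concrete, here is a small sketch (all names are mine, not any scope vendor's API) of why a search pass over naively decimated data can miss events that a full-rate trigger comparator would catch:

```python
# Hypothetical sketch: a 1-sample-wide glitch is visible to an
# edge detector running at the full sample rate, but invisible to
# the same detector running on decimated (every-Nth-sample) data.

def full_rate_edges(samples, threshold):
    """Indices of rising-edge crossings at the full sample rate."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] < threshold <= samples[i]]

def decimated_edges(samples, threshold, factor):
    """Same search, but on naively decimated data."""
    dec = samples[::factor]
    return [i for i in range(1, len(dec))
            if dec[i - 1] < threshold <= dec[i]]

# A flat 0 V baseline with one single-sample glitch at index 5.
trace = [0.0] * 20
trace[5] = 1.0

print(full_rate_edges(trace, 0.5))      # full-rate detector sees it: [5]
print(decimated_edges(trace, 0.5, 4))   # decimate-by-4 search misses it: []
```

Real scopes often use peak-detect decimation for display, which mitigates but does not eliminate this; the point stands that a search over reduced data is not equivalent to the hardware trigger path.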
Also, I nowhere mentioned trigger delay. If you fit the whole event on the screen, by using the proper timebase for it, you can set the trigger point relative to where in the capture buffer it will be. So on a trigger event you get 20 ms of data, 10 before and 10 after the trigger. It is all visible on the screen and clearly defined. No need for mental acrobatics...
Nothing prevents that from remaining the case. In the implementation I outlined, the trigger point, by its position, guarantees the minimum amount of data that will be captured before and after any given trigger point. It still has the meaning one would expect.
The difference is that the implementation I outlined ensures that the scope will save all of the trigger events that the trigger system is capable of seeing, irrespective of your timebase setting. Save for the additional memory that the pointers would need (and that could be considerable), I see no downside to this in principle.
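For clarity, a minimal sketch of the structure I have in mind (all names hypothetical): one contiguous capture buffer plus a list of trigger-event pointers into it, so every event the trigger system saw is retained regardless of timebase, and the pre/post guarantee around each event is preserved:

```python
# Sketch of "one capture, many trigger pointers": the pointers are the
# only extra memory; each one indexes a trigger event inside the buffer.

from dataclasses import dataclass, field

@dataclass
class Capture:
    samples: list                 # contiguous sample data
    sample_period: float          # seconds per sample
    trigger_indices: list = field(default_factory=list)  # one pointer per event

    def add_trigger(self, index):
        self.trigger_indices.append(index)

    def window(self, event, pre_s, post_s):
        """Samples guaranteed around a given trigger event."""
        i = self.trigger_indices[event]
        pre = int(pre_s / self.sample_period)
        post = int(post_s / self.sample_period)
        return self.samples[max(0, i - pre): i + post]

cap = Capture(samples=[0.0] * 1000, sample_period=1e-6)  # 1 MS/s, 1 ms buffer
cap.add_trigger(250)
cap.add_trigger(400)   # only 150 us later: closer than one screen width
print(len(cap.window(0, pre_s=100e-6, post_s=100e-6)))   # 200 samples
```

The point of the example: both events live in the same capture, closer together than a screen width, yet neither is lost and each still has its guaranteed pre/post window.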
I mentioned the trigger holdoff because right now, the trigger re-arming mechanism has an implied holdoff based on the time width of the screen, and my proposed implementation does away with that.
You have to decouple in your mind the sampled-data buffer and the viewport (zoom). Once you have the buffer, you can zoom in and traverse anywhere in it.
I do decouple that. In the implementation I outlined, a capture is just a contiguous series of samples that are bundled into a single entity. Its beginning is defined by the left edge of the screen and the trigger point. Its end is defined by the right edge of the screen, relative to the first trigger event in the series whose time delta to the subsequent trigger point exceeds the amount of time between the trigger point definition and the right edge of the screen. The capture is an internal representation.
For presentation purposes, the manufacturer would have a couple of options. He could arrange things so that the history always shows the same size segments as it does right now, and moving to the next history entry just has the effect of showing that subset of the capture that the screen defined at the time of capture. The only difference between the current implementation and this is that, for the implementation under discussion, the time delta between two subsequent history events could be much smaller than the screen time width as it was at the time of capture, while the current implementation imposes the screen time width as the minimum time between history events.
Or the manufacturer could (and, given the data the system would make available, I would argue should) make moving to a new history frame show you the portion of the waveform on the screen relative to that trigger event in the history frame; but instead of limiting the data in the frame to what was represented by the screen's time width at the time of capture, it would allow you to see the entire capture if you zoom out enough. Which is to say, if the capture itself occupies more time than the screen's time width at the time of capture did, then you should be able to see everything in it, or zoom in to any part of it, as you choose. Your position in the history list would then be dictated by which capture you're looking at and where your view of the capture is relative to a history event, and you should be able to move between history events (really, recorded trigger events) either by moving within the list or by moving within the capture, as you see fit.
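In other words (a sketch with invented names, not any vendor's implementation), each history entry would be a (capture, trigger event) pair, so stepping through history can move between trigger events inside one capture as well as across captures:

```python
# Each capture carries a list of trigger-event indices; flattening those
# into (capture_id, trigger_index) pairs gives a history list with one
# entry per trigger event, not one entry per capture.

def build_history(captures):
    """Flatten per-capture trigger lists into one navigable history."""
    history = []
    for cap_id, trigger_indices in enumerate(captures):
        for trig in trigger_indices:
            history.append((cap_id, trig))
    return history

# Two captures: the first holds three trigger events, the second one.
captures = [[250, 400, 900], [120]]
history = build_history(captures)
print(history)        # [(0, 250), (0, 400), (0, 900), (1, 120)]
print(len(history))   # 4 entries, not 2: more events than captures
```

Navigation forward/back is then just an index into this list, and landing on an entry pans the viewport of the corresponding capture to that trigger event.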
Also, scopes do all kinds of tricks managing memory, both in hardware and in software, and across separate buffers, not necessarily in the same memory space or even the same physical chips. To keep it deterministic and as fast as possible, not every kind of buffoonery is realistically possible.
Oh, I'm quite well aware of that. I don't claim that what I'm proposing here is possible without faster hardware. On that, I simply can't say.
Zoom mode should be designed to be as efficient and ergonomic as possible. Maybe a bit more control of screen use.
History mode should be made to better retain useful data.
Maybe some data should not be retained in history: the user might not want to keep old data after changing settings, because that makes old captures no longer comparable with new ones. Or it could be left to the user to choose.
I completely agree.
Enhance history/segmented-buffer handling and analysis.
Enhance search.
Etc etc.
No need to invent arcane contraptions to fix usability problems stemming from some features not being fully or most elegantly implemented. The most logical, easiest correct way is to fix/upgrade those to the full potential of the architecture. Otherwise you end up with a spaghetti monster that is confusing and does nothing right.
You'd certainly need to keep things simple and understandable for the user, no question about that. I don't see why that isn't possible while doing what I described.
Remember: the purpose of what I described is to make it possible to see trigger events that currently go completely unnoticed even though they were present,
without sacrificing capture length in the process. Put another way, trigger events and capture length should be
decoupled, and right now they're not.