So how about one of the proponents of peak detect describes a real case and a real signal which clearly demonstrates the need for peak detect and which I could try to replicate?
I'm not a proponent of peak detect. I just shared my experience (not opinion) that it is sometimes useful and good to have.
This is a bit like the old academic fight about the GOTO statement. It was a religious-like dispute that completely obscured the fact that the statement wasn't evil per se, it was just misused.
Same here. I fail to see why you fight against something other people find useful. What is your reason for it?
Academic purity of the measurement process? The fact that Wuerstchenhund says so, so it must be true? The fact that you do only a limited scope of work with oscilloscopes, and in your dealings you've never had a need for it?
As I said before, sometimes you have to work at a long timescale because that is what the device does. Like in my example of the blinking light: it blinks on a scale of seconds, but is driven with kHz-scale PWM, powered by a 1 MHz switcher, the CPU runs at 8 MHz, and there is a radio module at 868 MHz. And there were visible artefacts in the light at seemingly random intervals. The device is handed to you ready-made, with no support from the original developer.
So what are you looking at here? What trigger do you use? How do you correlate things? How do you do it quickest and most efficiently? What is a normal and an abnormal signal here? Before anything else you first have to make sense of what is going on, get a feel for it.
You don't need purity of measurement all the time. Sometimes it's just about looking at things...
And peak detect is actually what you would see on an analog CRT scope. A CRT scope would retain the full vertical deflection of the incoming signal, regardless of timebase.
Once the timebase was such that the trace width was larger than the signal width at that scale, you no longer knew what was in there. Was it a triangle, a sine, or a whole SPI packet? No, it was just a vertical line with the correct P-P value. But you knew something was there.
Whether you grab something with 128 Mpoints or 4 Mpoints, it will still be mapped to 800-2000 horizontal pixels (if you're lucky). So if you set a timebase where one pixel width corresponds to, say, 1 us, anything happening in that 1 us becomes a simple vertical line. On both of them.
And then you want to see what's in there. You can now use zoom and, while still capturing the whole 2 seconds, zoom in to that 1 us. On a short-memory scope you will see garbage. On a long-memory scope you will see it all. Beautiful. But if the signal is repetitive, you can just shorten the timebase and move around with trigger delay, like we did for 50 years, and you will still see the same thing. And once the scope starts sampling at full speed it will automatically disable peak detect and the signal will be pristine.
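That per-pixel collapse (and why peak detect still shows the glitch) is easy to sketch. This is only a toy illustration with made-up numbers, not any real scope's firmware: plain decimation keeps one sample per pixel column, while peak detect keeps the min and max of every column, so a narrow spike survives as a vertical line:

```python
# Toy illustration (hypothetical numbers, not any scope's actual algorithm):
# mapping a long sample record onto a few screen pixels.

def decimate(samples, pixels):
    """Plain decimation: keep one sample per pixel column."""
    step = len(samples) // pixels
    return [samples[i * step] for i in range(pixels)]

def peak_detect(samples, pixels):
    """Peak detect: keep (min, max) of every pixel column."""
    step = len(samples) // pixels
    return [(min(samples[i * step:(i + 1) * step]),
             max(samples[i * step:(i + 1) * step]))
            for i in range(pixels)]

# Flat 0 V trace with one narrow 5 V spike somewhere in the record.
samples = [0.0] * 10000
samples[4321] = 5.0

plain = decimate(samples, 100)
peaks = peak_detect(samples, 100)

print(max(plain))                    # spike falls between kept samples: lost
print(max(hi for lo, hi in peaks))   # one column spans 0 V to 5 V: visible
```

With peak detect you can't tell what the spike's shape was, just like on the CRT, but you know it's there and you know its P-P value.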
In practical interactive work there is no difference most of the time. The only thing you will see is that the shorter-memory scope will, in general, be snappier and feel better. Not a show stopper, but some people will find that important. I don't find it that important (I guess it has to do with each individual's patience), but it is nice.
The only real difference is when you have to capture a signal in its entirety, with full detail, for further analysis, to share it with somebody, for documentation, etc.
In which case you need:
1. Long memory, to capture the whole event at a sample rate good enough to capture everything of interest.
2. Samples that are not reshaped in any way, so you can apply any kind of signal transformation to a mathematically correct signal. No filters, no peak detect, no high-res...
And that is why long-memory scopes exist. That is the bread and butter of LeCroy, Picoscope, etc.
Depending on what you do, you might do interactive work (like servicing equipment) all the time. Or you might be a scientist who just samples and analyses all the time, with no interactive scope work at all.
Or, like me (and Nico and many others), you have maybe 80% interactive work and 20% of the time when you need a long-memory scope.
The funny thing is that you're arguing with us, but both Nctnico and myself (if you care to look around the eevblog) are great proponents of long-memory scopes, and have pointed that out to many members.
To explain: I have a fully loaded Keysight MSOX3104T, a Picoscope 3406D MSO with 512 Mpoints, and a Picoscope 4262 (a 16-bit low-noise instrument). Each can do things the other two cannot.
But if I had to keep only one, it would be the Keysight. Despite its (comparatively) small memory, it is the most useful in my everyday work. Even if I have to use peak detect sometimes.
In summary:
1. Different uses call for different tools. There is no "one scope to rule them all". If you do really varied work, chances are you will need different scopes. I have 3 at the moment.
2. Peak detect is a workaround for acquisition memory that is not deep enough to keep the sample rate above the Nyquist criterion at longer timebases.
3. If you get to the point where your scope undersamples, peak detect is a more accurate representation of the signal shape on screen than without it, where you basically get random data. Actually, it is a perfect visual representation of how the signal looks.
4. Peak detect is not a good mathematical representation of the sampled data. It shouldn't be used if you want to take the data and analyse it mathematically afterwards.
5. If your scope doesn't have peak detect and you undersample, your data will be wrong both mathematically and visually.
6. If you can choose between a scope with long memory and one with short memory, everything else being equal, by all means take the one with long memory. It's better. Will you use it all the time? No.
7. If you have more than 2-4 Mpoints, most of the time you won't undersample in everyday work. In everyday work we look at the detail, not the whole thing.
8. If you need to capture data and "massage it" later on, a high sample rate and long memory will give you more to work with. And not only is there no need for peak detect then, you can't use it even if you want to.
9. Peak detect gets automatically disabled when the scope starts sampling at its full rate. It is basically undersampling protection: while not perfect, it will sometimes prevent you from missing something in a signal.
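The memory/Nyquist point above is just arithmetic. A rough sketch, with hypothetical scope numbers (a typical screen is 10 horizontal divisions, so the capture window is 10x the timebase setting):

```python
# Rough sketch with made-up but typical numbers: how memory depth caps the
# sample rate at long timebases. Once memory_points / window drops below
# the scope's maximum rate, it has to slow down and you risk undersampling.

def effective_sample_rate(memory_points, timebase_per_div, max_rate, divisions=10):
    window = timebase_per_div * divisions          # total capture window, seconds
    return min(max_rate, memory_points / window)   # rate drops when memory runs out

# 4 Mpoint scope with a 5 GSa/s ADC at 200 ms/div (2 s on screen):
print(effective_sample_rate(4e6, 200e-3, 5e9))     # only 2 MSa/s now

# 512 Mpoint scope, same ADC, same timebase:
print(effective_sample_rate(512e6, 200e-3, 5e9))   # 256 MSa/s, plenty of margin
```

At 2 MSa/s, anything above 1 MHz in that blinking-light example (the switcher, the CPU clock) is undersampled, and that is exactly where peak detect earns its keep on the smaller scope.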
So I do like long memory. I also like peak detect. It's just another tool, and it can be useful sometimes. Generally, you should look at what you do and find a scope that fits that workflow.
Is it decoding? Do you need history/segmented memory? Maximum bandwidth? Do you need advanced math, jitter analysis? Do you create a lot of documentation? And so on.
There are many fantastic older scopes. You should choose them by build quality (they are already old; you want to squeeze a few more years out of them before they die) and ease of repair.
You should also choose them by their feature set and bandwidth. Peak detect is simply one of the tools, and in my experience there is never too much tooling or too much help. Just make sure you know how to use it and apply it properly.