...
A bit of experimentation later confirms your idea: the averaging builds linearly for the first n captures (displaying progress as it goes, which requires normalising), and once it overflows the average count it runs in IIR mode. Clearing the display readies it to count the next n acquisitions into the average. The UI has no way to stop automatically like it can with segmented captures, so it would be interesting to know whether the digitize command accurately/reliably stops after the final accumulation acquisition, rather than having to implement a burst limit on the DUT's triggers.
That was a good idea to check if the digitize command stops at the exact count. The answer is that it doesn't. Looking at the trigger out, the 3000X overshoots anywhere from about 50 to 200 waveforms, no matter what the number of averages is set to.
This was further confirmed by doing one capture of DC 1.000V, changing to 0.000V, and then letting the pulse generator free-run. With Averaging set to 128, the result should have been 1.000V / 128 = 7.8mV, but it was always less. An exact burst of 128 did produce the correct result of 7.8mV.
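A quick sketch shows why the overshoot drags the reading below 7.8mV, assuming the scope builds a straight average for the first N captures and then switches to IIR decay (this model is inferred from the bench measurements, not documented behaviour; `run_average` is an illustrative helper, not a scope command):

```python
# Model (assumption): straight average for the first n_avg captures,
# exponential IIR decay for every capture after that.
def run_average(captures, n_avg):
    avg = 0.0
    for i, x in enumerate(captures, start=1):
        if i <= n_avg:
            avg += (x - avg) / i        # building the average linearly
        else:
            avg += (x - avg) / n_avg    # IIR once n_avg is exceeded
    return avg

exact = run_average([1.0] + [0.0] * 127, 128)          # exact burst of 128
print(exact * 1000)                                     # ~7.8 mV, as measured

overshot = run_average([1.0] + [0.0] * (127 + 200), 128)  # 200 extra triggers
print(overshot * 1000)                                    # ~1.6 mV: always less
```

Every overshoot trigger multiplies the residual by (N-1)/N, so even 50 extra captures visibly pull the result under the expected 1/N value.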
So, the 3000X can keep up with the mmm22's averaging requirements, with the caveat that the result will contain *at least* 65536 samples. I could not find a command that would return the exact count, if that quantity is important to whatever is being measured.
Obviously this still needs to be confirmed on actual 1000X hardware. If the overshoot is not acceptable, some external electronics could be constructed to gate exactly 65536 triggers into the scope, or a different scope found that can stop at the exact count.
On a side note, while I had this set up, I looked at the algorithm they're using a bit more, and it confirms what we're both saying. Using the regular Run mode with Averaging set to 4, I captured one DC waveform at 1.000V, changed to 0.000V, and then triggered the pulse generator manually while measuring the average trace. Here are the readings:
  N   Avg (V)
---   -------
  1    1.000
  2    0.500
  3    0.333
  4    0.250
  5    0.188
  6    0.141
  7    0.105
  8    0.079
  9    0.059
 10    0.044
 11    0.033
 12    0.025
 13    0.019
 14    0.014
 15    0.011
The first 4 captures are clearly a straight average. But once the Averaging count is exceeded at N==5, the algorithm starts decaying the value at a rate of 3/4. I didn't play with it further, but it's no doubt:
NewAvg = NewPoint/4 + (3/4 * OldAvg)
Or more generally, where N is the Averaging setting:
NewAvg = NewPoint/N + ( (N-1)/N * OldAvg )
Which is a simple and common IIR. Any overshoot captures will drop into this mode. As long as mmm22's waveform is truly repetitive, it will have no effect on the result.
.. also noticed there is some added dead time on the triggers in averaging mode, plus a much longer pause at the LCD frame rate; only the first two captures run directly back to back without significant overhead.
..............Trigger...Trigger.....(processing averaging) ........ Trigger .....(processing averaging) ........ Trigger .....(processing averaging) ........ Trigger .....(much much much loooooonger delay during offloading to screen ) ........ Trigger .....(processing averaging) ........ Trigger ...... etc
Thanks - I thought something funny was going on there. It was skipping triggers long before I expected when going above 20ns/div.