In the name of testing methods, I’ll try to find not just one, but all of the glitching input values in an automated way.
How to?
For every input value, the output voltage needs to be compared against the output voltage of the succeeding input value.
This means checking 65K values. The measuring is done with an 8-bit/256-level DSO, so it has to measure in many batches. In each batch a different “input value band” and “output value band” can be measured. At the input things are controlled, but at the output we have to make a good guess about the matching measurement window, which also means having some margin.
We need to see 1 mV changes, so 10 mV/div is a decent sensitivity to measure glitches. It won’t be sensitive enough to see each individual AWG step, but that’s OK.
With 256 values over 10 divisions, that would give a window of 102 mV in which we should catch the output. But it’s better to take two margins of 20 mV and use a window of 60 mV, which means at least 17 windows to see the whole 1 V ramp.
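The window bookkeeping above can be spelled out in a few lines. This is a sketch in Python (standing in for my script language); the numbers are the ones from the text.

```python
# Window arithmetic for covering the 1 V ramp in 60 mV slices.
FULL_SCALE_MV = 102   # 256 ADC codes across the screen at 10 mV/div
MARGIN_MV = 20        # guess-margin on each side of the window
RAMP_MV = 1000        # the full 1 V ramp to cover

usable_mv = FULL_SCALE_MV - 2 * MARGIN_MV   # 62 mV, rounded down to 60
window_mv = 60
n_windows = -(-RAMP_MV // window_mv)        # ceiling division

print(usable_mv, window_mv, n_windows)      # → 62 60 17
```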
In each batch we need to identify the output of each input. This can be done by estimating the time location of each sample, which I think can be done with good accuracy if we can identify the start, and thus the end, of each cycle. One way is to start the wave with a minimum/maximum value pair: in the output it is easy to scan for those, and everything in between is where the input values are mapped.
For each input value, I’d like to have at least 128 output samples, using the middle 64 samples to calculate a time-based mean (the skipped samples serve as a margin). The DSO delivers 70 kpts, which would mean 540 blocks of 128. But that’s in the case of having exactly one cycle in view, which is not doable. I guess 500 blocks of 128 gives a safe margin to have one full cycle in view.
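The per-value averaging could look like the minimal sketch below: for every 128-sample block, skip 32 samples on each side and average the middle 64 (the skipped samples being the margin mentioned above).

```python
def block_means(samples, block=128, keep=64):
    """Mean of the middle `keep` samples of each `block`-sample chunk."""
    skip = (block - keep) // 2
    means = []
    for start in range(0, len(samples) - block + 1, block):
        mid = samples[start + skip : start + skip + keep]
        means.append(sum(mid) / keep)
    return means

# two toy blocks of 128 samples each
data = [1.0] * 128 + [2.0] * 128
print(block_means(data))  # → [1.0, 2.0]
```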
Given that there are 65K values, we need 132 batches. (In every batch the last value of the previous one will be the starting one.)
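The batch count follows from each batch adding 499 new values (the first value repeats the last one of the previous batch). Spelled out, under that reading:

```python
# Batch arithmetic: 65536 values, 500 per batch, one value overlapping.
TOTAL_VALUES = 65536
VALUES_PER_BATCH = 500
new_per_batch = VALUES_PER_BATCH - 1            # first value is a repeat
batches = -(-(TOTAL_VALUES - 1) // new_per_batch)  # ceiling division

print(batches)  # → 132
```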
Because this is much more than the 17 windows we needed to resolve enough detail, I think we can turn the Vdiv down to 5 mV/div.
With 16K of input samples, we need to produce 500 output values, which means 32 samples for each input value. Because this leaves 384 samples, it’s good to have both a start and a stop “signal” to be used for the (time-based) mapping. The first block will also be duplicated, so that when measuring the second there will be no ringing in it.
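One possible layout of a batch’s input wave, as a sketch under my own assumptions: the marker codes (0 and 65535) and the two-sample marker length are illustrative, not the actual script’s choices.

```python
SAMPLES_PER_VALUE = 32
VALUES_PER_BATCH = 500

def build_wave(values, start_marker=(0, 65535), stop_marker=(65535, 0)):
    """Lay out one batch: start marker, duplicated first block,
    500 value blocks of 32 samples each, stop marker."""
    assert len(values) == VALUES_PER_BATCH
    wave = list(start_marker)
    wave += [values[0]] * SAMPLES_PER_VALUE   # duplicate absorbs ringing
    for v in values:
        wave += [v] * SAMPLES_PER_VALUE
    wave += list(stop_marker)
    return wave

wave = build_wave(list(range(500)))
print(len(wave))  # 2 + 32 + 500*32 + 2 = 16036, fits in 16K samples
```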
It would be nice if the whole test ran in less than 4 hrs, which gives about 109 secs per batch. In that time the following has to happen:
* input wave constructed
* wave played and triggered
* 378 segments will be downloaded and averaged
* the cycle needs to be identified in the output
* the 500 x 128 value blocks need to be averaged (64 values in the middle)
I’ll be running the script through my DIY script interpreter: fast, but certainly not compiled-like speeds.
For me this exercise means discovering some new techniques, which is a good motivator. Getting more information on the glitches, not so much, now that I’m starting to understand that it is not really possible to work around them.
Edit:
In the script I won’t do the actual comparing. Outputting the input value, the output voltage, and the voltage difference to the succeeding value gives some extra information that can be analyzed.
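The per-value output rows could be sketched like this (again Python as a stand-in; the column layout is my assumption):

```python
def emit_rows(codes, voltages):
    """For each input code, pair its measured voltage with the
    difference to the succeeding value's voltage."""
    rows = []
    for i in range(len(codes) - 1):
        delta = voltages[i + 1] - voltages[i]
        rows.append((codes[i], voltages[i], delta))
    return rows

# toy data: code 11→12 shows a negative step, a glitch candidate
rows = emit_rows([10, 11, 12], [0.100, 0.101, 0.099])
for code, volt, delta in rows:
    print(f"{code}\t{volt:.3f}\t{delta:+.3f}")
```

Scanning the delta column offline for negative (non-monotonic) steps then does the comparing that the script itself skips.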