For automated tests, we have an open-source Python API available on the software page that exposes (in principle) the same functionality as the main software. (We know an API isn't quite the same thing as an interface toolkit.) EDIT 2024-01-13: The serial interface is now documented in Section 4 of the manual.
Yes, the interface currently allows saving only one trace at a time. We will improve this feature to support multi-channel export by the end of the week.
Implemented as of 2024-01-11.
To clarify the point about showing all the data: samples are taken while the trigger-to-sample delay is increased incrementally from t_min (left edge of the screen) to t_max (right edge). The data on the screen is the only data that can reasonably be assumed to be up to date (within a couple of seconds). For example, if you unplug the signal source and then zoom out, the newly revealed data will be stale.
We choose to erase this potentially stale data. Keeping this data is an equally consistent option. Is this what you are proposing?
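As a rough sketch of the sweep described above (illustrative simulation only; `sequential_sweep` and the toy signal are hypothetical, not part of our Python API or firmware):

```python
# Illustrative simulation of a sequential-sampling sweep. Each screen point
# comes from a *different* trigger event, so a point is only as fresh as the
# last sweep that covered it -- data outside the swept window goes stale.
import math

def sequential_sweep(signal, t_min, t_max, n_points):
    """Sample `signal` at incrementally increasing trigger-to-sample delays."""
    step = (t_max - t_min) / (n_points - 1)
    return [signal(t_min + i * step) for i in range(n_points)]

# Example: sweep one period of a 100 MHz sine across 1000 screen points.
trace = sequential_sweep(lambda t: math.sin(2 * math.pi * 100e6 * t),
                         t_min=0.0, t_max=10e-9, n_points=1000)
```

Zooming out in this model just widens [t_min, t_max]; points that were not revisited since the source changed keep their old values, which is exactly the staleness trade-off discussed above.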
By "time marker," you mean a marker at T=0, correct?
EDIT: Implemented 2024-01-11.
The minimum trigger-to-sample delay is 11 ns. We recognize this disqualifies some measurement setups.
During development, we made prototypes with minimum trigger-to-sample delays of ~2 ns, but either at the expense of (1) worse time accuracy of ~3 ps RMS and thus worse ENOB, (2) significantly higher cost, or (3) exceeding USB 3 power budget. We chose lower cost, higher ENOB, and power-over-USB with 11 ns dead time.
(There isn't a good way around the delay itself - no sequential-sampling scope can view the edge it triggers on without a synchronized clock signal or an analog delay line.)
Low rep-rate signals:
There are three ways to speed up acquisition for low-rep-rate signals:
1. Decrease the timebase resolution (pts/div option in "Timebase").
2. Decrease the number of samples per CDF. [Default 30; not currently exposed in software.]
3. Decrease the number of triggers per CDF sample. [Default 4096; not currently exposed in software.]
The second option gives only a limited speedup, since you need >=10 samples per CDF to get a meaningful result.
The third option defaults to 4096 triggers per sample and decreases with increasing trigger holdoff to maintain roughly constant sweep rate.
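The holdoff scaling for the third option could look something like the following sketch. The inverse-proportional rule and the reference holdoff `BASE_HOLDOFF_S` are our illustrative assumptions here, not the firmware's actual constants; only the 4096 default comes from the text above:

```python
# Hedged sketch: scale triggers-per-sample inversely with trigger holdoff so
# that (triggers * holdoff), and thus time per CDF sample, stays roughly
# constant. BASE_HOLDOFF_S and the clamping are hypothetical.
DEFAULT_TRIGGERS = 4096   # default triggers per CDF sample (from the text)
BASE_HOLDOFF_S = 1e-6     # hypothetical reference holdoff
MIN_TRIGGERS = 1

def triggers_per_sample(holdoff_s):
    scaled = int(DEFAULT_TRIGGERS * BASE_HOLDOFF_S / holdoff_s)
    return max(MIN_TRIGGERS, min(DEFAULT_TRIGGERS, scaled))
```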
We will expose these options to the user (planned for the next update, by EOW; now implemented). Since they affect the reconstruction timescale [User Manual Sec 2.2] and other details, there are some subtleties we need to explain in the documentation.
(This is one of the things that we hope to demonstrate better in our planned videos).
An ignition system @ 1 ktrig/s would take 30 seconds per sweep for 1 kpts @ 10 samples/CDF @ 3 triggers/sample.
A source @ 120 ktrig/s would take 8 seconds per sweep for 1 kpts @ the default 30 samples/CDF @ 30 triggers/sample.
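The back-of-envelope numbers above follow from one formula (our reading of the acquisition model, not a documented API): a full sweep needs points x samples/CDF x triggers/sample trigger events, divided by the trigger rate.

```python
# Sweep-time arithmetic for the two examples above (illustrative helper,
# not part of our Python API).
def sweep_time_s(trig_rate_hz, n_points, samples_per_cdf, triggers_per_sample):
    """Total triggers for one sweep, divided by the trigger rate."""
    return n_points * samples_per_cdf * triggers_per_sample / trig_rate_hz

print(sweep_time_s(1e3, 1000, 10, 3))     # ignition system: 30.0 s
print(sweep_time_s(120e3, 1000, 30, 30))  # 120 ktrig/s source: 7.5 s (~8 s)
```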
(For your Tektronix pulse source, you would need to tap into a >11 ns pretrigger for CH1 and send the rising edge into CH2.)
Curious to hear your honest opinions on the above.
Edit re: mask testing. We currently do not have this implemented in software, but it is planned, and the firmware already has provisions for fast mask testing (i.e., direct counts of mask failures, which matters when you are trying to count rare events).
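To illustrate the counting semantics only (the planned firmware counts failures directly in hardware; the `(t_lo, t_hi, v_lo, v_hi)` forbidden-box format below is our hypothetical example, not the real mask format):

```python
# Host-side sketch of mask-failure counting. A mask is a list of forbidden
# boxes; a failure is any (time, voltage) sample landing inside one of them.
def count_mask_failures(trace, mask):
    """Count (t, v) samples that fall inside any forbidden box."""
    return sum(
        1
        for t, v in trace
        if any(t_lo <= t <= t_hi and v_lo <= v <= v_hi
               for t_lo, t_hi, v_lo, v_hi in mask)
    )
```

Keeping the result as a raw count (rather than a pass/fail flag) is what makes rare-event hunting practical: you can leave the test running and watch the counter.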
Edit 2: Rewrote the discussion on low rep rates.