So, we found the root cause of the issue.
First, we misread your reply #477 and thought that the commands themselves would cause a lockup. We can confirm that running the commands, in conjunction with our software, reproduces the behavior you're seeing. Sorry about that.
The main problem is that one of your commands sets the CDF tolerance (% command) to 0.000007, and our software does not reset it to the default (0.01) on startup. We will add that reset in the next revision. This solves the "first lockup mode" you're describing, where the software connects but does not trigger.
The reason this causes a lockup is that the timeout for the R command is scaled based on the CDF tolerance (the expected amount of time needed to reach the specified tolerance), and we did not put a cap on that timeout.
In the v15 firmware we will remove this automatic scaling and instead allow the user to specify a maximum timeout manually, in seconds.
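To make the failure mode concrete, here is a rough sketch of the scaling in Python. Everything here is illustrative: the function name, the samples-per-tolerance relationship, and the constants are our stand-ins for discussion, not the actual firmware code.

```python
def r_command_timeout(cdf_tolerance, trigger_rate_hz, user_max_s=None):
    """Illustrative model of the R-command timeout.

    Assumption (not the real firmware math): reaching a tighter tolerance
    needs more samples, roughly samples ~ 1 / tolerance^2, since the
    statistical error falls as 1/sqrt(N).
    """
    samples_needed = 1.0 / cdf_tolerance ** 2
    est_s = samples_needed / trigger_rate_hz
    if user_max_s is not None:
        # Planned v15 behavior: honor a user-specified cap, in seconds.
        return min(est_s, user_max_s)
    # Pre-v15 behavior: no cap, so a tolerance of 0.000007 yields an
    # enormous timeout and the scope appears locked up.
    return est_s
```

With the default tolerance of 0.01 the estimate stays small, but at 0.000007 the uncapped value explodes, which is the lockup you observed.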
This change will solve the "second lockup mode" to some degree, but not completely. As an example, at very low trigger rates (100 Hz), each R command may need 5 seconds to gather enough data. It's possible to queue up hundreds of these read commands in the serial buffer (100 queued reads at 5 s each is already over 8 minutes), effectively "locking up" the scope.
This behavior is technically "as designed," but the end result is not desirable. We could of course put a hard cap on the timeout, or decrease the buffer size, but this artificially limits the capability of the scope, and does not completely get rid of the issue. Another idea is to limit the total "queued time" of buffered commands, but the time taken is data-dependent and not predictable. If you have a preferred way to solve this issue, let us know.
On another note, this behavior is unchanged from v13. We did not catch it earlier because our tests did not try fuzzing in conjunction with running the software. We will make this mode part of our testing in the future. The necessary conditions are for the CDF tolerance to be very low (<10 ppm) and the max samples to be very high (>3 million), which is difficult to hit with random bytes.
We did check the firmware for buffer overflow issues and did not find any. Commands that would overflow the buffer are dropped, but should not cause a lockup.
Not that it helps, but on the graphing, LabView allows you to have multiple markers per axis on a graph. The horizontal would normally be 0 - 10 for example, but I can have a second scale of say 10n - 10.5n on that same axis. I can also manually scale the graphs. So, I send the data to the graph and set the min and max horizontal to a half sample from the actual min/max. I then set the second scale to what the actual time is. There's no math or anything to track. It's very clean to do it this way, and I don't have a dead spot on the graph, which it sounds like you will have based on your description. If I tell the scope to sweep from 10-11, I am not expecting the graph to be from 9-12 with data only showing up between 10 and 11. I expect it to start at 10 and stop at 11.
We've attached a screenshot of the updated scheme - this should be identical to your proposal in #478.