So now we have to start carefully distinguishing between memory depth and record length. Marketing will of course advertise the former rather than the latter, because it will be the larger number.
How does this limitation make sense for performance when the same amount of data, or even less, would be processed? What requires more processing when a smaller part of the acquisition record is displayed and the sample rate hasn't changed?
It has always been record length.
The Rigol DS1000Z, for instance, has 64MB of DDR(2/3) memory tied to the FPGA. In normal acquisition, only 24Mpt of that memory is available in any configuration (1ch/2ch/4ch), with the memory divided between the enabled channels. The full memory length only becomes available when segmented memory is selected, spread across the individual segments.
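To put numbers on that split, here is a small Python sketch. The even division across enabled channels and the 24Mpt figure follow the description above; the exact allocation policy is my assumption, not something taken from Rigol documentation.

```python
# Hypothetical sketch of the split described above: an even division of the
# 24 Mpt normal-acquisition budget across enabled channels. The exact policy
# is an assumption, not taken from Rigol documentation.
def points_per_channel(enabled_channels: int, total_points: int = 24_000_000) -> int:
    """Record length available to each enabled channel."""
    if enabled_channels not in (1, 2, 4):
        raise ValueError("DS1000Z groups channels as 1, 2 or 4")
    return total_points // enabled_channels

for ch in (1, 2, 4):
    print(f"{ch} ch enabled -> {points_per_channel(ch) / 1e6:.0f} Mpt per channel")
```

That gives 24Mpt, 12Mpt and 6Mpt per channel for 1, 2 and 4 channels respectively, which is the kind of division described above.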
I don't understand why they made this choice, but it might be because they want two memory buffers available (plus a scratchpad for trigger correction or other management) so they can alternately stream one or the other to the processor. Or it might simply be a marketing decision: don't compete with their higher-end models by offering the full memory space in normal acquisition.
On the Siglent, the full record length is available by increasing the timebase: Siglent have decided that the true acquisition length should always set the memory depth. This appears to be a technical rather than a marketing decision, and doesn't seem to stem from any hardware or software limitation. If you want to capture the full 200Mpt, simply zoom the instrument out, capture the data, and zoom back in. If the instrument were continuously capturing 200Mpt, it would need to acquire and potentially process all of those samples while displaying only 0.01% of the actual acquisition, which would reduce the acquisition rate considerably.
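Here is a rough Python sketch of how that policy plays out: the acquisition window set by the timebase decides how much memory is actually used. The 2GSa/s rate, the fixed-rate assumption and the 10-division screen are my own choices for illustration, picked only so the numbers line up with the 200Mpt / 0.01% example above, and are not Siglent specifications.

```python
# Rough model (my own, not Siglent's firmware) of "the acquisition length
# sets the memory depth": record length = sample rate x on-screen window,
# capped at the maximum depth. A real scope would instead lower the sample
# rate once the window exceeds the memory; the cap is a simplification.
MAX_DEPTH = 200e6       # points, the full memory discussed above
SAMPLE_RATE = 2e9       # Sa/s, assumed fixed for illustration
DIVISIONS = 10          # horizontal divisions on screen

def record_length(time_per_div: float) -> float:
    """Points actually acquired for a given timebase setting."""
    window = time_per_div * DIVISIONS
    return min(MAX_DEPTH, SAMPLE_RATE * window)

for tb in (1e-6, 1e-3, 10e-3):           # 1 us/div, 1 ms/div, 10 ms/div
    pts = record_length(tb)
    print(f"{tb * 1e3:6.3f} ms/div -> {pts / 1e6:7.2f} Mpt "
          f"({pts / MAX_DEPTH:.2%} of maximum depth)")
```

At 1 us/div only about 20kpt of the 200Mpt (0.01%) would be on screen, while at 10 ms/div the full depth is used, which is why zooming out before the capture gets you the whole record.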
I've noticed a "Tek Mode" buried in the utility/settings page of the SDS5000X; perhaps that would change this behaviour?