AFAIK the entire memory is decoded (which is what you want. Trust me!). Depending on your memory depth setting and sample rate there can be a lot of decoded data outside the screen. All in all, having decoded data off-screen is not really avoidable, as there will always be a part of the trace that isn't shown. You could use the fine time/div control to squeeze all the data and traces onto the screen, but then the decoder information may not be readable.
It all makes sense, thanks! I have understood my scope a bit more today.
I set out to figure out how much is off-screen. For example, at 10 MSa (max) record length:
* At 20 ms/div, the scope picks 41.7 MSa/s (2.5 GSa/s / 60), so the screen shows 41.7 MSa/s * 12 * 20 ms = 10 MSa - the screen shows 100% of the memory
* At 21 ms/div, the scope picks 31.2 MSa/s (2.5 GSa/s / 80), so the screen shows 31.2 MSa/s * 12 * 21 ms = 7.9 MSa - the screen shows 79% of the memory
The dividers from 2.5 GSa/s seem to be: 1, 2, 4, 8, 10, 16, 20, 40, 60, 80, 100, 120, 140... At the lower sample rates the screen covers 80-100% of the buffer. At the higher sample rates, where consecutive dividers differ by a factor of two, the variation is larger, ranging from 50% to 100%.
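The selection rule above can be sketched in a few lines of Python: pick the fastest divided rate whose full-screen capture still fits in memory. This is just my inference from the two data points, assuming a 12-division screen, the divider list I observed, and a 10 MSa depth; the real firmware logic may differ.

```python
# Inferred sample-rate selection: fastest rate such that a full screen
# (12 divisions) fits in the record. Assumptions, not firmware fact:
# 2.5 GSa/s base rate and the divider list observed above.
BASE_RATE = 2.5e9          # Sa/s
DIVIDERS = [1, 2, 4, 8, 10, 16, 20, 40, 60, 80, 100, 120, 140]
SCREEN_DIVS = 12
MEM_DEPTH = 10e6           # Sa (10 MSa record length)

def pick_rate(time_per_div):
    """Fastest divided rate whose full-screen capture fits in memory."""
    for d in DIVIDERS:
        rate = BASE_RATE / d
        # 1.000001 factor guards against float rounding on exact fits
        if rate * SCREEN_DIVS * time_per_div <= MEM_DEPTH * 1.000001:
            return rate
    return BASE_RATE / DIVIDERS[-1]

def on_screen_fraction(time_per_div):
    """Fraction of the record that lands on screen at this time/div."""
    rate = pick_rate(time_per_div)
    return rate * SCREEN_DIVS * time_per_div / MEM_DEPTH
```

Running this reproduces the two cases above: 20 ms/div selects 2.5 GSa/s / 60 with the screen covering the whole buffer, and 21 ms/div drops to /80 with about 79% coverage.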
This clearly explains the behavior in my earlier post.
Having tried a few avenues, it would seem that the best way to keep all data on screen, and to enable precise positioning of the start of decode vs. the trigger, is to set the "record length" in the History menu to sample rate * time/div * 12. That seems to work well enough.
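For the record, the workaround formula is just one multiplication; a hypothetical helper (the function name and the 12-division default are my own, not a scope command):

```python
# Record length that exactly fills the screen at the current sample rate,
# per the workaround above: sample_rate * time/div * screen divisions.
def record_length_for_full_screen(sample_rate, time_per_div, divs=12):
    # round() rather than int() so float rounding can't lose a sample
    return round(sample_rate * divs * time_per_div)

# e.g. 31.25 MSa/s at 21 ms/div -> 7,875,000 samples
```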
I wonder if there is a more direct way, e.g. asking the decoder to start at the trigger, a command to move the trigger to the start of the data (not the screen), or the ability to "zoom out" to see the extra off-screen info (zoom only zooms in).