I have been doing some more testing. If I stop the scope (while in Normal triggering), the data at the bottom does indeed scale with the sample. (It does not do this while still running in Normal trigger mode after a single trigger, when it keeps displaying the same sample.)
If I stop the scope while the glitch is present, the glitch goes away, in the same way as if I change timebases or adjust the trigger position in either direction.
HOWEVER, the sample displayed is not correct. If you look at the first screenshot below, this sample is at 1ms; that is the true sample taken. The second screenshot (under exactly the same conditions, same sample, etc.) with a faster timebase of 500us is very different, not just twice as big as you would expect it to be, as per the third screenshot, which is the 1ms sample, stopped and with the timebase increased to 500us (so effectively the same view). These are all set at 1.4M memory, which is when the glitch occurs (it doesn't do this at 7M, etc.).
Also, I checked the YT vs ROLL setting, and I already have it set to YT. Perhaps this setting should "stick" regardless of the timebase setting, so the scope stays on the selected mode, (digital) triggering, and trigger input. Don't forget it also changes the trigger input from my configured D0 input to CH1 and jumps to Auto triggering at the same time. Even if I put the timebase back to 20ms, CH1 stays selected, as does Auto trigger (though it does turn digital back on). Basically, it fails to remember the set configuration and return to it properly; it goes to some kind of defaults, overriding what was set. Ideally it would be less smart here and let me do what I am trying to do, even if the screen will be slow to update. I tried setting YT at the slow timebase and then setting my trigger back to Normal and the trigger input back to D0, and it works fine (but slow, as expected).