It doesn't decode from the screen buffer. It decodes from sample memory, and in hardware at that.
But that statement is being used confusingly here.
Like 0xdeadbeef said, some scopes can be set up to show much less on screen while still sampling to the full extent of sample memory. So your screen shows 1 ms of data, but the scope actually kept sampling for 100 ms. It can decode all of it, and once stopped, you can browse the data, zoom in and out, and so on.
Other scopes sample only the interval needed for one screen scan, so they can show and decode only a screen's worth of data. In that case, if you need more, you don't force the memory to keep sampling "after" the screen; you change the timebase to "see" all the data of interest. The scope will try to keep the sample rate as high as possible, so it will increase how many samples it takes. If you have enough memory, you can "squish" 100 packets onto the screen to the point that you cannot recognize them visually, but they are all in sample memory in full detail, the scope will decode them all, and you can then use the zoom function to magnify a portion of the capture.
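To make the trade-off concrete, here is a minimal sketch (not any particular scope's firmware, and the max rate / memory depth numbers are hypothetical) of how such a scope picks its sample rate: it captures one screen's worth of time and keeps the rate as high as memory allows for the chosen timebase.

```python
MAX_SAMPLE_RATE = 1e9    # 1 GSa/s, assumed maximum ADC rate
MEMORY_DEPTH    = 10e6   # 10 Mpts, assumed sample memory
DIVISIONS       = 10     # horizontal divisions on screen

def capture_settings(time_per_div):
    """Return (sample_rate, samples_stored) for a given timebase setting."""
    window = time_per_div * DIVISIONS  # total time shown on screen
    # Highest rate that still fits the whole window into sample memory
    rate = min(MAX_SAMPLE_RATE, MEMORY_DEPTH / window)
    return rate, rate * window

# At 100 us/div the scope can run flat out: 1 GSa/s, 1 Mpts stored
print(capture_settings(100e-6))  # -> (1e9, 1e6)

# "Squish" the capture at 10 ms/div: the 100 ms window no longer fits memory
# at full rate, so the rate drops to 100 MSa/s, but every sample of that
# window is still in memory for zooming and decoding.
print(capture_settings(10e-3))   # -> (1e8, 1e7)
```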
To me that is the more intuitive, "scope-like" behaviour.
On the other hand, for data analysis you think in terms of sample rate and number of samples (which is more like the LeCroy philosophy, understandable considering their legacy).
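In that workflow the arithmetic runs the other way, as I understand it: you pick the sample rate and record length, and the captured time window simply falls out. Hypothetical numbers again:

```python
sample_rate = 500e6   # 500 MSa/s, chosen by the user
num_samples = 25e6    # 25 Mpts record length, chosen by the user
window = num_samples / sample_rate
print(f"captured window: {window * 1e3:.0f} ms")  # captured window: 50 ms
```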
I suggest we stop using the phrase "decode from screen" because it's confusing. It either means "decodes from data decimated for the screen buffer" (which is bad) or "this scope decodes only the time interval shown on screen", in which case you need to set the timebase so that the whole extent of data you are interested in fits on screen.