Runts and display persistence don't enter into what I'm observing. It's a distinct delay between my hand turning a tuning core and what is displayed on the DSO screen. The analogue scope appears to be much more real-time than the DSO.
As I mentioned earlier, I'm happy to attribute this to my DSO being of peasant specification, but I would be interested in seeing how this has evolved through the generations of DSOs.
This problem definitely exists even on some high-end DSOs. Waveform acquisition rates have gone up, which is good, but the latency from acquisition to display and from user interface to display can be very noticeable. On the interface side, the modern low- and high-end DSOs I have evaluated are little or no better than my ancient real-time DSOs except in acquisition rate.
DSOs will never achieve the display and interface latency that analog vector CRT oscilloscopes had, but for all practical purposes they should be able to get close enough, and they can certainly perform better than they currently do. Latency includes the following (see the rough budget sketch after the list):
1. Waveform record fill time - the acquisition cannot be displayed until the record is filled, and the fill time is record length divided by sample rate, so use a short record length to minimize this. Early DPO-style DSOs wrote directly to the display memory and effectively processed acquisitions in real time, so they would not suffer from this. I do not know if any modern ones operate this way.
2. Trigger rearm time - some DSOs take a long time to rearm their trigger.
3. Processing time - the waveform record has to be transferred and processed for display.
4. User interface latency - changing the operating parameters may incur considerable latency. This seems to have gotten a lot worse with oscilloscopes that use embedded desktop operating systems.
5. Display refresh time - not much can be done about this, but 50 or 60 Hz should be plenty fast. Some LCD panels have an input-to-display latency of more than one frame, increasing their latency beyond what the refresh rate implies, but I do not know whether such panels are used in embedded systems.
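To put rough numbers on the items above, here is a minimal back-of-the-envelope sketch in Python. The figures (1 GSa/s sample rate, 1 ms rearm, 15 ms processing, 60 Hz refresh) are illustrative assumptions for the sake of the example, not measurements of any particular DSO:

```python
# Rough latency budget for one acquisition-to-display cycle.
# All default figures are illustrative assumptions, not measured values.

def dso_latency_ms(record_len_pts=1_000_000,  # waveform record length (points)
                   sample_rate_sps=1e9,       # assumed 1 GSa/s
                   rearm_ms=1.0,              # item 2: trigger rearm time
                   processing_ms=15.0,        # item 3: transfer + render
                   ui_ms=0.0,                 # item 4: extra UI latency, if any
                   refresh_hz=60.0):          # item 5: display refresh rate
    fill_ms = record_len_pts / sample_rate_sps * 1e3  # item 1: record fill time
    refresh_ms = 1e3 / refresh_hz / 2                 # item 5: ~half a frame on average
    return fill_ms + rearm_ms + processing_ms + ui_ms + refresh_ms

# Short record: fill time is negligible, ~24 ms total.
print(dso_latency_ms(record_len_pts=10_000))
# Deep record at the same sample rate: the 100 ms fill time dominates.
print(dso_latency_ms(record_len_pts=100_000_000))
```

With a short record the budget is dominated by processing and display refresh, while a deep record at the same sample rate adds its fill time directly on top, which is why shortening the record length is the one knob the user can turn to get a more responsive display.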