So if an example 100MHz scope screen has 1000 pixels of horizontal resolution, and the time base is set to 5ns/div, it will display 50ns of data spread over 1000 pixels, or 50ns/1000 = 50 picoseconds per pixel. Since one pixel is the minimum shift between two signals that can actually be displayed on the screen, the trigger has to reliably resolve a little better than 50 picoseconds.
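As a quick sanity check of that arithmetic (a sketch assuming the usual 10 horizontal divisions):

```python
# Per-pixel time resolution for the example above.
DIVISIONS = 10       # assumed 10 horizontal divisions on screen
PIXELS = 1000        # horizontal display resolution
timebase = 5e-9      # 5 ns/div

span = DIVISIONS * timebase     # 50 ns across the whole screen
per_pixel = span / PIXELS       # time represented by one pixel
print(f"{per_pixel * 1e12:.0f} ps per pixel")   # -> 50 ps per pixel
```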
Oddly enough, that is often exactly how it works, at least for oscilloscopes where the processing record and the display record are identical, but see below about time delay counter resolution. Sometimes the specifications do not give the timing resolution, but it can be worked out from the number of horizontal pixels per division. If the automatic measurements use histograms, then the actual measurement resolution is even higher.
Wouldn't the digital channels have to resolve phase exactly the same way for the MSO to function properly?
If they did not, then there would be considerable jitter between the MSO inputs and the oscilloscope trigger, and that is in fact usually the case: the MSO timing resolution is typically limited to the logic input sample rate. In the past, implementing a time delay counter on every logic input would have been prohibitively expensive except for specialized applications, though maybe someone manages it now with increased digital integration. Most applications still do not need it, though.
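A rough way to see the size of that limit: if trigger placement on the logic inputs is quantized to the logic sample clock, the worst-case jitter is one full sample period. A minimal sketch, using the 200MSa/s rate of the 54622D discussed below:

```python
# Worst-case trigger jitter when placement is quantized to the
# logic sample clock: one full sample period.
logic_sample_rate = 200e6            # 200 MSa/s logic inputs
jitter = 1 / logic_sample_rate       # one sample period
print(f"{jitter * 1e9:.1f} ns peak-to-peak")   # -> 5.0 ns
```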
That is why I asked about triggering off of the MSO inputs to display a vertical input; they might have made special provisions to avoid the jitter I described earlier. But working in the opposite direction, triggering off of the oscilloscope to display the MSO inputs, should always reveal the relatively coarse sampling of the MSO inputs.
Going back to the discussion of cheap scopes: if triggering is based purely off the digitized data stream, could you get reasonably close to the same performance (given a 100MHz scope) with 1GSa/s by having fewer pixels of horizontal resolution across the screen? [Edit: in the sense that the display is the limitation, not the trigger, so the package as a whole "works as it should". The performance overall is of course still lower than having a 50ps resolution trigger on a higher resolution screen.]
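To put rough numbers on what I mean (again assuming 10 divisions):

```python
# At 1 GSa/s, one sample lands every 1 ns. On a 5 ns/div timebase
# across 10 divisions (50 ns), only 50 raw samples span the screen,
# so a display no wider than 50 pixels would make the sample grid,
# not the trigger, the limiting factor.
fs = 1e9             # 1 GSa/s
span = 10 * 5e-9     # 50 ns across the screen
print(f"{span * fs:.0f} samples across the screen")   # -> 50
```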
You could, but there is no need to, and it would look pretty bad. If the sample rate is high enough, which it has to be for digital triggering to work at all, then after the trigger occurs and the acquisition is captured, a second stage of triggering can be done on the interpolated waveform to align the acquisition on the display. Or it can be done during triggering with some extra logic.
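A minimal sketch of that second triggering stage, locating the crossing to sub-sample precision by linear interpolation between the two samples that straddle the trigger level (the function and data here are made up for illustration):

```python
import numpy as np

def subsample_trigger(samples, level, fs):
    """Locate the first rising crossing of `level` to sub-sample
    precision by interpolating between the straddling samples.
    Returns the crossing time in seconds from the record start."""
    s = np.asarray(samples, dtype=float)
    # Index of the first pair of samples that straddles the level.
    idx = np.nonzero((s[:-1] < level) & (s[1:] >= level))[0][0]
    # Fractional position of the crossing between idx and idx + 1.
    frac = (level - s[idx]) / (s[idx + 1] - s[idx])
    return (idx + frac) / fs

# The fractional part of the crossing is what the scope would use to
# shift the interpolated trace so the edge lands on the trigger point.
t = subsample_trigger([0.0, 0.2, 0.4, 0.7, 1.0], level=0.5, fs=1e9)
print(f"crossing at {t * 1e9:.3f} ns")   # -> 2.333 ns
```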
On older DSOs which do not support digital triggering, the time delay counter usually provides all of the alignment information. The actual resolution of the time delay counter is much higher than the resulting horizontal resolution, which allows for automatic calibration at each timebase setting as needed. The time delay measurement circuit can be pretty crude yet deliver accurate results after calibration for zero, span, and linearization, which is surprisingly easy.
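A toy illustration of that calibration (the raw counts are invented): fit zero and span with a straight line against known delays, then keep the residuals as a small linearization table.

```python
import numpy as np

# Invented raw readings from a crude time-delay counter vs. known delays.
known_delay = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])          # ns
raw_counts  = np.array([17.0, 56.0, 97.0, 135.0, 176.0, 215.0])  # counts

# Zero and span: one linear least-squares fit...
gain, offset = np.polyfit(raw_counts, known_delay, 1)
# ...then a residual table for the nonlinearity the line misses.
residual = known_delay - (gain * raw_counts + offset)

def corrected_delay(raw):
    # Linear correction plus interpolated linearization residual.
    return gain * raw + offset + np.interp(raw, raw_counts, residual)

print(f"{corrected_delay(97.0):.2f} ns")   # -> 4.00 ns at a cal point
```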
I did some experiments on the 54622D, comparing the analog trigger with triggering from a Digital channel.
As you predicted, the jitter on the digital signal is much worse than on the analog one: 5ns, which corresponds exactly to the 200MSa/s sample rate.
However... the trigger doesn't seem to be to blame! As an experiment, I switched triggering between the digital channel and the analog channel, and the jitter looks exactly the same.
In a further experiment, I looked at the delay between the leading edge of the test signal and the trigger output (the external connector on the back of the scope) with an HP 5335A counter. It seems that when the trigger is set to the same voltage on both the digital and analog channels, the trigger performs identically: there is no difference in the jitter at the trigger output, whether the analog or digital channel is the source.
It seems that unlike what happens with the analog channels, no attempt is made to re-align the displayed signal with the actual trigger point; instead, it is always aligned to the sample clock. So the trigger itself appears to always be a "precise" trigger... it's just that the scope doesn't make use of the information, for whatever reason. There are probably good reasons for that (e.g., can you trust all the pulses in a train to need the same adjustment as the one you happened to trigger on?).
[Edit] After I switched off one of the analog channels, the scope changed to an interleaved 400MSa/s mode on the single digital channel, cutting the jitter in half to 2.5ns. All other behavior was the same.
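Which matches the simple one-sample-period model of the jitter:

```python
# Doubling the logic sample rate halves the sample-clock jitter.
for fs in (200e6, 400e6):
    print(f"{fs / 1e6:.0f} MSa/s -> {1e9 / fs:.2f} ns jitter")
# -> 200 MSa/s -> 5.00 ns jitter
# -> 400 MSa/s -> 2.50 ns jitter
```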