From what the video shows, no fault is needed to explain what is observed. The scope is working fine; it is an alias, plain and simple.
Hit the scope with 100 MHz, and sample at what? The timebase setting is 2 nanoseconds per division, which (menus on) gives a total time across the screen, left to right, of 20 nanoseconds. With a single channel running, the digitiser samples at 1 gigasample per second, so you have only 20 actual valid samples to show two complete sine waves - marginal. Switch to two channels and there are only 10 samples, because the digitiser is shared between them. To reconstruct even a rough representation of a sine wave you need at least 10 samples per cycle. Alias. Nothing special, no errors, just undersampling. The loss-of-amplitude demonstration is no fault either.

Sin(x)/x interpolation is the worst, most stupid concept ever invented, and it has caused huge amounts of grief. I sold high-end DSOs for a few years, and saw heaps of side-by-side shoot-outs and quizzical customers. Sin(x)/x interpolation tells lies (I'm not exaggerating) by drawing nice sine waves through miserably undersampled waveforms - they look pretty, but bear little resemblance to what might actually be there. Don't ever use sin(x)/x, is my recommendation; it just tricks the brain into thinking "It looks good, maybe it's true." It is downright dangerous. LeCroy knew this and never had sin(x)/x interpolation on any of their earlier high-powered scopes - but I have not looked at their range for 5 years, so I don't know what they do now. Why is it a good idea to have only linear interpolation? Because if you undersample, the waveform looks like the Italian Alps, and that shape is so distinctive you know immediately not to trust the results.

Many people seem to get it into their heads that a 1 nanosecond sampling interval is slow - it's blindingly fast, and we can't go a lot further without some serious money or new physics. A light beam travels only 0.3 metres in that time, for heaven's sake! For the price, Rigol's 1 nanosecond is quite amazing.
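Back to the "tells lies" claim for a moment - here is a minimal Python/NumPy sketch that makes it concrete. The numbers are my own, not anything from the video: a hypothetical 600 MHz input into a 1 GSa/s digitiser, which aliases to 400 MHz. Sin(x)/x reconstruction paints a beautiful sine wave at the wrong frequency; linear interpolation leaves the jagged shape that warns you off:

```
import numpy as np

FS = 1e9            # 1 GSa/s, the single-channel rate discussed above
F_IN = 600e6        # hypothetical input, above Nyquist (FS/2 = 500 MHz)
N = 20              # roughly the 20 samples across a 20 ns screen

t_n = np.arange(N) / FS
samples = np.sin(2 * np.pi * F_IN * t_n)

def sinc_interp(x, fs, t):
    """Whittaker-Shannon sin(x)/x reconstruction of samples x at times t."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.sinc(fs * ti - n)) for ti in t])

t_fine = np.linspace(0, (N - 1) / FS, 1000)
pretty = sinc_interp(samples, FS, t_fine)   # smooth, convincing... and wrong
jagged = np.interp(t_fine, t_n, samples)    # linear: obvious 'Italian Alps'

# The smooth trace follows the 400 MHz alias, not the 600 MHz input:
alias = -np.sin(2 * np.pi * (FS - F_IN) * t_fine)
print(np.corrcoef(pretty, alias)[0, 1])     # close to 1: sinc paints the alias
```

Plot pretty and jagged against t_fine and the point makes itself: the sinc trace is a textbook sine wave at a frequency that was never fed in, while the linear trace is ugly enough to tell you not to trust it.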
There is a real problem in all of these small DSOs, though, and it is every bit as serious, maybe worse: display aliasing. Assume 2 channels at, say, 1 millisecond/division; a single-shot sweep captures 240k points at 20 megasamples per second - an impressive amount of data. Say there is a glitch in there, with just one data point 3 times higher than a long series of TTL pulses. Will you see it? It's highly unlikely. The acquisition memory fills to 240k, but only about 112,500 of those points are displayed - or are they? The waveform display area, with menus off, is only about 280 pixels wide. How are 112,500 data points reduced to fit the 280 horizontal pixels of the display? There is your problem: the data is decimated, chopped out and reduced until only about 0.25% of the total data points are shown. How on earth can you expect to see a glitch or spike with that? Your chances are very, very slim.

That is a far more important issue than the screen update rate. Who cares if it updates thousands of times per second? I can't see 24 frames per second with any reliability, and LCDs are slower to respond than that. This makes the DSO almost totally blind to spikes, and all that amazing acquisition memory is effectively rendered useless.
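A toy model of that decimation in Python, using the figures above (the stride and the glitch position are my own guesses at a naive display pipeline, not Rigol's actual algorithm):

```
import numpy as np

N_ACQ = 112_500                  # data points mapped to the screen
N_PIX = 280                      # visible waveform columns
stride = N_ACQ // N_PIX          # keep 1 sample in ~400

# A 0-5 V TTL pulse train with one sample spiked to 15 V.
signal = np.tile([0.0] * 10 + [5.0] * 10, N_ACQ // 20 + 1)[:N_ACQ]
signal[54_321] = 15.0            # hypothetical glitch position

displayed = signal[::stride][:N_PIX]   # naive stride decimation
print(displayed.max())           # 5.0 - the 15 V spike never reaches the screen
```

The spike shows up only if its index happens to land on a multiple of the stride: a 1-in-401 chance, which is the roughly 0.25% figure above.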
So, what's that you're saying - switch on that annoyingly slow Peak Detect mode? Why would I? I didn't see anything wrong. Use zoom and wind the knob through 112 thousand data points? That is tedious as hell, and I saw nothing that would make me the slightest bit suspicious it was necessary. Unless someone can explain how the DSO gets around this problem, I think we have a deal-breaker for cheap DSOs.
Granted, if my primary work were unravelling the nightmare of USB or serial EEPROM communications, I could pretty much ignore glitch display, as I'd be using zoom all the time to grind my way along the serial data stream and work out what was happening. By comparison, an analogue scope paints the screen for 50% or more of the time (the invisible retrace is quick). Between the slow response of the human eye and the phosphor, I'd see a flicker that would make me suspicious.
I have an older LeCroy 9310M. Set it to its maximum 50k acquisition length, display a 50k-point waveform with one single point at 15 volts over the top of a 0-5 V pulse stream, and you will see that point on the screen without fail, without having to invoke any capture modes or take any other action. If it's there, it shows it. That is what all DSOs need to do to be as useful as an analogue scope. I dearly hope someone can show that the Rigol 1052E will display this - if it will, I'd buy one. Getting parts for the LeCroy is a nightmare and I know I won't be able to keep it going forever. But I don't want only a 1 in 400 (0.25%) chance of seeing something that turns up, and sure as eggs, it's the thing you don't see that will bite you on the rear...
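For contrast, here is one way a display pipeline can guarantee that behaviour: per-column min/max ("peak") decimation, sketched in Python with the figures above. Whether the 9310M literally does this internally I can't say, but the effect matches what I see on its screen:

```
import numpy as np

N_ACQ = 50_000                   # 9310M-style maximum record length
N_PIX = 280                      # pixel columns, reusing the figure above

signal = np.tile([0.0] * 10 + [5.0] * 10, N_ACQ // 20)[:N_ACQ]
signal[12_345] = 15.0            # hypothetical one-sample glitch

# Every sample lands in exactly one column bucket, and each bucket
# reports its extremes, so no sample can ever be decimated away.
buckets = np.array_split(signal, N_PIX)
col_max = np.array([b.max() for b in buckets])
col_min = np.array([b.min() for b in buckets])
print(col_max.max())             # 15.0 - the spike is on screen, guaranteed
```

Drawing a vertical bar from col_min to col_max in each column costs next to nothing computationally, which is why I struggle to see any excuse for leaving it out.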
Cheers, Colin