Jitter aside, the rise times, etc., will be identical in hires mode so I'm not sure where that number came from.
"High Resolution"
The average of n captured sample points is recorded as one waveform
sample. Averaging reduces the noise, the result is a more precise
waveform with higher vertical resolution.
"Average" The average is calculated from the data of the current acquisition and
a number of consecutive acquisitions before. The method reduces
random noise. It requires a stable, triggered and repetitive signal.
The number of acquisitions for average calculation is defined with
"No. of Averages"
That is from the LeCroy manual, but the description is accurate.
"... description is accurate." ?
Those descriptions are incomplete and quite simplified, and claiming that "averaging" gives a more precise waveform is IMHO a bit misleading; at the very least it should explain the cases where the result will not be more precise. As an extreme example, a sufficiently long average over a suitably stable periodic signal gives a flat line. Yes, that line is more precise in showing the average value, but it is completely incorrect in showing the input waveform, which is what oscilloscopes are mainly meant for.
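The flat-line effect is easy to demonstrate numerically. This is a toy sketch (my own numbers, nothing to do with any particular scope's implementation): averaging a stable sine over a window spanning many full periods collapses the waveform toward its mean.

```python
import math

fs = 1000   # sample rate, Hz (arbitrary)
f = 50      # signal frequency, Hz (arbitrary)
n = 1000    # averaging window: 1000 samples = 50 full periods

# one "acquisition" of the periodic signal
samples = [math.sin(2 * math.pi * f * t / fs) for t in range(n)]

# averaging over a window much longer than the period
avg = sum(samples) / n

print(f"peak of raw waveform: {max(samples):.3f}")  # ~1.0
print(f"long-window average:  {avg:.3f}")           # ~0.0 -- a flat line
```

The average is "precise" (it nails the DC value), yet the displayed waveform would be a flat line that tells nothing about the actual signal shape.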
Those descriptions give a good start for some people, but one cannot really get very far with only that info. With (quite) a bit of math knowledge, one can derive the missing effects by oneself, but descriptions at the level shown above are clearly aimed at people who would not be able to do those derivations.
(Even beginners in this category of math do not need to be told what averaging does to noise, specifically random noise, yet that is the one thing the descriptions do explain.)
An example of missing info: what does averaging sequential samples (the "high resolution" mode above) do to the bandwidth or the step response?
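To make the question concrete, here is a sketch that models the "high resolution" mode as an idealized n-point boxcar (moving) average; this is an assumption for illustration, not a claim about how any vendor actually implements it. Its magnitude response is |sin(pi n f/fs) / (n sin(pi f/fs))|, with the first null at fs/n, and its step response takes n samples to settle, so the mode trades bandwidth and edge speed for resolution.

```python
import math

def boxcar_mag(f, n, fs):
    # magnitude response of an n-point moving average at frequency f
    x = math.pi * f / fs
    if x == 0:
        return 1.0
    return abs(math.sin(n * x) / (n * math.sin(x)))

n, fs = 16, 1_000_000  # 16-sample average at 1 MS/s (arbitrary numbers)

# step response: unit step through the running average
step = [0] * n + [1] * (3 * n)
out = [sum(step[max(0, i - n + 1):i + 1]) / n for i in range(len(step))]

# samples after the edge (at index n) until the output fully settles
settle = next(i for i, y in enumerate(out) if y >= 1.0) - n
print(f"samples to settle: {settle + 1}")                       # = n
print(f"gain at fs/(2n):   {boxcar_mag(fs/(2*n), n, fs):.3f}")  # well below 1
```

Under this model, a 16-sample "hires" average smears every edge over 16 sample intervals and attenuates everything approaching fs/16 — exactly the kind of side effect the manual text leaves out.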
An example of a potential difference between theory and the documented feature: averaging multiple (well-triggered) waveforms/acquisitions (the "average" mode above) would theoretically allow both reducing noise and gaining resolution (with some limitations), not only reducing noise. So the result depends on the implementation.
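A toy model of that point (my own sketch, not any vendor's algorithm): a DC level of 0.3 LSB is digitized by a 1-LSB quantizer with some input noise acting as dither. A single acquisition can only return an integer code, but averaging many triggered acquisitions both suppresses the noise and recovers the sub-LSB level, i.e. it gives higher effective resolution, not just less noise.

```python
import random

random.seed(1)
true_level = 0.3   # input level in LSB units (deliberately between codes)
m = 10_000         # number of averaged acquisitions

def acquire():
    # quantize (round to nearest LSB) one noisy measurement;
    # gaussian noise of 0.5 LSB rms plays the role of natural dither
    return round(true_level + random.gauss(0, 0.5))

single = acquire()
mean = sum(acquire() for _ in range(m)) / m
print(f"single acquisition: {single}")    # an integer LSB code
print(f"m-waveform average: {mean:.3f}")  # close to 0.3
```

Whether a real scope's "average" mode actually keeps those fractional bits (or truncates back to the ADC's resolution) is an implementation choice, which is the point above.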
So, yeah, I would not call them "accurate" descriptions.
On the terminology:
As is obvious from the above, both methods use averaging (they just choose the samples differently), both can/could theoretically give higher resolution (and yet do not have to), and both can give lower noise. So which term should be used for which? LeCroy has chosen that particular mapping, and it is good that they explained it in the manual, even if that crudely. Other people might have ended up using the terms in a different way.
IMHO, better terms would name the method, e.g. "moving average" and "multi-waveform average" or something like that, because those are unambiguous (or at least less ambiguous). Even then, the chosen effects and trade-offs remain unknown unless a manual explains them (especially whether the modes also give higher resolution and not just noise reduction). But then again, less mathematically aware users would not know what basic effects those methods have, unlike "high resolution", which directly tells something to everyone. Without other explanation, a "high resolution" effect could be achieved by multiple methods, each with a different set of side effects.

LeCroy (and I think many other scopes) decided to mix and match: one feature is termed by the method, the other by its apparently most important effect. It gets even weirder when one mode is "hires" and another "eres"... which one does exactly what, and how? I wonder how many people know the answer without peeking into the manual (or the scope's help popups).