You cannot correctly digitize anything that is above 1/2 of the sampling rate (f_sampling/2).
If I understand you correctly, you are talking about the analog-to-digital conversion stage. Then I agree with you: a real, continuous analog signal will almost always contain components above half the sampling rate, hence the real signal will never be captured accurately at this stage, and any filtering or interpolation afterward is just garbage in, garbage out (GIGO), not worth discussing.
But let us just assume the digitized signal is a "correct" representation of the real signal, i.e. when the ADC reads, it reads the "exact analog value" (how to get that, we can leave to the ADC designer), and that the real signal does not contain high-frequency components at a magnitude large enough to invalidate the digitization. With these two rules we can narrow the scope of the discussion to "interpolation in the digital stage". At this stage, removing the higher-frequency components is easy: zero the unwanted bins of the FFT output (a "soft" brick wall), then take the inverse FFT to get a cleaned-up (filtered) signal. The intention is to get a more accurate interpolation in the next stage (hopefully), because if we leave the unwanted frequencies in, they will pull the interpolation farther away from the expected real signal.
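Just to make that step concrete, here is a minimal sketch of that zeroed-FFT brick wall in Python/numpy; the function name and the toy signal are mine, purely for illustration, not how any particular scope actually does it:

```python
import numpy as np

def fft_brickwall_lowpass(x, fs, f_cutoff):
    """Zero all FFT bins above f_cutoff, then inverse-FFT.

    This is the "soft brick wall" described above: every component
    above the cutoff is removed exactly in the frequency domain.
    x        : real-valued sampled record
    fs       : sample rate in Hz
    f_cutoff : highest frequency to keep, in Hz
    """
    X = np.fft.rfft(x)                          # one-sided spectrum of a real signal
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs) # bin centre frequencies
    X[freqs > f_cutoff] = 0.0                   # the "brick wall": kill everything above the cutoff
    return np.fft.irfft(X, n=len(x))            # back to the time domain

# toy usage: 1 kHz tone plus an unwanted 40 kHz component, sampled at 100 kS/s
fs = 100e3
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 1e3 * t) + 0.3 * np.sin(2 * np.pi * 40e3 * t)
clean = fft_brickwall_lowpass(x, fs, f_cutoff=5e3)   # the 40 kHz component is gone
```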
The front-end BW attenuation response can also be handled/corrected by another method before the interpolation part: if we have accurate BW-response data for the scope, we can "de-attenuate" the digitized signal in the FFT. Let's assume that is taken care of, so our signal is "flat" now.
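And a sketch of that "de-attenuate" step, under the assumption that we actually have the scope's magnitude response; the one-pole roll-off below is just a stand-in for real response data:

```python
import numpy as np

def de_attenuate(x, fs, bw_response):
    """Divide the front-end roll-off back out in the frequency domain.

    bw_response(f) must return the front-end magnitude response at
    frequency f, normalised to 1.0 at DC.  That response curve is the
    assumption here; the correction itself is just per-bin division.
    """
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    H = bw_response(freqs)
    H = np.where(H > 0.1, H, 1.0)   # don't boost bins where the response is tiny (noise blow-up)
    return np.fft.irfft(X / H, n=len(x))

# stand-in response: a single-pole roll-off with a 20 MHz -3 dB point
f3db = 20e6
one_pole = lambda f: 1.0 / np.sqrt(1.0 + (f / f3db) ** 2)

fs = 200e6                                            # 200 MS/s record
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 15e6 * t) * one_pole(15e6)     # 15 MHz tone, already attenuated by the front end
flat = de_attenuate(x, fs, one_pole)                  # amplitude restored to roughly 1.0
```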
Now let us narrow the discussion further to make it more manageable: our input (digital) data "does not contain higher components", it is "clean" and "flat", except... it is "sampled at a limited rate". Now the "interpolation" comes in to solve that. For an "optimum" interpolation method, the captured (digitized) data "must" be considered to lie exactly on the interpolated curve, or at least be "weighted heavily" during the interpolation calculation; otherwise it is just a "toy interpolation". This is what confuses me: if Sin(x)/x interpolation does not retain the original data, then why is it chosen as the "optimum interpolation"? Either Mr Shannon screwed up during his research (which I doubt, since Mr Shannon was so good we all chose to believe him), or Agilent did not implement the algorithm correctly. I still have no access to a "practical implementation of Sin(x)/x interpolation" to say who actually got it wrong, Mr Shannon or the rest.
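For what it's worth, here is a toy sketch of the ideal Whittaker-Shannon (sin(x)/x) formula as it appears in the textbooks (the names are mine and have nothing to do with Agilent's firmware). In the ideal formula, sinc(0) = 1 and sinc(k) = 0 at every other sample instant, so the reconstruction does pass exactly through the captured samples; whether a scope's truncated/windowed implementation still behaves that way is a separate question:

```python
import numpy as np

def sinc_interpolate(samples, fs, t_out):
    """Ideal Whittaker-Shannon (sin(x)/x) interpolation.

    Each output point is a sum of all captured samples weighted by
    sinc((t - n*Ts)/Ts).  Because sinc(0) = 1 and sinc(k) = 0 for every
    other integer k, the reconstruction passes exactly through the
    original samples; only the points in between are "invented".
    """
    Ts = 1.0 / fs
    n = np.arange(len(samples))
    # np.sinc(x) is the normalised sinc: sin(pi*x) / (pi*x)
    return np.array([np.sum(samples * np.sinc((t - n * Ts) / Ts)) for t in t_out])

# check: evaluate the interpolation at the original sample instants
fs = 1e6
n = np.arange(32)
samples = np.sin(2 * np.pi * 100e3 * n / fs)
reconstructed = sinc_interpolate(samples, fs, n / fs)
print(np.max(np.abs(reconstructed - samples)))   # ~0, i.e. the original data is retained
```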
algebraically exactly
English, please! I'm having trouble understanding this term.
but if you use the sampling mode, you can accurately see waveforms that are much higher in frequency than the sampling rate, as long as the analog bandwidth is sufficient
only for "similarly repetetive signals" during each trigger/capture. any "jittering" at each signal capture will render a spurious display of a signal by using sampling mode. am i correct?