Author Topic: Extract precise amplitude and phase from a frequency sweep (VNA from DSO+AWG)  (Read 7596 times)


Online RoGeorge (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6701
  • Country: ro
VNA = Vector Network Analyser
DSO = Digital Storage Oscilloscope
AWG = Arbitrary Waveform Generator
ADC = Analog to Digital Converter
PC = Personal Computer


The plan is to measure the response of an unknown circuit over a range of frequencies.
- the input/reference signal is a sinusoidal frequency sweep, with constant amplitude and with all its parameters well known
- the unknown circuit to measure can introduce phase shifts, changes in amplitude, and can produce harmonics
- reference signal is considered ideal, measured signal can become very noisy

In practice
- there is an AWG that generates a frequency sweep from Fmin to Fmax in time T
- the ADC of a DSO is triggered at the beginning of T, and it samples the frequency sweep at a constant sampling rate, S, over the entire period, T
- the reference signal is considered noiseless, but the measured signal along a sweep can become very small and noisy
- after the frequency sweep is completed, the ADC samples are transferred to a PC for post processing

All the parameters (Fmin, Fmax, T, S) are well known, as is the type of variation between Fmin and Fmax (usually logarithmic).  The reference signal can either be sampled together with the unknown response (meh), or reconstructed in software from the known sweep and its Fmin, Fmax and T (preferable).

Quantitatively, about 10-20 million 8-bit ADC samples can be taken during a frequency sweep, and the data transfer alone (from the DSO to the PC) takes a few minutes, so the data post-processing on the PC doesn't have to be real time, but precision and resolution are very important.


How should the ADC samples be post-processed to extract the spectrum, the amplitudes and the phases at any moment along the given frequency sweep, with the best precision and resolution?
« Last Edit: November 04, 2022, 10:42:14 am by RoGeorge »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6836
  • Country: fi
    • My home page and email address
How should the ADC samples be post-processed to extract the spectrum, the amplitudes and the phases at any moment along the given frequency sweep, with the best precision and resolution?
As you know, the root is in Fourier analysis, which converts a signal from the time domain to the frequency domain.  Here, you will be using a discrete Fourier transform on the ADC samples to obtain the discrete spectrum.

The problem is that you really need to work in a mixed time-frequency domain, with a time-varying signal.  We achieve this by applying a window function to the samples being considered before doing the DFT.  The size of the window (number of samples) sets the minimum frequency (since it cannot capture information on waveforms longer than the window itself), and the shape of the window determines the spectral leakage.  In essence, we use a window to split time into "frames", and grab the spectrum from each "frame".  Note that the windows are usually not sequential (except for rectangular windowing) but overlapping, most commonly by one half.

There are a lot of window functions to choose from, with rectangular, Hann (NOT "Hanning"), and Hamming windows probably the most common ones.  Note that with 8-bit input samples, you only have about 48 dB of dynamic range anyway (or rather, you will have quantization noise at all frequencies at around the -48 dB level).  You can also reconstruct the original signal from the windowed spectra by inverse Fourier transform, then windowing those with the same windowing function (the same way), and adding the windowed signals together.  This is exactly how MP3 audio works, after all.

If you implement your own program, then I definitely recommend using FFTW, included in most numerical computing packages.  It is ridiculously fast, especially if you let it "gather wisdom" on the FFT window sizes you use, i.e. tell it to find and then save the fastest way to compute a real-valued DFT of a specific number of samples: this can make an order of magnitude of difference.  You take the input samples, convert them to floating point while multiplying each sample by the corresponding window-function coefficient, run the FFT on it, and receive half as many complex coefficients as there were input samples.  Each coefficient corresponds to a frequency, with the magnitude (|z| = sqrt(r²+i²) = cabs(z)) describing the intensity of that frequency component, and the phase angle (arg z = atan2(i, r) = carg(z)) the phase of that frequency.
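To make the workflow concrete, here is a minimal pure-Python sketch of the windowed-DFT step described above (a plain DFT at a single bin stands in for FFTW; the sample rate, tone frequency, amplitude and phase are made-up test values, and the tone is placed on an exact bin so leakage is negligible):

```python
import math, cmath

N = 1024                      # window length (samples)
fs = 48000.0                  # assumed sample rate, Hz
f0 = 3000.0                   # test tone; chosen so f0*N/fs = 64 is an exact bin
amp, phase = 0.7, 0.3         # the known values we try to recover

# Hann-windowed test signal
x = [amp * math.sin(2*math.pi*f0*n/fs + phase) for n in range(N)]
w = [0.5 - 0.5*math.cos(2*math.pi*n/(N - 1)) for n in range(N)]   # Hann window
xw = [xi*wi for xi, wi in zip(x, w)]

# DFT at the single bin k = f0*N/fs (the value FFTW would give at that index)
k = round(f0*N/fs)
z = sum(xw[n] * cmath.exp(-2j*math.pi*k*n/N) for n in range(N))

# Scaling by 2/sum(w) turns the bin value into the sine's peak amplitude;
# for this sign convention the sine's phase is arg(z) + pi/2
est_amp = 2*abs(z)/sum(w)
est_phase = cmath.phase(z) + math.pi/2
```

The same magnitude/phase extraction works unchanged on FFTW's output, bin by bin.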

Because of spectral leakage, you will need to estimate the peak frequency (and optionally the peaks of the harmonics): they won't be nice one-bin-wide peaks.  Depending on the width of the peak, there are various methods, but usually a parabola fitted down to half the height of the peak works well.  For the exact peak phase and amplitude, you can use linear interpolation between the bins immediately above and below, or e.g. cubic interpolation; although note that cubic interpolation can overshoot, whereas linear interpolation usually undershoots a bit.



At this point, if I were you, I'd probably be a bit sceptical: so much math... how can one be sure it all works correctly?
The answer is, one can trivially check and verify.  Generate test signals (as both ADC values as well as "perfect" floating-point samples), and examine if the results correspond to the expected values.  Add white noise (using an excellent PRNG, say the high bits of Xorshift64*), and see how that shows up.  Compare the "digitized" (converted to N-bit ADC values) and the "perfect" (original floating-point) signal, to see how it shows up.
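As a sketch of such a test harness (the Xorshift64* step below is the standard recurrence; the ADC range, signal amplitude and noise level are made-up test values):

```python
import math

MASK = (1 << 64) - 1

def xorshift64star(state):
    """One step of Xorshift64*; returns (new_state, 64-bit output)."""
    state ^= state >> 12
    state ^= (state << 25) & MASK
    state ^= state >> 27
    return state, (state * 0x2545F4914F6CDD1D) & MASK

def noisy_adc(n, amp=100.0, noise=3.0, seed=0x9E3779B97F4A7C15):
    """'Perfect' float sine plus white noise, and its 8-bit 'digitized' twin."""
    state = seed
    perfect, digitized = [], []
    for i in range(n):
        state, r = xorshift64star(state)
        u = (r >> 40) / float(1 << 24)            # high 24 bits -> uniform [0,1)
        x = amp * math.sin(2*math.pi*i/64) + noise * (2*u - 1)
        perfect.append(x)
        digitized.append(max(-128, min(127, round(x))))  # clip to signed 8-bit
    return perfect, digitized

perfect, digitized = noisy_adc(1024)
```

Running both lists through the same analysis and comparing the results shows exactly how much the 8-bit quantization costs.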

Just do not forget how sinusoidal waveforms work: say, have the input be "two" sinusoidal signals of the same frequency but at different phases, and then wonder why you get such odd results.  For example, \$\sin(x+a) + \sin(x+b) = (\cos(a) + \cos(b)) \sin(x) + (\sin(a) + \sin(b)) \cos(x)\$, meaning that you actually get just a single sine wave whose phase and amplitude depend on the phases (\$a\$ and \$b\$) of the original signals.
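The identity is easy to check numerically, including the fact that the sum collapses to a single sinusoid (the phases here are arbitrary test values):

```python
import math

a, b = 0.4, 1.1      # arbitrary phases of the two same-frequency sines

# amplitude and phase of the equivalent single sinusoid
R = math.hypot(math.cos(a) + math.cos(b), math.sin(a) + math.sin(b))
phi = math.atan2(math.sin(a) + math.sin(b), math.cos(a) + math.cos(b))

for x in (0.0, 0.7, 2.0, 5.3):
    lhs = math.sin(x + a) + math.sin(x + b)
    rhs = (math.cos(a) + math.cos(b))*math.sin(x) + (math.sin(a) + math.sin(b))*math.cos(x)
    assert abs(lhs - rhs) < 1e-12                   # the identity itself
    assert abs(lhs - R*math.sin(x + phi)) < 1e-12   # ...which is a single sine
```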

In other words, there is no need for trust here, because one can always verify all of this as carefully as one wants.
 
The following users thanked this post: RoGeorge

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
Octave?
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online RoGeorge (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6701
  • Country: ro
I'm familiar with FT and windowing, but don't have much hands-on experience.  I've read at some point about the Wavelet Transform (WT), which is much like the FT except that it retains the time evolution of the spectrum.  If I got it right (after reading mostly the words rather than the math), WT might give better results than FT+windowing.  (posted the 101 WT some time ago here: https://www.eevblog.com/forum/chat/fun-for-nerds/msg3554257/#msg3554257 )

Either way, neither the FT nor the WT can take advantage of the extra knowledge we have, namely that the reference signal is known and its evolution is synchronous with the unknown measured signal.  It would be a pity to throw away this advantage.  As an example, an interferometer is so sensitive precisely because it takes advantage of the reference signal.  So does a synchronous rectifier or a lock-in amplifier: they have a clean, synchronous reference to compare against.  The lock-in amplifier even does the quadrature trick, where synchronous sin and cos signals are generated internally (in lock with the reference signal), though those usually work at a constant frequency.

Thinking as we speak: since at any moment the reference frequency/amplitude/phase are known, it should be possible to generate a 90-degree shifted signal with the same sweep (a sin and a cos).  I think having the sin and cos of the reference sweep would be an advantage, because sin² + cos² = 1 at any moment.  This would mean the amplitude can be known from a single sample point (1 sample from the cos and 1 sample from the sin), without waiting for an entire period to detect the peak of the signal.

Knowing the amplitude at the moment of each sample would be a great advantage: it can be averaged later to get rid of the noise.  Synthesizing in software a sin and a cos in sync with the reference sweep should be possible; the problem is, can a similar quadrature signal be reconstructed for the measured signal, too, by looking at the entire captured data?

It feels like it should work, but I might be completely off, have to draw the idea on paper before it vanishes.
« Last Edit: November 04, 2022, 09:26:37 pm by RoGeorge »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6836
  • Country: fi
    • My home page and email address
I've read at some point about the Wavelet Transform (WT), which is much like the FT except that it retains the time evolution of the spectrum.
One way to intuitively think about it is that each frequency bin has its own window in most wavelet transforms.

As to wavelet transform compared to windowed Fourier transform, on a sinusoidal input signal, it's actually hard to say.  For example, the Wikipedia article on wavelet transform mentions "Wavelets have some slight benefits over Fourier transforms in reducing computations when examining specific frequencies. However, they are rarely more sensitive, and indeed, the common Morlet wavelet is mathematically identical to a short-time Fourier transform using a Gaussian window function. The exception is when searching for signals of a known, non-sinusoidal shape (e.g., heartbeats); in that case, using matched wavelets can outperform standard STFT/Morlet analyses." with a couple of references.

Either way, neither the FT nor the WT can take advantage of the extra knowledge we have, namely that the reference signal is known and its evolution is synchronous with the unknown measured signal.
But do we know what the measured signal is?

If we were just measuring the (frequency-dependent) signal delay, we could use (windowed) cross-correlation of the two signals, as a function of the time delay \$\tau\$.  The peak in this function corresponds to the time difference when the two signals were most alike, for each window, and the value at the peak corresponds to the amplification/attenuation of the signal.  However, correlation does not mean exactly the same, so it does not capture information like frequency dispersion.
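A toy version of that cross-correlation idea (all signal parameters are invented test values; note that with a pure sine the delay is only defined modulo one period, so the lag scan is limited to less than one period):

```python
import math

fs = 10_000.0                          # sample rate, Hz (invented)
f, gain, delay = 400.0, 0.5, 0.0012    # DUT attenuates by 0.5 and delays 1.2 ms
N, W = 2000, 1000                      # record length and correlation window

a = [math.sin(2*math.pi*f*n/fs) for n in range(N)]                  # reference
b = [gain*math.sin(2*math.pi*f*(n/fs - delay)) for n in range(N)]   # "measured"

# cross-correlate over the window as a function of the lag tau (in samples)
period = int(fs/f)                     # 25 samples per carrier period
best_lag, best_val = 0, float("-inf")
for lag in range(period):
    val = sum(a[n]*b[n + lag] for n in range(W))
    if val > best_val:
        best_lag, best_val = lag, val

est_delay = best_lag/fs                                  # peak position -> delay
est_gain = best_val/sum(an*an for an in a[:W])           # normalized peak height
```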

No, I do believe that when working with numerical data coming from a sinusoidal reference signal, we're best off with the FFT I outlined.  I actually kinda like it more because it does not make any assumptions about the signal, which makes it easier to test for correctness, and measure actual accuracy et cetera.
 
The following users thanked this post: RoGeorge

Online Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11712
  • Country: my
  • reassessing directives...
are you going to measure S21 only? or need S11? if so you need directional coupler and deal with how to get -60dB or lower resolution... btw FFT is the easiest part... https://www.eevblog.com/forum/testgear/admittance-measurements-with-dso-awg-with-bode-function/msg4496851/#msg4496851
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): Its now indisputable that... organisms “expertise” contextualizes its genome, and its nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 

Online RoGeorge (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6701
  • Country: ro
This is the principle, talking about the reference signal first:



- the green trace, V(sin), is the reference signal from the AWG, the one that sweeps in frequency over time
- the yellow trace, V(cos), will have to be reconstructed in software.  This is possible from only one sample, and for each sample, because at any moment we know the value and the frequency of the reference signal.
- then each sample can be squared, to get the magenta and the grey traces, V(sin²) and V(cos²) respectively.  Note how each squared signal is also sinusoidal, always positive, at double the frequency of the original reference, and the two are in antiphase with each other.
- now if we add these two, we get a constant value proportional to the amplitude of the reference signal, and we can compute it for each and every sample, even at the moments when the reference signal (green trace) crosses through zero.

A single sample, even when zero, can tell the amplitude.  ;D



Now, can we do the same for the noisy signal to be measured, coming from an unknown circuit?  Let's assume at first that the unknown circuit is linear (LTI), so it can change the phase and/or the amplitude, but cannot introduce new harmonics:
- the measured signal from the unknown circuit will have noise, a different amplitude and a different phase than the reference signal
- however, the circuit doesn't know the difference between a sin and a cos, so it will alter either one with the same change in amplitude and in phase.  This means that we can produce (in software) a quadrature signal for the unknown measured signal, similar to what we did for the reference signal.

And here is a little stretch (more assumptions about the measured signal), trying to benefit from additional knowledge we have about the reference signal:
- we know the frequency at any moment
- we know sin and cos are in fact the same function, just shifted in time by 90 degrees (it's easy to observe this in the green and yellow traces)
- so to reconstruct the cos signal in software, we just copy the sin with a pi/2 delay.  This pi/2 delay varies with frequency, but the frequency is known at any moment relative to the start of the chirp; it's the same frequency as in the reference signal.

It looks like the same reconstruction should be possible for the measured signal, too, assuming the amplitude and phase don't change dramatically over a quarter of a period, which I think is safe to assume for a slow enough sweep.

TL;DR: the amplitude of the unknown signal can be deduced from a single sample (once we have acquired and post-processed enough samples along the frequency sweep, so we can copy the sample from pi/2 behind as the cos of the measured sin(x)).

It looks like there is no need for FFT, DFT or WT when the DUT is a linear circuit.  A variable time shift of pi/2 to approximate cos(x), then squaring the two samples for sin(x) and cos(x) and adding the squared values, gives the amplitude response at each sample, which can then be averaged to filter out part of the noise.

Should work.  Or am I missing something?
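For what it's worth, the pi/2-delay trick above can be simulated in a few lines (a slow linear chirp with made-up parameters; the "cos" sample is linearly interpolated from a quarter of the current period behind, as described):

```python
import math

fs = 100_000.0                     # sample rate, Hz (invented)
f0, f1, T = 100.0, 1000.0, 1.0     # slow linear sweep: 100 Hz -> 1 kHz in 1 s
k = (f1 - f0)/T                    # sweep rate, Hz/s
A = 0.8                            # the amplitude we try to recover

def phi(t):                        # chirp phase
    return 2*math.pi*(f0*t + 0.5*k*t*t)

n_samples = int(fs*T)
s = [A*math.sin(phi(n/fs)) for n in range(n_samples)]

# quadrature by "copying the sample from a quarter period behind":
# tau = 1/(4*f_inst); read s at t - tau with linear interpolation
errs = []
for n in range(n_samples//2, n_samples//2 + 200):   # a stretch mid-sweep
    t = n/fs
    tau = 1.0/(4.0*(f0 + k*t))         # quarter of the current period
    x = (t - tau)*fs
    i = int(x); frac = x - i
    c = s[i] + frac*(s[i + 1] - s[i])  # interpolated quadrature sample
    est = math.sqrt(s[n]**2 + c**2)    # sin^2 + cos^2 -> amplitude
    errs.append(abs(est - A))

worst = max(errs)                      # stays small for a slow enough sweep
```

The residual error grows with the sweep rate (the quarter-period approximation degrades), which matches the "slow enough sweep" caveat.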




The tricky part comes if the unknown circuit is non-linear.  That will add harmonics, intermodulation products, and so on.  The previous trick might not be applicable, but at least the fundamental frequency of all the harmonics is known.  Not sure what to do for the non-linear circuits.  I need to think more about it, and here it's already the early AM hours.
« Last Edit: November 10, 2022, 01:28:46 pm by RoGeorge »
 

Online Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11712
  • Country: my
  • reassessing directives...
..the one that sweeps in frequency over time..
ooh ok... thats complicated...
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): Its now indisputable that... organisms “expertise” contextualizes its genome, and its nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
a(t)-->f_LTI()-->b(t)

a(t)=k1*sin(w*t+p1)
b(t)=k2*sin(w*t+p2)

wanted: {k2/k1, (p1-p2)}

fft(a(t),w)={ f[-w,w] , phi[w] } = { k1/2, k1/2 }, { p1 }
fft(b(t),w)={ f[-w,w] , phi[w] } = { k2/2, k2/2 }, { p2 }

The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27856
  • Country: nl
    • NCT Developments
How should the ADC samples be post-processed to extract the spectrum, the amplitudes and the phases at any moment along the given frequency sweep, with the best precision and resolution?
As you know, the root is in Fourier analysis, which converts a signal from the time domain to the frequency domain.  Here, you will be using a discrete Fourier transform on the ADC samples to obtain the discrete spectrum.
For this kind of problem, FFT is likely not the best solution due to noise. I'd look at synchronous sampling / demodulation techniques that tend to cancel noise. Keep in mind that an FFT bin might not be an exact match for the frequency that is used for the measurement. With synchronous sampling / demodulation you can get much better filtering of a specific frequency compared to FFT (which is always a range of frequencies).

My approach would be to implement an IQ demodulator for both the stimulus and the resulting signal.  The IQ signal amplitudes from the stimulus give a phase/frequency error which can be used to adjust the LO frequency used for demodulation.  The adjusted LO frequency can then be used to demodulate the resulting signal into I and Q signals that provide the real and imaginary amplitudes (after a low-pass filter).
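A stripped-down sketch of the I/Q demodulation idea at a fixed frequency (no LO tracking; the frequencies and amplitudes are made-up test values, and the "low-pass filter" is simply an average over whole carrier periods):

```python
import math

fs, f = 50_000.0, 1000.0        # invented sample rate and stimulus frequency
A, phase = 0.6, 0.25            # the unknowns we want back
N = int(fs/f)*20                # average over 20 whole carrier periods

x = [A*math.sin(2*math.pi*f*n/fs + phase) for n in range(N)]   # "measured" signal

# multiply by a quadrature LO; averaging over whole periods is the low-pass
I = sum(x[n]*math.sin(2*math.pi*f*n/fs) for n in range(N))/N   # -> (A/2)cos(phase)
Q = sum(x[n]*math.cos(2*math.pi*f*n/fs) for n in range(N))/N   # -> (A/2)sin(phase)

est_A = 2.0*math.hypot(I, Q)
est_phase = math.atan2(Q, I)
```

The averaging is what gives the noise rejection: uncorrelated noise mixed with the LO averages toward zero as the window grows.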
« Last Edit: November 05, 2022, 02:06:46 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 
The following users thanked this post: RoGeorge

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6836
  • Country: fi
    • My home page and email address
Keep in mind that an FFT bin might not be an exact match for the frequency that is used for the measurement.
That's exactly why I mentioned using a parabola down to half the height of the peak in my first post.

Basically, the windowing function will always cause spectral leakage, so a sinusoidal signal will show up as a multiple-bin wide peak.  Instead of picking the frequency bin corresponding to the highest amplitude, you fit a parabola to the samples comprising the peak, \$y_i = a - b (i - i_0)^2\$, where \$i\$ is the sample index, and \$a\$, \$b\$, and \$i_0\$ are the variables you fit, using e.g. linear least squares.  \$a\$ is then the peak amplitude, \$i_0\$ (a real number, not necessarily an integer) is the peak position in fractional samples, and \$b\$ is related to the width of the peak.

For example, if you had a simple peak with amplitude \$a_0\$, with the preceding sample having amplitude \$a_{-1}\$ and the succeeding sample amplitude \$a_{+1}\$, both less than \$a_0\$, then the parabola fit says the actual peak is at offset \$x\$ (in bins) relative to the peak frequency bin,
$$x = \frac{a_{-1} - a_{+1}}{2 a_{+1} - 4 a_0 + 2 a_{-1}}$$
(and you expect \$-1 \lt x \lt +1\$); the peak amplitude itself is \$a\$,
$$a = a_0 - \frac{ \left( a_{-1} - a_{+1} \right)^2 }{ 8 a_{+1} - 16 a_0 + 8 a_{-1} }$$
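The two formulas can be checked mechanically against an exact parabola (the coefficients below are arbitrary test values):

```python
def parabolic_peak(am1, a0, ap1):
    """3-point parabola refinement of a peak: am1, a0, ap1 are the bin
    amplitudes below/at/above the peak bin.  Returns (x, a): the peak
    offset in fractional bins, and the refined peak amplitude."""
    denom = 2.0*ap1 - 4.0*a0 + 2.0*am1
    x = (am1 - ap1) / denom
    a = a0 - (am1 - ap1)**2 / (4.0*denom)   # same as dividing by 8ap1-16a0+8am1
    return x, a

# sanity check against an exact parabola y(i) = 1.0 - 0.2*(i - 0.3)^2,
# whose true peak is at offset 0.3 with amplitude 1.0
samples = [1.0 - 0.2*(i - 0.3)**2 for i in (-1, 0, 1)]
x, a = parabolic_peak(*samples)
```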

Do note that I am assuming we are working on already digitized data.  I'm pretty sure there is analog circuitry that can do better, too, especially because an 8-bit ADC causes quantization noise at about -48 dB; but I have no idea how to go about it that way, I can only help with the discrete math here.

(This peak fitting stuff is quite common in computational physics.)
« Last Edit: November 05, 2022, 03:09:18 am by Nominal Animal »
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15254
  • Country: fr
Yep.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2812
  • Country: nz
Have you considered passing the samples through a Hilbert transform filter to convert the real 8-bit samples to complex, then converting those values to polar form (e.g. atan2 and magnitude)?  The (smoothed) angular rate of change will indicate frequency; the smoothed magnitude will indicate amplitude.

One problem will be if the circuit under test has 'memory', i.e. the output depends on earlier inputs.  For example, a resonant circuit may keep on ringing, so you may get different results if you sweep up or down, or sweep slowly or quickly.
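For the curious, the Hilbert-transform route can be demonstrated with a DFT-based analytic signal (zero the negative-frequency bins, double the positive ones, inverse-transform).  This is a brute-force O(N²) pure-Python sketch with invented test values, not an efficient filter:

```python
import math, cmath

N = 256
A, f = 0.5, 20.0/256            # amplitude and normalized frequency (exact bin)
x = [A*math.cos(2*math.pi*f*n) for n in range(N)]

# analytic signal via the DFT: keep DC and Nyquist, double the positive
# bins, zero the negative ones, then inverse-transform
X = [sum(x[n]*cmath.exp(-2j*math.pi*k*n/N) for n in range(N)) for k in range(N)]
for k in range(N):
    if 0 < k < N//2:
        X[k] *= 2.0
    elif k > N//2:
        X[k] = 0.0
z = [sum(X[k]*cmath.exp(2j*math.pi*k*n/N) for k in range(N))/N for n in range(N)]

env = [abs(zn) for zn in z]                      # instantaneous amplitude, ~A
est_f = cmath.phase(z[100]/z[99])/(2*math.pi)    # angular rate of change -> f
```

In practice one would use an FIR Hilbert filter (as discussed later in the thread) rather than a full-record transform, but the recovered envelope and instantaneous frequency are the same quantities.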
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 
The following users thanked this post: RoGeorge

Online RoGeorge (Topic starter)

  • Super Contributor
  • ***
  • Posts: 6701
  • Country: ro
are you going to measure S21 only?

At first, yes, something like a Bode plot will be great, then add later more VNA-like functionality.




a(t)-->f_LTI()-->b(t)

a(t)=k1*sin(w*t+p1)
b(t)=k2*sin(w*t+p2)

wanted: {k2/k1, (p1-p2)}

fft(a(t),w)={ f[-w,w] , phi[w] } = { k1/2, k1/2 }, { p1 }
fft(b(t),w)={ f[-w,w] , phi[w] } = { k2/2, k2/2 }, { p2 }

Yes, something like that, except that I think I'm onto something new for this particular case, where we have a known reference sweep:  instead of taking an FFT, which can only be computed on batches of samples, it looks like with the method I've described above it would be possible to compute everything on the fly, sample by sample, with the extra advantages of filtering out noise, much narrower bins than an FFT, and no spectral leakage.




the windowing function will always cause spectral leakage, so a sinusoidal signal will show up as a multiple-bin wide peak

AFAIK an FFT can only be computed on batches of many samples, hence the need for windowing, too, and a limited number of bins.  That looks inescapable, but with the method I was describing in the attached plots before, there will be no need for windowing or binning.  Amplitude and phase can be computed on the fly, sample by sample (at least for LTI circuits).




Have you considered passing the samples through a Hilbert transform filter to convert the real 8-bit samples to complex, then converting those values to polar form (e.g. atan2 and magnitude)?  The (smoothed) angular rate of change will indicate frequency; the smoothed magnitude will indicate amplitude.

One problem will be if the circuit under test has 'memory', i.e. the output depends on earlier inputs.  For example, a resonant circuit may keep on ringing, so you may get different results if you sweep up or down, or sweep slowly or quickly.

My calculus skills don't include the Hilbert transform; I don't know what it does, or how to handle the math for it.  :-\

I'm not sure if my method still holds for something like a resonant circuit; it feels like it might still work, but I can't say for sure.  I'll have to think more about that and do some simulations to double check, thanks for pointing it out.

Offline gf

  • Super Contributor
  • ***
  • Posts: 1321
  • Country: de
I'm not sure if my method still holds for something like a resonant circuit

Generally, you can only capture the complete response of the DUT to a particular stimulus if the measurement interval is longer than the duration of the stimulus plus the duration of the DUT's impulse response (until it decays to the noise floor).
 
The following users thanked this post: RoGeorge

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
Have you considered passing the samples through a Hilbert transform filter to convert the real 8-bit samples to complex, then converting those values to polar form (e.g. atan2 and magnitude)?  The (smoothed) angular rate of change will indicate frequency; the smoothed magnitude will indicate amplitude.

a(t)-->f_LTI()-->b(t)

a(t)=k1*sin(w*t+p1)
b(t)=k2*sin(w*t+p2)

wanted: {k2/k1, (p1-p2)}

f(t): real ---> H(f(t)) ---> h(s): complex
w: real ---> s: complex
LP: low pass filter

Code: [Select]
get_sample(a(), b());
loop(i=0..n-1)
{
    LP(polar(H(a(t),w))) = polar(LP(cplx(a[s]))) = { mod(LP(cplx(a[s]))), ang(LP(cplx(a[s]))) } = { k1[i], p1[i] }
    LP(polar(H(b(t),w))) = polar(LP(cplx(b[s]))) = { mod(LP(cplx(b[s]))), ang(LP(cplx(b[s]))) } = { k2[i], p2[i] }
}
k1' = average_smoother(i=0..n-1){ k1[i] }
k2' = average_smoother(i=0..n-1){ k2[i] }
p1' = average_smoother(i=0..n-1){ p1[i] }
p2' = average_smoother(i=0..n-1){ p2[i] }

interesting approach  :D


One problem will be if the circuit under test has 'memory'

indirectly it's a good way to "test" the circuit memory.
If it has no memory, results are sweep-independent.
If it has memory, results are sweep-dependent

you invented a "memory meter" :o :o :o
(Ask for a patent!  ;D )
« Last Edit: November 05, 2022, 03:39:04 pm by DiTBho »
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Kleinstein

  • Super Contributor
  • ***
  • Posts: 14736
  • Country: de
The Hilbert transformation is a bit like an I/Q mixer, generating 2 new signals that can be interpreted as the real and imaginary parts at an intermediate frequency that can be chosen quite low (near zero).  One still has to choose a reasonable low-pass filter / filter window.

If one can limit the memory of the system to a small number of resonances at a time (or close to the frequency of interest), one could in theory still separate it in a relatively short time: one could do a curve fit to a part of the data with the stimulus frequency plus an additional exponentially decaying sine.  A simple system with just 1 resonant part gives an exponential decay; just the amplitude and phase are kind of unknown.  Especially for a high-Q system, one should get away with well less than a decay constant's worth of data.  Still, the curve fit needs a lot of computational power.  Once a resonant part is found, the frequency and approximate decay constant would be known, so later fits would not be that bad.  The parameters amplitude and phase can be transformed to sine and cosine parts, and thus to linear parameters that do not need an iterative fit.  The tricky step would only be finding a new resonance, if present in the frequency range.
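The linear (sine/cosine) reparameterization mentioned above, sketched for a plain sine at a known stimulus frequency (all test values are invented, and the decaying-exponential term is omitted for brevity):

```python
import math

fs = 10_000.0                  # sample rate, Hz (invented)
f = 250.0                      # known stimulus frequency
A_true, p_true = 0.8, 0.6      # the amplitude and phase to recover
N = 400

y = [A_true*math.sin(2*math.pi*f*n/fs + p_true) for n in range(N)]

# model y = a*sin(wt) + b*cos(wt): linear in a and b, so a direct
# (non-iterative) least-squares solve via the 2x2 normal equations
s = [math.sin(2*math.pi*f*n/fs) for n in range(N)]
c = [math.cos(2*math.pi*f*n/fs) for n in range(N)]
Sss = sum(si*si for si in s); Scc = sum(ci*ci for ci in c)
Ssc = sum(si*ci for si, ci in zip(s, c))
Sys = sum(yi*si for yi, si in zip(y, s))
Syc = sum(yi*ci for yi, ci in zip(y, c))
det = Sss*Scc - Ssc*Ssc
a = (Sys*Scc - Syc*Ssc)/det    # = A*cos(phase)
b = (Syc*Sss - Sys*Ssc)/det    # = A*sin(phase)

est_A = math.hypot(a, b)
est_p = math.atan2(b, a)
```

Adding the exponentially decaying sine of a known resonance keeps the problem linear in the same way, once its frequency and decay constant are fixed.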
 
The following users thanked this post: RoGeorge

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
I think this stuff can be evaluated with Octave (or Matlab, if you prefer).
I mean, before implementing in Python.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline 2N3055

  • Super Contributor
  • ***
  • Posts: 7208
  • Country: hr
FRAforPicoscope is open and has its full source code published.
Take a looksee.
Might explain a few things.
He used a Goertzel variant...
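For reference, a magnitude-only Goertzel looks like this (the generic textbook form, not FRAforPicoscope's actual code; the test values are invented):

```python
import math

def goertzel_mag(x, k):
    """Goertzel recurrence: magnitude of DFT bin k of x in a single pass,
    no FFT needed.  (The complex bin value takes one extra step; the
    magnitude alone is enough for an amplitude response.)"""
    N = len(x)
    coeff = 2.0*math.cos(2.0*math.pi*k/N)
    s1 = s2 = 0.0
    for xn in x:
        s0 = xn + coeff*s1 - s2
        s2, s1 = s1, s0
    return math.sqrt(s1*s1 + s2*s2 - coeff*s1*s2)

N, k, A = 512, 37, 0.9             # 37 whole cycles in the record (exact bin)
x = [A*math.cos(2*math.pi*k*n/N) for n in range(N)]
est_A = goertzel_mag(x, k)*2.0/N   # bin magnitude is A*N/2 -> amplitude
```

Since it evaluates only the one bin you care about, it fits a swept measurement well: one Goertzel per stimulus frequency, no full FFT.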
 
The following users thanked this post: RoGeorge

Offline hubi

  • Regular Contributor
  • *
  • Posts: 57
  • Country: us
My calculus skills don't include the Hilbert transform; I don't know what it does, or how to handle the math for it.  :-\
The Hilbert transform just performs a 90-degree phase shift without altering the amplitude, and eliminates the DC component.  It can be implemented as a Fourier filter, -i*sign(f).  It is difficult to construct compact time-domain convolution filters for the Hilbert transform because the filter coefficients decay slowly, as 1/t.  One approach I implemented a long time ago, for phase estimation from fringe-projection images, used anti-symmetric convolution masks with a step size of two, in combination with acausal recursive prefilters (the same filter as the B-spline transform, just a different coefficient).  This provides pretty good results for just a few filter coefficients over a wide frequency range, and the frequency range can be extended at the low end using Laplace pyramids.  This might be an efficient and accurate method to recover the phase and amplitude for your data.

Do you have an example data set for download somewhere? I'd be happy to give this algorithm a try on your data when I return from travel.
 
The following users thanked this post: RoGeorge

Offline balnazzar

  • Frequent Contributor
  • **
  • Posts: 417
  • Country: it
This discussion is very interesting and fun to read. Keep posting, folks  :)
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2812
  • Country: nz
Made a lame attempt at this....

The program generates 8M points of data for a linear chirp (in data[]), from 0.05*Fs to 0.45*Fs, with amplitude ramping down from 1.0 down to 0.5.

It then performs a Hilbert Transform to get x[] and y[], and prints out 200 samples from the start, center and end of the data, along with magnitude and angular frequency.

Attached graphs are 200 data points towards the end of the chirp, and the results of the analysis (mag ~= 0.5, frequency ~= 0.45 * Fs)

The interesting thing to note is that although the data points appear pretty jumbled, the resulting magnitude and frequency values match the parameters used to generate the data.


Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 
The following users thanked this post: RoGeorge, DiTBho

Offline hubi

  • Regular Contributor
  • *
  • Posts: 57
  • Country: us
Made a lame attempt at this....

The program generates 8M points of data for a linear chirp (in data[]), from 0.05*Fs to 0.45*Fs, with amplitude ramping down from 1.0 down to 0.5.

It then performs a Hilbert Transform to get x[] and y[], and prints out 200 samples from the start, center and end of the data, along with magnitude and angular frequency.

Attached graphs are 200 data points towards the end of the chirp, and the results of the analysis (mag ~= 0.5, frequency ~= 0.45 * Fs)

The interesting thing to note is that although the data points appear pretty jumbled, the resulting magnitude and frequency values match the parameters used to generate the data.
Seems to work pretty well, I think there is less than 1% error in the selected wave number range. The problem with this type of filter is ringing. I have attached a plot of the imaginary part of the transfer function. The ringing can be reduced with some filter optimization at the expense of the lower and upper cut-off frequency. The second plot shows an example for four filter coefficients, i.e., a 15 coefficient mask including the anti-symmetric half and the zeros.

 
The following users thanked this post: RoGeorge, DiTBho

Offline hubi

  • Regular Contributor
  • *
  • Posts: 57
  • Country: us
I took another look at hamster_nz's Hilbert filter. I think there is a small bug in the convolution loop that omits the last coefficient, but that makes little difference in the results. The error is a bit higher than I thought. I computed the amplitude and phase velocity errors outside of the boundary area and got
Code: [Select]
Max phase vel. error [degree]: 0.288886, std.dev: 0.142669
Max amp error: 0.015821, std.dev: 0.002622
I tested the same type of filter (anti-symmetric, step of two) with an optimized filter mask:
Code: [Select]
static double hilbert_kern_64[64] = {
    0.635616,     0.209204,     0.122352,     0.0840508,    0.0619826,    0.0473452,    0.0367619,
    0.0286638,    0.022225,     0.0169707,    0.0126088,    0.00894956,   0.00586386,   0.00325984,
    0.00106942,   -0.00076,     -0.00227075,  -0.00349816,  -0.00447281,  -0.00522194,  -0.00577038,
    -0.00614116,  -0.00635588,  -0.00643492,  -0.00639756,  -0.00626203,  -0.00604542,  -0.00576376,
    -0.00543187,  -0.00506332,  -0.00467047,  -0.00426433,  -0.00385457,  -0.00344956,  -0.00305631,
    -0.00268057,  -0.00232689,  -0.00199862,  -0.00169807,  -0.00142656,  -0.00118454,  -0.000971719,
    -0.000787179, -0.000629464, -0.000496718, -0.000386775, -0.00029733,  -0.000225911, -0.000170083,
    -0.000127445, -9.57244e-05, -7.27986e-05, -5.67725e-05, -4.59548e-05, -3.88904e-05, -3.43821e-05,
    -3.14574e-05, -2.93609e-05, -2.75466e-05, -2.56537e-05, -2.34504e-05, -2.08759e-05, -1.79182e-05,
    -1.46725e-05};
This provides more than two orders of magnitude in error reduction:
Code: [Select]
Max phase vel. error [degree]: 0.000847, std.dev: 0.000380
Max amp error: 0.000046, std.dev: 0.000007
The maximum amplitude error in both cases occurs at the low end for a normalized wave number of ~0.1, where the amplitude is highest (1). The same relative error also occurs at high wave numbers (~0.9), as the transfer function is symmetric w.r.t. half the Nyquist frequency.

The plot below shows the transfer function corresponding to the kernel above in magenta. Note the difference in scale compared to the previous plot from the truncated hyperbola. The plot also contains a second optimized Hilbert filter using an acausal recursive pre-filter. Using this technique, more compact filters with less error and/or a wider frequency range can be constructed. There are limits to this and the recursive pre-filter will become unstable if one attempts to push the upper cut-off too close to the Nyquist frequency. But the lower cut-off is probably of more concern for practical applications. This end can be extended using a band decomposition and subsampling (Laplace pyramid). Also worth mentioning is that the Hilbert filter itself can be used to eliminate DC bias by applying it twice to create the quadrature pair.
« Last Edit: November 07, 2022, 09:23:33 am by hubi »
 
The following users thanked this post: hamster_nz, RoGeorge, DiTBho

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2812
  • Country: nz
I took another look at hamster_nz's Hilbert filter. I think there is a small bug in the convolution loop that omits the last coefficient, but that makes little difference in the results.

Yes, it was a bug/oversight - it should be 

Code: [Select]
      for(int j = -127; j <= 128; j+=2) {

I'm quite pleased that even my lame attempt actually looks like a possible technique for 8-bit data, as I've occasionally pondered recovering amplitude and frequency from time-series data, but never acted on it until now....

One other thing I have no idea on is how the sweeping of frequency will upset things.
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 
The following users thanked this post: DiTBho

