Author Topic: Extract precise amplitude and phase from a frequency sweep (VNA from DSO+AWG)  (Read 7453 times)


Online Marco

  • Super Contributor
  • ***
  • Posts: 6880
  • Country: nl
What is actually the advantage of chirp over white noise or step response measurements?

If you actually wanted to estimate the nonlinear behaviour of the system, I doubt you can beat white noise.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15095
  • Country: fr
What is actually the advantage of chirp over white noise or step response measurements?

If you actually wanted to estimate the nonlinear behaviour of the system, I doubt you can beat white noise.

Good question.

One point is that you can fully control the bandwidth of a chirp signal. With white noise, that's harder: you need very good filtering while still keeping the spectrum as flat as possible.
Also, generating white noise digitally requires a very good pseudo-random generator. It's not rocket science, but something to keep in mind.
 

Offline switchabl

  • Frequent Contributor
  • **
  • Posts: 445
  • Country: de
White noise has a high crest factor (actually infinite for ideal white noise). So at a fixed amplitude level, a chirp stimulus has significantly higher power and will give a better SNR.

I would recommend an FFT approach for analyzing the chirp response though. With some care (continuous chirp with correct periodicity) the FFT can be done without windowing. The method discussed above (based on Hilbert transform) will almost certainly lead to artefacts for some DUTs; for example consider what happens when sweeping quickly across a high-Q resonance.
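A minimal numpy sketch of that windowless FFT approach (block size, sweep range, and the one-pole low-pass standing in for the DUT are all illustrative, not from this thread): the chirp is built so that a whole number of cycles fits in one block, so its repetition wraps cleanly and no window function is needed; dividing the output spectrum by the input spectrum then gives the complex response directly.

```python
import numpy as np

fs, N = 48_000, 4096            # sample rate and FFT/block size (illustrative)
t = np.arange(N) / fs
df = fs / N

# Linear chirp that is exactly periodic in N samples: it sweeps from bin 10
# to bin 400, and (10 + 400) / 2 = 205 whole cycles fit in one block, so the
# phase wraps cleanly at the block boundary.
f0, f1 = 10 * df, 400 * df
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 * fs / (2 * N))
x = np.sin(phase)

# Stand-in DUT: a one-pole IIR low-pass (purely illustrative).
a = 0.9
def dut(sig):
    y = np.empty_like(sig)
    acc = 0.0
    for i, s in enumerate(sig):
        acc = a * acc + (1 - a) * s
        y[i] = acc
    return y

# Drive the DUT with two periods and keep the second, so the response is the
# (circular) steady state; then one un-windowed FFT per signal suffices.
y = dut(np.tile(x, 2))[N:]
H_est = np.fft.rfft(y) / np.fft.rfft(x)      # complex response; bins 10..400 valid

# Analytic response of the stand-in DUT, for comparison at one bin.
k = 100
w = 2 * np.pi * k / N
H_ref = (1 - a) / (1 - a * np.exp(-1j * w))
```

Because the excitation is periodic and measured in steady state, `H_est` recovers both magnitude and phase at every excited bin without any windowing correction.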
« Last Edit: November 21, 2022, 09:16:40 pm by switchabl »
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 4182
  • Country: gb
So. Step back to fft
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6880
  • Country: nl
White noise has a high crest factor (actually infinite for ideal white noise).

With digitally generated excitation it can be coloured a bit; you are going to be dividing the output by the input spectrum anyway to recover phase, so there is no need for perfection. With real-FFT based processing to determine a linear response, the stimulus can be anything wideband, chirp or noise, white or slightly coloured: it all works.
 

Offline switchabl

  • Frequent Contributor
  • **
  • Posts: 445
  • Country: de
Yes, basically anything that has sufficient spectral content in the frequency range of interest will work. But as a rule of thumb, at a fixed amplitude, a chirp stimulus will improve SNR by at least 6 dB over a noise stimulus (ideal or not). Put another way, you could increase the sweep speed by a factor of 4 with the same result. And speed is usually a major concern if you implement FRA like this, because otherwise you could just use a stepped sweep with a frequency-selective detector.
 
The following users thanked this post: DiTBho

Online Marco

  • Super Contributor
  • ***
  • Posts: 6880
  • Country: nl
Wait a minute ... doesn't uniform white noise have a lower crest factor than a sine?
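For reference, the crest factors (peak over RMS) can be checked numerically; a standalone sketch, with arbitrary signal lengths and seed:

```python
import numpy as np

rng = np.random.default_rng(0)

def crest_factor(x):
    """Peak amplitude divided by RMS."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x**2))

n = 1_000_000
sine    = np.sin(2 * np.pi * np.arange(n) * 0.01)   # whole number of periods
uniform = rng.uniform(-1.0, 1.0, n)                 # uniformly distributed noise
gauss   = rng.standard_normal(n)                    # Gaussian white noise
```

A sine comes out at sqrt(2) ~ 1.41 (3 dB), uniform noise at sqrt(3) ~ 1.73 (4.8 dB), and for Gaussian noise the observed peak keeps growing with record length.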
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6713
  • Country: fi
    • My home page and email address
A sidestep:

Interestingly, if one was going to use sine-squared shaped pulses at a fixed frequency \$f\$ for a duration \$T\$,
$$A(t) = \sin\left(\frac{\pi t}{T}\right)^2 \sin(2 \pi f t)$$
then each pulse would be a single FFT window, and there would be much fewer details to consider.  In particular, its Fourier transform is known in algebraic form,
Code: [Select]
# f = signal frequency
# T = Pulse duration
# Signal = sin(%pi*t/T)^2 * sin(2*%pi*f*t)
# Fourier transform FT, with F as the frequency parameter,
# as a function of the signal duration T and the original frequency f:
FT(F,T,f).real = ( (f-F) * (T*f - F*T - 1) * (T*f - F*T + 1) * cos(2*%pi*T*(f+F) )
                 + (f+F) * (T*f + F*T - 1) * (T*f + F*T + 1) * cos(2*%pi*T*(f-F) )
                 + 2*f * (1 - T^2*(f^2 + 3*F^2))
                 ) / ( 8 * %pi * (f-F) * (f+F) * (T*f - F*T - 1) * (T*f - F*T + 1) * (T*f + F*T - 1) * (T*f + F*T + 1) );
FT(F,T,f).imag = ( (f+F) * (T*f + F*T - 1) * (T*f + F*T + 1) * sin(2*%pi*T*(f-F))
                 - (f-F) * (T*f - F*T - 1) * (T*f - F*T + 1) * sin(2*%pi*T*(f+F))
                 ) / ( 8 * %pi * (f-F) * (f+F) * (T*f - F*T - 1) * (T*f - F*T + 1) * (T*f + F*T - 1) * (T*f + F*T + 1) );
which means that if one samples the signal generator directly, the signal generator itself can be characterized (at that fixed frequency) by comparing the computed FT and the sampled FFT.

Of course, because it is a single fixed-frequency pulse, it is no replacement for e.g. chirps (frequency sweeps) at all.  Where it might be useful is when a system exhibits complex behaviour, e.g. shifts the input frequency, modulates it, or generates odd harmonics from it; this gives a simple tool for examining suspicious frequencies where a sweep or other analysis indicates something funny may be going on.

I don't know if this is really actually useful, though; I only think it is interesting!
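A quick numpy sanity check of the "single pulse = single FFT window" idea (sample rate, pulse frequency, and duration are illustrative): the sin² envelope confines essentially all of the energy to a narrow band around f, with no window function applied.

```python
import numpy as np

fs = 48_000
f, T = 1_000.0, 0.1                      # pulse frequency and duration (illustrative)
t = np.arange(int(T * fs)) / fs
pulse = np.sin(np.pi * t / T)**2 * np.sin(2 * np.pi * f * t)

# Surround the pulse with silence; the sin^2 envelope is the only "window".
sig = np.concatenate([np.zeros(2400), pulse, np.zeros(2400)])

spec = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), 1 / fs)
peak = freqs[np.argmax(spec)]            # spectral peak, expected near f
```

The smooth envelope makes the spectral sidelobes fall off very quickly, which is exactly why no extra windowing or compensation is needed.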
 
The following users thanked this post: RoGeorge, DiTBho

Online RoGeorgeTopic starter

  • Super Contributor
  • ***
  • Posts: 6579
  • Country: ro
Had to put aside this topic lately, but kept reading and learning and thinking about it.  This weekend stumbled upon these two videos that opened a new perspective about FFT, and convolution, and many other things that clicked together.  Link to them (again) for the docs.

The Fast Fourier Transform (FFT): Most Ingenious Algorithm Ever?
Reducible


But what is a convolution?
3Blue1Brown



In terms of FFT speed, I tried an FFT on 24 million samples and it is almost instant (about 1.3 seconds), considering that the download alone of the ADC samples takes about 20-30 seconds.
Code: [Select]
$ ipython
Python 3.10.6 (main, Nov  2 2022, 18:53:38) [GCC 11.3.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.5.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import numpy as np
In [2]: fake_samples = np.random.random(24_000_000)
In [3]: %%timeit
   ...: fft_samples = np.fft.fft(fake_samples)
1.32 s ± 4.61 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
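Side note: since ADC samples are real-valued, np.fft.rfft computes only the nonnegative-frequency bins and is typically about twice as fast as the full complex FFT; a quick sketch:

```python
import numpy as np

x = np.random.random(1 << 20)            # real-valued samples, like ADC data

full = np.fft.fft(x)                     # N complex bins, half of them redundant
half = np.fft.rfft(x)                    # only the N//2 + 1 nonnegative bins

# For real input the negative-frequency bins are just complex conjugates of
# the positive ones, so rfft returns the same information in half the space.
```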

Online RoGeorgeTopic starter

  • Super Contributor
  • ***
  • Posts: 6579
  • Country: ro
if one was going to use sine-squared shaped pulses at a fixed frequency \$f\$ for a duration \$T\$,
$$A(t) = \sin\left(\frac{\pi t}{T}\right)^2 \sin(2 \pi f t)$$
then each pulse would be a single FFT window, and there would be much fewer details to consider.

Had to draw that, then it became clear:  sin² is itself a sinusoid at twice the frequency plus a DC component equal to the amplitude of the sinusoidal part, so A(t) is a sinusoidal carrier 100% amplitude-modulated by another sinusoid, like the blue plot.
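That is just the identity sin²x = (1 - cos 2x)/2, a DC term of one half plus a cosine of amplitude one half at twice the frequency; a one-line numerical check:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 1001)
envelope = np.sin(t)**2
raised_cosine = 0.5 * (1.0 - np.cos(2.0 * t))   # DC 0.5 plus cosine at twice the frequency
```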



I didn't understand the second part.  How to use this particular arrangement, or what would be its strong points?
« Last Edit: November 22, 2022, 07:36:36 pm by RoGeorge »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6713
  • Country: fi
    • My home page and email address
How to use this particular arrangement, or what would be its strong points?
You send a single pulse of duration T, preceded and followed by silence.  Note that T >> 1/f; at least a few hundred full cycles of f.
You capture all the data, delineated by the silence.
(You wait until the DUT no longer produces any output signal before generating the next pulse at a new frequency f.)

The benefit is that a single FFT over the entire signal describes the signal content.  We know its Fourier transform exactly in analytic form (and can map it to the expected FFT, assuming a perfect digital->analog->digital round trip).  Comparing it to the generated signal (no DUT) tells us how the signal generation deviates from the expected.  Comparing it to the measured signal tells us what the DUT did to it.

Because there is no windowing (other than the sin² envelope shaping that is part of the original signal), you can pad the "window" with zeroes to get whatever FFT size you want, or just use a single window whose size matches the duration between silences.  No mathematical compensation is needed (except perhaps for converting the continuous Fourier transform to a discrete one via integration, but that can be done algebraically): no numerical compensation, no convolution.  Examination purely in the frequency domain.
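The procedure can be sketched in numpy; the one-pole low-pass DUT, sample rate, and pulse parameters below are stand-ins for illustration, not anything measured in this thread:

```python
import numpy as np

fs = 48_000
f, T = 1_000.0, 0.1                          # T >> 1/f: 100 full cycles here
t = np.arange(int(T * fs)) / fs
pulse = np.sin(np.pi * t / T)**2 * np.sin(2 * np.pi * f * t)

# Silence before and after, long enough for the DUT output to die out.
pad = np.zeros(int(0.05 * fs))
x = np.concatenate([pad, pulse, pad])

# Stand-in DUT (illustrative): a one-pole IIR low-pass.
a = 0.9
y = np.empty_like(x)
acc = 0.0
for i, s in enumerate(x):
    acc = a * acc + (1 - a) * s
    y[i] = acc

# One FFT over each whole record, silence to silence; no window function.
X, Y = np.fft.rfft(x), np.fft.rfft(y)
k = round(f * len(x) / fs)                   # bin at the pulse frequency
response = Y[k] / X[k]                       # complex DUT response at ~1 kHz
```

Comparing `Y` to `X` bin by bin shows everything the DUT did: attenuation and phase at f, plus any energy it moved to other frequencies.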

The downside is that it tells how the DUT reacted to (this sort of a pulse at) a single, specific frequency, on the input.  The upside is that even if the DUT shifts the signal frequency, or generates some weird fractional harmonics of it, and does not pass any of the signal through at the original frequency, all of this shows up perfectly in the FFT.  So, it is not about detecting amplitude or phase change in the input signal, it is about detecting the spectral effects caused by the DUT to a particular frequency.

For example, it might be interesting to do such a test with f = 50 Hz or 60 Hz, to examine exactly what happens if mains frequency gets coupled to the input of the device under test.  Or, if the device has a suspicious dip or spike in its amplitude response in a frequency sweep, you might do this to examine that frequency in more detail.

Anyway, do remember that I only do data, not RF equipment in the real world!  I'm most comfortable when trying to find interesting stuff in data without any preconceptions or expectations.
« Last Edit: November 22, 2022, 09:02:35 pm by Nominal Animal »
 
The following users thanked this post: RoGeorge

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 4182
  • Country: gb
convolution(a(t), b(t)) in the time domain is equivalent to multiply(A(s), B(s)) in the complex frequency domain, and vice versa.

A(s) = Laplace_transform(a(t))
B(s) = Laplace_transform(b(t))
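The discrete analogue is easy to verify with numpy (using the DFT instead of the Laplace transform, with zero-padding so circular convolution matches linear convolution):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(64)
b = rng.standard_normal(64)

# Convolution in time equals multiplication in frequency; pad both FFTs to
# the full linear-convolution length so the circular wrap-around vanishes.
n = len(a) + len(b) - 1
direct  = np.convolve(a, b)
via_fft = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)
```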
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 4182
  • Country: gb
You send a single pulse of duration T, preceded and followed by silence

ideally a pure Dirac pulse  :D
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6713
  • Country: fi
    • My home page and email address
convolution(a(t),b(t)) in time domain is equivalent to multiply(A(s),B(s)) in complex frequency domain, and vice versa.
Sure.  Thing is, if you have a chirp, a frequency sweep, and you window it (with overlapping windows) with a suitable size, and do an FFT of that, you need to account for the effect of the windowing function on the computed FFT, to get a representation of the original signal.

Even then, that FFT is a frequency-domain representation of the time-domain signal during that time window.  Because the frequency changes continuously in the time domain, the frequency domain magnitude spike is always spread out a bit.  (It is also spread out a bit due to the windowing et cetera, but the changing frequency exacerbates that.)  If the rate of frequency change is sufficiently low compared to the FFT size, then the effect is small.

One can take the windowed FFT centered at any point in time, even from consecutive samples.  But even then, the FFT represents the frequency domain of the signal within the FFT time window, not at a particular sample.

The question is, what does one want to find out?  FFT is a poor tool if you want to measure e.g. attenuation or phase change of a known signal, but a good tool if you are interested in the frequency spectrum of the measured signal.

Constructing a shaped pulse of a specific frequency yields an analytically known easily calculated frequency spectrum.  The pulse is self-windowing, so that taking the FFT of the measured signal in one chunk, from silence before to silence after, yields directly the frequency spectrum, exactly because we are not interested in any time-dependent variance in it.  The two can be compared one-to-one, without any convolution needed (to undo the effect of a windowing function).  And taking the FFT of the real-world generated signal, and comparing that to the computed Fourier transform, tells how well the signal generation and capture matches the expected; basically gives a simple test/calibration approach, albeit at a single frequency per test pulse.

Mathematically this is rather lightweight, because the Fourier transform can be calculated as needed (it does not need to be precalculated or stored).

Convolution is just one (complex) multiplication sum per FFT bin for each FFT, so it isn't that costly; that's not the issue.  The issue is its practical effect on the FFT.  Consider input time-domain data consisting of 8-bit samples.  The quantization noise creates a noise floor at about 44-48 dB (it is more or less flat, similar to white noise).  If you then apply any kind of convolution to compensate for the effect of the windowing function on the spectrum, it will apply to the quantization noise as well.  Then, if your compensated samples are mapped back to integers, you have a second set of quantization errors, except this time in the frequency domain (as the convolution then does not exactly match the intended one).  It gets quite fuzzy fast, so omitting the entire step can be a big win in the frequency domain accuracy.

(Edited to add the missing crucial sum in the above paragraph.)



Now, it is important to realize that any signal can be reconstructed from its Fourier transform.  This means that if you do an FT over a single, complete chirp, the FT is a complete description of it: it does describe how it progresses through the frequency, and you can reproduce the original time-domain signal using the FT.  (I do believe this is also algebraically known, if the frequency changes during the chirp in an easily described manner, say linearly or exponentially.)

It is only that it is difficult to see or determine how the time-domain signal evolved during the FFT window from the FFT data itself.
(I don't know if it is even possible in any other way except actually reconstructing the time-domain signal, and observing its characteristics.)

Because of this, a single FFT over an entire chirp, compared to the computed one (or to one measured at the signal generator), does describe the full frequency-domain response.  It is, however, very difficult to say which extraneous frequencies were caused by which input frequencies, because that information, although embedded within the FFT (since the FFT can reconstruct the original signal), is difficult if not impossible to extract directly from the FFT itself.

To recover that, the measured signal is windowed into many overlapping windows, with the window length corresponding to the lowest frequency one is interested in (taking into account the effect of the windowing function on the resulting FFT), and an FFT is taken of each window separately.
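That overlapping-window scheme is a plain short-time FFT; a minimal sketch (window size, hop, and sweep parameters are illustrative), where the peak bin of each frame tracks the instantaneous frequency of the chirp:

```python
import numpy as np

fs, n_fft, hop = 48_000, 1024, 512          # 50% overlapping windows
f0, f1, dur = 500.0, 5_000.0, 1.0           # linear sweep 500 Hz -> 5 kHz
t = np.arange(int(dur * fs)) / fs
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * dur)))

window = np.hanning(n_fft)                  # windowing function per slice
frames = []
for start in range(0, len(chirp) - n_fft, hop):
    seg = chirp[start:start + n_fft] * window
    frames.append(np.abs(np.fft.rfft(seg)))

# The strongest bin of each frame follows the sweep through time.
peak_hz = np.array([np.argmax(fr) for fr in frames]) * fs / n_fft
```

Each frame trades frequency resolution (one bin is fs/n_fft, about 47 Hz here) for the time localisation that the single whole-record FFT cannot give.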

What I found interesting, and why I posted these few messages, is that with a different signal, a modulated constant-frequency pulse, there would be no need either for windowing (because the signal itself is naturally "one window") or for compensating for windowing-function effects.  It would reveal details for only a very narrow slice of the entire possible input frequency range, but what it revealed would be a clear, straightforward, standalone description.

Now, I just hope I didn't waste anyone's time reading this, because while I find it interesting, I haven't checked it out in practice; I have only done back-of-the-envelope calculations and estimates to verify that it is indeed interesting, and not just a random idea that popped into my mind.  Hopefully the above explains what I found interesting in it, and why.
« Last Edit: November 24, 2022, 02:53:37 pm by Nominal Animal »
 
The following users thanked this post: DiTBho

Online Marco

  • Super Contributor
  • ***
  • Posts: 6880
  • Country: nl
So how is this for an actual procedure?

Determine a 2^n block size that the generator supports and a 2^(n+1) size that the digitizer supports. Fill a spectrum with unity magnitudes and pseudo-random phases (periodic random noise). Do a real iFFT and scale to fill the generator's dynamic range. Generate and digitize, do a real FFT, ignore the odd frequencies and subtract the pseudo-random phases.

The larger digitizer capture range gives some realistic wiggle room for delay, without a true steady state excitation.
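A numpy simulation of that procedure (block size and seed are arbitrary, and a plain wire stands in for the DUT): the stimulus is periodic in n samples, so in a 2n-sample capture its energy lands only on the even bins, and dividing by the known input spectrum recovers the response.

```python
import numpy as np

n = 4096                                   # generator block size (a power of two)
rng = np.random.default_rng(42)

# Unity-magnitude spectrum with pseudo-random phases ("periodic random noise").
X = np.exp(1j * rng.uniform(0, 2 * np.pi, n // 2 + 1))
X[0] = 0.0                                 # no DC in the stimulus
X[-1] = 1.0                                # Nyquist bin must be real

x = np.fft.irfft(X, n)                     # real iFFT -> one time-domain block
scale = 0.99 / np.max(np.abs(x))           # fill the generator's dynamic range
x *= scale

# "Digitize" 2n samples: with a wire for a DUT the capture is two periods.
Y = np.fft.rfft(np.tile(x, 2))

# Stimulus energy sits only on the even bins of the double-length FFT;
# dividing by the known input spectrum gives the (flat, here) response.
H = Y[::2][1:] / (2 * scale * X[1:])
```

With a real DUT in the loop, `H` would carry its magnitude and phase response, and any energy on the odd bins would flag non-periodic behaviour (noise, drift, distortion products off the excitation grid).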
« Last Edit: November 23, 2022, 09:08:13 pm by Marco »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6713
  • Country: fi
    • My home page and email address
So how is this for an actual procedure?
You'll find out how the device under test shapes white noise.

Thing is, you do not find out what the device does as a response to any particular frequency input signal.

A chirp –– a frequency sweep –– does this in a continuous manner, basically covering all frequencies within the sweep range.  It is more difficult to analyse, because of how the time domain and frequency domain are intertwined, but windowing short durations of the sweep, and doing an FFT on that window, gives a good compromise.

My single-frequency sine-squared-shaped pulse suggestion eliminates the windowing part, but at the cost of only examining a single input frequency, and having to compute quite large FFTs (whole signal at once).



Right now, I'm wondering if visual comparison between algebraic Fourier transforms of sinusoidal signals, and their sampled (discrete) FFTs, might be useful.  I could create a standalone HTML (similar to the FIR filter analysis page) that lets one define a signal in algebraic terms –– as a function ––, and have the page calculate both the algebraic Fourier transform, as well as the FFT of the same from the sampled function.  One is the ideal, the other simulates the real-world thing.  Looking at things like quantization (limiting to N-bit precision), sampling jitter, and various sampling delays (in the sub-sample range), might be quite illuminating.

I just don't really want to write an FFT in Javascript (specifically, for real data), and using an existing library would make it non-standalone (unless one includes the library source in the HTML).
 

