Author Topic: Pruning bits in digital downconverter (DDC)  (Read 40550 times)


Offline mtwieg (Topic starter)

  • Regular Contributor
  • *
  • Posts: 190
  • Country: us
Pruning bits in digital downconverter (DDC)
« on: April 01, 2024, 01:46:51 pm »
Was reviewing DSP and DDC design literature, and couldn't find answers to some basic questions...

Let's say I have to implement a direct-conversion DDC, like in the attached diagram. The ADC has 12 bits, and I mix it with a quadrature DDS/NCO (assume it also has 12 bits for now). I then want to decimate by R=64 with a CIC filter (second order). Ultimately I go from one 12-bit input to two 36-bit outputs. But most of those extra bits are meaningless; the only processing gain I have comes from my decimation factor, which gives me log2(R)/2 = 3 extra bits, so at most I should only need 15 bits on the baseband outputs.
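For reference, here's how I'm getting those numbers (just the standard CIC bit-growth formula Bout = Bin + N*log2(R) plus the log2(R)/2 processing-gain rule, in a few lines of Python):

Code:
import math

adc_bits = 12        # ADC resolution
nco_bits = 12        # DDS/NCO output resolution (same as the ADC for now)
cic_order = 2        # N, second-order CIC
R = 64               # decimation factor

mixer_bits = adc_bits + nco_bits                             # full-precision product: 24 bits
cic_out_bits = mixer_bits + cic_order * int(math.log2(R))    # CIC bit growth: 24 + 2*6 = 36 bits
gain_bits = math.log2(R) / 2                                 # processing gain from decimation: 3 bits
useful_bits = adc_bits + gain_bits                           # ~15 bits of real resolution at baseband

print(mixer_bits, cic_out_bits, gain_bits, useful_bits)      # -> 24 36 3.0 15.0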

I could also implement something like Hogenauer pruning to reduce the resources for the CIC filters.

But even then, I feel like there's a lot of unnecessary fat which could be cut. The multiplier outputs are 24 bits, even though there should only be about 12 bits of relevant data at that point, right? This is where I was expecting my references to give some rules of thumb on choosing a resolution for the DDS, or on truncating the multiplier outputs, but I haven't found anything convincing, and the simple simulations I'm doing aren't giving a clear picture.

1. Can I just truncate the mixer outputs to the same number of bits as the ADC? My gut tells me yes, if I'm just going to truncate that many bits on the CIC output anyways.
2. What is the impact of DDS resolution on overall performance?
3. Even if I reduced things to 15 bits on each baseband output, that's still 30 bits total, despite those two signals effectively representing one complex signal with an ENOB of 15 bits or less. It still sounds like there's some waste there.
 
« Last Edit: April 01, 2024, 02:01:27 pm by mtwieg »
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 3609
  • Country: ua
Re: Pruning bits in digital downconverter (DDC)
« Reply #1 on: April 02, 2024, 01:01:37 am »
But most of those extra bits are meaningless; the only process gain I have is from my decimation factor which gives me log2(R)/2=3 extra bits, so at most I should only need 15 bits on the baseband outputs.

No, they are not meaningless; they determine your DSP dynamic range.
The noise floor and the level of unwanted spurious artifacts depend on these bits.

The multiplier outputs are 24 bits, even though there should only be about 12 bits of relevant data at that point, right?

No.

1. Can I just truncate the mixer outputs to the same number of bits as the ADC? My gut tells me yes, if I'm just going to truncate that many bits on the CIC output anyways.

No, you can't do that without signal loss.

Let's say you have two int32_t variables. Can you put the result of multiplying them into another int32_t variable?
If you do, the signal will be clipped because there isn't enough resolution.
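For example (a quick Python illustration of the point that an N-bit by N-bit product needs about 2N bits to hold; the numbers are just my own demo):

Code:
import numpy as np

a = 2**31 - 1                         # largest positive int32 value
product = a * a                       # Python ints don't overflow, so this is the exact result
print(product.bit_length())           # 62 -> the product needs roughly twice the input width

wrapped = np.int32(a) * np.int32(a)   # the same multiply forced back into 32 bits
print(int(wrapped))                   # wraps around (numpy typically warns about the overflow)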

In practice, how much you can truncate depends on your dynamic range requirements.
For example, there is not much sense in having a large dynamic range with a poor filter, because the signal will be flooded with image artifacts due to the bad filtering quality anyway.

For a 12-bit ADC I'm using a 32-bit DDS and a mixer with a 24-bit output, and I round when truncating bits. I still see some image artifacts, because my CIC+FIR filter has some gaps where the rejection is only about 100 dB, but those gaps are very small, so they don't add many artifacts. I'd like to do better, but that requires replacing the CIC filter with something better. It would be nice to use a 1024-tap FIR filter at the full ADC rate, but unfortunately FPGA resources are limited.

2. What is the impact of DDS resolution on overall performance?

DDS resolution defines the level of unwanted spurious components and the noise floor.
Higher DDS resolution means less noise, fewer spurs and fewer DSP artifacts.

3. Even if I reduced things to 15 bits on each baseband output, that's still 30 bits total, despite those two signals effectively representing one complex signal with an ENOB of 15 or less bits. Still sounds like there's some waste there.

There is no way to cut off bits without losing signal quality.
Any bit reduction leads to higher noise and more DSP artifacts.

Also note that simply truncating bits adds extra noise due to rounding errors.
If you want to truncate bits, you need to round: correct the last remaining bit according to the value of the bits you are cutting off. That reduces the rounding error and so minimizes the noise you add with the truncation.
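That "correct the last remaining bit" step is usually just adding half of the new LSB before dropping bits. A minimal Python sketch (plain round-half-up; the function names are mine):

Code:
def truncate(x, k):
    """Drop the k LSBs of integer x by plain truncation (floor)."""
    return x >> k

def round_then_truncate(x, k):
    """Drop the k LSBs, but first add half of the new LSB so the last
    remaining bit is corrected by the discarded bits (round half up)."""
    return (x + (1 << (k - 1))) >> k

for v in (0x123, 0x128, 0x12F):     # drop 4 bits from a few example values
    print(hex(v), truncate(v, 4), round_then_truncate(v, 4))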

The bit resolution you need depends on your requirements for DSP quality and noise figure.
If you want the best signal quality at the output, use as many bits as your hardware allows.  :)
« Last Edit: April 02, 2024, 01:38:08 am by radiolistener »
 

Offline BrianHG

  • Super Contributor
  • ***
  • Posts: 7867
  • Country: ca
Re: Pruning bits in digital downconverter (DDC)
« Reply #2 on: April 02, 2024, 01:26:45 am »
At your ADC input, was the source band filtered and put through an AGC stage?

This may allow you to shave bits, as you know your output will be close to the maximum gain.

Again, if you are analyzing the source ADC input, like scanning for hidden signals throughout a really wide band, then you will want all the bits possible at your output to minimize noise.  Especially if you are trying to properly decode a super dense signal like 256QAM.  With something like 16QAM, even 24 bits could suffice with all other things like AGC on the analog front-end being implemented.

Now, how many DSP blocks are you really trying to save?
Most FPGAs will implement the same number of DSP blocks when going from 26 bits through 36 bits...
To really save on DSP blocks, you would need to drop to 18bit multiply-accumulate.
Or, if you only need a 50MHz sampling rate, you can run an 18x18 DSP block multiple times to attain 36 bits, but you still need the 36 bits of registers and then some more for the accumulate.  You will also need to code your own functions.

Check the available settings for your DSP function blocks; I know Altera Quartus offers a speed vs. area parameter between 0 and 10 which can shave off DSP blocks at the expense of the top FMAX.  They also offer a pipeline option which can also affect the maximum logic cell vs. number-of-DSP-blocks tradeoff at various bit widths.

 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3805
  • Country: us
Re: Pruning bits in digital downconverter (DDC)
« Reply #3 on: April 02, 2024, 02:20:14 am »
But even then, I feel like there's a lot of unnecessary fat which could be cut. The multiplier outputs are 24 bits, even though there should only be about 12 bits of relevant data at that point, right? This is where I was expecting my references to give some rules of thumb on choosing a resolution for the DDS, or on truncating the multiplier outputs,

My rule of thumb is to not be stingy with bits :) 

Yes, your multiplier output only has about 12 significant bits.  But in most cases digital bits are cheap and actual signal is precious, so you don't really want to push the limit here. You don't need the full 24 bits necessarily, but maybe keep 16?  I would also use more than 12 bits on your mixer input if possible. In an FPGA both the multipliers and the RAM blocks have fixed sizes, so there is no reason to economize unless it gets you past a magic number.

 

Offline radar_macgyver

  • Frequent Contributor
  • **
  • Posts: 711
  • Country: us
Re: Pruning bits in digital downconverter (DDC)
« Reply #4 on: April 02, 2024, 04:35:20 am »
When you truncate a word, you will introduce a bias of 0.5 LSB. This is not noticeable on its own, but if you cascade a low-pass filter after the truncation, the 0.5 LSB can become significant. It manifests as a DC bias at the output of your signal processing chain. If the baseband processing is tolerant of a DC bias, this can be ignored.
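Here's a little Python toy model of that effect (my own illustration, not from a real design). It looks only at the requantization error: truncation leaves a DC component of roughly half an LSB in that error, and the low-pass/decimation stage shrinks the error's AC part but not its DC part, which is what pops out as an offset at the end of the chain:

Code:
import numpy as np

rng = np.random.default_rng(0)
n, k, dec = 2**20, 4, 4096                  # samples, LSBs dropped, decimation factor
x = rng.integers(-2**11, 2**11, n)          # stand-in for a zero-mean, noise-like 12-bit signal

for name, y in (("truncate", x >> k),                       # plain floor: ~ -0.5 LSB average error
                ("round",    (x + (1 << (k - 1))) >> k)):   # round half up: average error near zero
    err = y - x / 2**k                                      # requantization error, in new LSBs
    lp = err.reshape(-1, dec).mean(axis=1)                  # crude low-pass + decimate
    print(f"{name:8s}: DC bias {lp.mean():+.3f} LSB, "
          f"error AC noise {err.std():.3f} -> {lp.std():.4f} LSB after filtering")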

Xilinx DSP blocks as of the 6 series onwards provide hardware to implement rounding functions for almost free. There are several modes available, the symmetric rounding modes generate the least bias with the smallest resource overhead. The convergent modes don't have any benefit for SDR applications.
 

Offline mtwieg (Topic starter)

  • Regular Contributor
  • *
  • Posts: 190
  • Country: us
Re: Pruning bits in digital downconverter (DDC)
« Reply #5 on: April 02, 2024, 11:34:19 am »
Thanks all for the responses so far, I'll try to address all the major points:
There is no way to cut off bits without losing signal quality.
Any bit reduction leads to higher noise and more DSP artifacts.

...

The bit resolution you need depends on your requirements for DSP quality and noise figure.
If you want the best signal quality at the output, use as many bits as your hardware allows.  :)
Was hoping for a more quantitative treatment of the subject. Obviously pruning any bits will have a nonzero effect on output dynamic range/SNR/etc. But I presume the impact is fairly negligible up to some point.

I should mention that the final output is going to be truncated to 16-24 bits (32-48 for I and Q combined on each channel). This is not really under my control. I believe that this will be sufficient to not degrade maximum SNR, but I'm expected to back this up with something. And I'm also looking to reduce logic resource consumption, so if I'm going to throw away bits on the output, I was wondering if I could truncate earlier to reduce the CIC filters without impacting max SNR further (again, with some sort of analysis to back up the decision).

For example on the analog side you obviously want every stage in a receiver to have as low of a noise figure as possible, but later stages do not matter as much. The Friis noise formula can show exactly what the impact on noise figure is. I'm looking for similar analytical approaches for DDCs.

At your ADC input, was the source band filtered and put through an AGC stage?

This may allow you to shave bits, as you know your output will be close to the maximum gain.
Certainly maximizing SNR and having an AAF at the ADC input helps. But I'm approaching this assuming that the ADC output is at the maximum possible SNR that the ADC can support. Thinking of the DDC as having an effective noise figure, how should I design it such that it only reduces SNR by 1 dB (or 0.1 dB, or 3 dB, or whatever my spec happens to be)?

Now, how many DSP blocks are you really trying to save?
Most FPGAs will implement the same number of DSP blocks when going from 26 bits through 36 bits...
To really save on DSP blocks, you would need to drop to 18bit multiply-accumulate.
Or, if you only need a 50MHz sampling rate, you can run an 18x18 DSP block multiple times to attain 36 bits, but you still need the 36 bits of registers and then some more for the accumulate.  You will also need to code your own functions.

Check the available settings for your DSP function blocks; I know Altera Quartus offers a speed vs. area parameter between 0 and 10 which can shave off DSP blocks at the expense of the top FMAX.  They also offer a pipeline option which can also affect the maximum logic cell vs. number-of-DSP-blocks tradeoff at various bit widths.
Yes, you're correct especially regarding multipliers. But since the only multipliers are the mixers, and the CIC filters end up being implemented using general LEs, the design is constrained by available logic resources, not DSP blocks. So reducing the CIC filter bits would help a lot.

Yes, your multiplier output only has about 12 significant bits.  But in most cases digital bits are cheap and actual signal is precious so you don't really want to push the limit here.  You don't  need the full 24 bits necessarily but maybe keep 16?  I would also use more than 12 bits on your mixer input if possible.
Right, I would never cut things that close to the bone; allowing two or three bits of growth for the mixer and CIC sounds reasonable. I'm just looking for a less touchy-feely approach to this.
Quote
In an FPGA both the multipliers and the ram blocks have fixed sizes, so there is no reason to economize unless it gets you past a magic number.
Currently there is no "magic number" aside from the multipliers, which we're not close to hitting.

When you truncate a word, you will introduce a bias of 0.5 LSB. This is not noticeable on its own, but if you cascade a low-pass filter after the truncation, the 0.5 LSB can become significant. It manifests as a DC bias at the output of your signal processing chain. If the baseband processing is tolerant of a DC bias, this can be ignored.

Xilinx DSP blocks as of the 6 series onwards provide hardware to implement rounding functions for almost free. There are several modes available, the symmetric rounding modes generate the least bias with the smallest resource overhead. The convergent modes don't have any benefit for SDR applications.
Thanks for mentioning this. In my application DC offset would definitely be a problem (if it's above the noise floor, that is). But I assume the impact would depend a lot on where the truncation happens (before vs after vs between CIC/lowpass filters, for example). How does one estimate the actual impact?
« Last Edit: April 02, 2024, 12:19:49 pm by mtwieg »
 

Offline radar_macgyver

  • Frequent Contributor
  • **
  • Posts: 711
  • Country: us
Re: Pruning bits in digital downconverter (DDC)
« Reply #6 on: April 02, 2024, 01:30:40 pm »
Each filter stage is a low-pass with some fractional bandwidth, let's say B. The noise power at the output of the filter is reduced by 10log10(fs/B) dB. If the input to this filter stage was truncated, this introduces a 0.5 LSB bias. After filtering, this will show up at the output above the quantization noise floor due to the 10log10(fs/B) dB decrease in noise power.

I estimated the number of bits to preserve by starting with the ADC ENOB and adding the processing gain for each filtering stage (in the case of filters with programmable decimation, use the largest decimation rate allowed), then keeping one or two bits beyond that. As an example (from my notes on a previous implementation), I started with an ADS5485 at 12.2 ENOB (73 dBFS) followed by a CIC filter decimating by N=10. The filtering lowers the noise floor by 10log10(10) = 10 dB, so the effective SNR grows to 83 dB. This requires 83/6 ≈ 14 bits to represent, so 14 bits is the most I can round the filter output down to.
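In code form the bookkeeping is just this (my own helper, using the same ~6 dB-per-bit rule and 10log10(R) of processing gain per decimating stage):

Code:
import math

def bits_to_preserve(adc_snr_db, decimations, margin_bits=2):
    """Estimate how many bits are worth keeping at the output of a chain of
    decimating filters, given the ADC's SNR in dBFS (~6 dB per bit)."""
    snr_db = adc_snr_db
    for r in decimations:
        snr_db += 10 * math.log10(r)       # processing gain of a decimate-by-r stage
    return snr_db, math.ceil(snr_db / 6) + margin_bits

# ADS5485 example from above: ~73 dBFS noise floor, one stage decimating by 10
snr, nbits = bits_to_preserve(73, [10])
print(f"output SNR ~ {snr:.0f} dB -> keep about {nbits} bits")   # -> ~83 dB, about 16 bits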

I use Simulink to model the behavior of a DDC. It supports fixed-point math with arbitrary bit depths, so I can easily change the bit depth at various points by applying rounding and observe the output SNR. I have an identical processing chain implemented in floating point, and compare the output spectra between the two to determine where the fixed point implementation starts to deviate as I change the bit depths at various points. Finally, I do a power sweep of the simulation models and plot the deviation from zero gain vs. input power. As the signals approach the quantization noise floor, one expects the quantization noise from each rounding step to contribute to the output noise; this is why one would not use just 14 bits in the example above, but rather 16 or 17.
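A rough Python equivalent of that workflow, for anyone without Simulink (a toy sketch assuming scipy is available; the FIR here is just a stand-in for the real decimation filter, and all the numbers are made up):

Code:
import numpy as np
from scipy.signal import firwin, lfilter

fs, f0, n = 100e6, 1.0e6, 2**18
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t) + 2e-4 * np.random.randn(n)   # test tone + a little noise, full scale = 1.0

def q(v, bits):
    """Round to a signed fixed-point grid with the given total bits, full scale +/-1."""
    s = 2 ** (bits - 1)
    return np.round(v * s) / s

h = firwin(255, 0.05)                  # narrow low-pass standing in for the decimation filter
ref = lfilter(h, 1, x)                 # floating-point reference chain
for bits in (12, 14, 16, 20):
    y = lfilter(h, 1, q(x, bits))      # same chain, but with the signal requantized at this node
    err = y - ref
    snr = 10 * np.log10(np.mean(ref**2) / np.mean(err**2))
    print(f"{bits} bits at this node -> {snr:.0f} dB relative to the float reference")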

Additionally, by this logic, one would not need to preserve any additional bits after the mixer/NCO stage. In the example above, the ADS5485 is a 16-bit ADC and I use a 16-bit NCO. I round the output of the mixer to 16 bits, since the rounding noise this adds is well below the 73 dBFS noise floor of the ADC, which remains unchanged through this stage since there's no band limiting.

It's been a while since I did this and I do remember it being very slow. I did a quick search for fixed-point Python libraries and came across this post on the DSP StackExchange:
https://dsp.stackexchange.com/questions/67945/friendliest-python-library-for-fixed-point-algorithm-simulation

edit: I round the mixer output, not NCO. Fixed in text above.
« Last Edit: April 03, 2024, 12:19:30 pm by radar_macgyver »
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 3609
  • Country: ua
Re: Pruning bits in digital downconverter (DDC)
« Reply #7 on: April 02, 2024, 03:15:46 pm »
For example on the analog side you obviously want every stage in a receiver to have as low of a noise figure as possible, but later stages do not matter as much.

Exactly the same thing applies to DSP. If you destroy signal SNR at an early stage, there is no way to recover it at later stages. There is no magic.

I was wondering if I could truncate earlier to reduce the CIC filters without impacting max SNR further (again, with some sort of analysis to back up the decision).

Bit truncation is SNR reduction; it's the same as adding more noise and spurs to the signal.
It means you're destroying the signal to some degree.
And you will not be able to recover the SNR with further filtering.

That's the reason to keep as many bits as possible and truncate at the latest possible stage.
It helps preserve signal SNR. And how much SNR is required depends on your needs.

There is no way to truncate bits without adding quantization noise to the signal.
Just try it in Matlab or Octave on a signal like a sine and you will see that bit truncation is not a noise-free operation.

But if your signal already has high noise, there may be no sense in keeping a high SNR through the processing, and you can save some FPGA resources. It depends on your needs.

Thanks for mentioning this. In my application DC offset would definitely be a problem (if it's above the noise floor, that is). But I assume the impact would depend a lot on where the truncation happens (before vs after vs between CIC/lowpass filters, for example). How does one estimate the actual impact?

You need to implement rounding at any place where you truncate bits.
Simply cutting off bits not only adds a bias offset and increases quantization noise, it also adds extra noise due to rounding errors.
« Last Edit: April 02, 2024, 03:59:50 pm by radiolistener »
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 3609
  • Country: ua
Re: Pruning bits in digital downconverter (DDC)
« Reply #8 on: April 02, 2024, 04:19:07 pm »
Additionally, by this logic, one would not need to preserve any additional bits after the mixer/NCO stage. In the example above, the ADS5485 is a 16-bit ADC and I use a 16-bit NCO. I round the output of the NCO to 16 bits, since the resulting SNR is well below the 73 dBFS noise floor of the ADC, which remains unchanged through this stage since there's no band limiting.

I'm using a 14-bit ADC, and when I tried to truncate the mixer output to 16 bits (with rounding) I could see many unwanted spurs from the NCO. They become visible when you apply a low-pass filter: it reduces the noise floor, and all that noise becomes visible. So I switched to a 32-bit NCO and a 24-bit mixer output (with rounding), and those spurs and noise drop below -160 dBFS, which is acceptable for me.
« Last Edit: April 02, 2024, 04:22:27 pm by radiolistener »
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 3609
  • Country: ua
Re: Pruning bits in digital downconverter (DDC)
« Reply #9 on: April 02, 2024, 04:26:43 pm »
Yes, you're correct especially regarding multipliers. But since the only multipliers are the mixers, and the CIC filters end up being implemented using general LEs, the design is constrained by available logic resources, not DSP blocks. So reducing the CIC filter bits would help a lot.

A CIC filter really eats a lot of bits, because it needs much more resolution for its accumulators. In my case it uses up to 80-90 bits of fixed point.

But you still need multipliers for the FIR filter after the CIC, because you need to apply CIC compensation. Without a compensation FIR filter the output response will not be flat.
« Last Edit: April 02, 2024, 04:31:12 pm by radiolistener »
 

Offline radar_macgyver

  • Frequent Contributor
  • **
  • Posts: 711
  • Country: us
Re: Pruning bits in digital downconverter (DDC)
« Reply #10 on: April 03, 2024, 12:44:57 pm »
I'm using a 14-bit ADC, and when I tried to truncate the mixer output to 16 bits (with rounding) I could see many unwanted spurs from the NCO. They become visible when you apply a low-pass filter: it reduces the noise floor, and all that noise becomes visible. So I switched to a 32-bit NCO and a 24-bit mixer output (with rounding), and those spurs and noise drop below -160 dBFS, which is acceptable for me.
In my tests, NCO spurs were related to the phase accumulator width, not the output width. I use a 32-bit tuning word and phase accumulator due to the tuning resolution required for my application. I applied a Taylor-series correction to the DDS to improve SFDR. Perhaps I got lucky because my application is relatively narrow-band and the spurs fell out of band, or were below the thermal noise floor of the ADC. I checked this by feeding frequency sweeps into the ADC and verifying the bandpass response of the receiver.
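If anyone wants to poke at that mechanism, here's a crude Python sketch (a toy NCO, not the actual design above): run a 32-bit phase accumulator, truncate the phase fed into the sine lookup, and hunt for the worst spur. The tuning word and the ±32-bin carrier mask are arbitrary choices of mine, and the FFT length limits the accuracy, so treat the numbers as rough:

Code:
import numpy as np

n, acc_bits = 2**16, 32
ftw = 123_456_789                       # arbitrary tuning word, deliberately not a "nice" ratio
acc = (np.arange(n, dtype=np.uint64) * ftw) % (1 << acc_bits)   # 32-bit phase accumulator

def worst_spur_dbc(phase_bits):
    """Worst spur (dBc) when only phase_bits MSBs of the accumulator address the sine lookup."""
    ph = (acc >> (acc_bits - phase_bits)).astype(np.float64)
    s = np.sin(2 * np.pi * ph / 2**phase_bits) * np.hanning(n)
    spec = 20 * np.log10(np.abs(np.fft.rfft(s)) + 1e-20)
    k0 = int(np.argmax(spec))
    spec -= spec[k0]
    spec[max(0, k0 - 32):k0 + 32] = -300          # mask the carrier and its window skirt
    return spec.max()

for b in (10, 12, 16):
    print(b, "phase bits -> worst spur ~", round(worst_spur_dbc(b)), "dBc")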
 

Offline mtwieg (Topic starter)

  • Regular Contributor
  • *
  • Posts: 190
  • Country: us
Re: Pruning bits in digital downconverter (DDC)
« Reply #11 on: April 04, 2024, 12:41:50 pm »
Each filter stage is a low-pass with some fractional bandwidth, let's say B. The noise power at the output of the filter is reduced by 10log10(fs/B) dB. If the input to this filter stage was truncated, this introduces a 0.5 LSB bias.
I did some tinkering in Python scripts to look at this, and like you suggest, truncating a bit introduces a 0.5 LSB offset in the expectation of the output. If the truncation just eliminates bits, then pruning additional bits makes the result worse (1.5 LSB for two bits, 3.5 LSB for three bits, etc). But if the truncation is done via rounding, then the effect seems to be limited to just 0.5 LSB regardless of the number of bits truncated. Does that make sense?

I estimated the number of bits to preserve by starting with the ADC ENOB and adding the processing gain for each filtering stage (in the case of filters with programmable decimation, use the largest decimation rate allowed), then keeping one or two bits beyond that. As an example (from my notes on a previous implementation), I started with an ADS5485 at 12.2 ENOB (73 dBFS) followed by a CIC filter decimating by N=10. The filtering lowers the noise floor by 10log10(10) = 10 dB, so the effective SNR grows to 83 dB. This requires 83/6 ≈ 14 bits to represent, so 14 bits is the most I can round the filter output down to.

I use Simulink to model the behavior of a DDC. It supports fixed-point math with arbitrary bit depths, so I can easily change the bit depth at various points by applying rounding and observe the output SNR. I have an identical processing chain implemented in floating point, and compare the output spectra between the two to determine where the fixed point implementation starts to deviate as I change the bit depths at various points. Finally, I do a power sweep of the simulation models and plot the deviation from zero gain vs. input power. As the signals approach the quantization noise floor, one expects the quantization noise from each rounding step to contribute to the output noise; this is why one would not use just 14 bits in the example above, but rather 16 or 17.

Additionally, by this logic, one would not need to preserve any additional bits after the mixer/NCO stage. In the example above, the ADS5485 is a 16-bit ADC and I use a 16-bit NCO. I round the output of the mixer to 16 bits, since the rounding noise this adds is well below the 73 dBFS noise floor of the ADC, which remains unchanged through this stage since there's no band limiting.
This all sounds very reasonable to me (wish I still had access to Matlab/Simulink for this).

Exactly the same thing applies to DSP. If you destroy signal SNR at an early stage, there is no way to recover it at later stages. There is no magic.

...

Bit truncation is SNR reduction; it's the same as adding more noise and spurs to the signal.
It means you're destroying the signal to some degree.
And you will not be able to recover the SNR with further filtering.

...

That's the reason to keep as many bits as possible and truncate at the latest possible stage.
It helps preserve signal SNR. And how much SNR is required depends on your needs.

...

There is no way to truncate bits without adding quantization noise to the signal.
Just try it in Matlab or Octave on a signal like a sine and you will see that bit truncation is not a noise-free operation.
Agreed, it's a given that rounding/truncation will degrade SNR/DRR by some non-zero amount. The question is how to estimate the degradation.

You need to implement rounding at any place where you truncate bits.
Simply cutting off bits not only adds a bias offset and increases quantization noise, it also adds extra noise due to rounding errors.
I've observed that when truncating two or more bits at once, rounding produces less DC bias than simple truncation. Does rounding also have benefits for SNR?

In my tests, NCO spurs were related to the phase accumulator width, not the output width. I use a 32-bit tuning word and phase accumulator due to the tuning resolution required for my application. I applied a Taylor-series correction to the DDS to improve SFDR. Perhaps I got lucky because my application is relatively narrow-band and the spurs fell out of band, or were below the thermal noise floor of the ADC. I verified this using frequency sweeps fed into the ADC, to verify the bandpass response of the receiver.
Regarding DDS/NCO bits, I was also referring to output data width, not phase accumulator bits (completely different can of worms). My impression is that the impact of DDS/NCO width is not straightforward at all, and is very application-specific. And like you mention there are different truncation/compression methods to consider. Fortunately in my application I have no reason to trim the DDS width since my hardware multipliers already accommodate more bits than the ADC has.
« Last Edit: April 04, 2024, 01:04:31 pm by mtwieg »
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 3609
  • Country: ua
Re: Pruning bits in digital downconverter (DDC)
« Reply #12 on: April 04, 2024, 01:05:38 pm »
Agreed, it's a given that rounding/truncation will degrade SNR/DRR by some non-zero amount. The question is how to estimate the degradation.

Worst-case noise can be estimated with SNR = N*6.02 + 1.76 dB.

Without rounding you need to use N-1 instead of N.

But note that this is the worst case; the actual noise depends on the signal, and bit truncation can lead to pretty significant degradation even though it still fits within the worst-case N*6.02 + 1.76.

« Last Edit: April 04, 2024, 01:07:36 pm by radiolistener »
 

Offline mtwieg (Topic starter)

  • Regular Contributor
  • *
  • Posts: 190
  • Country: us
Re: Pruning bits in digital downconverter (DDC)
« Reply #13 on: April 04, 2024, 01:20:00 pm »
Agreed, it's a given that rounding/truncation will degrade SNR/DRR by some non-zero amount. The question is how to estimate the degradation.

Worst-case noise can be estimated with SNR = N*6.02 + 1.76 dB.

Without rounding you need to use N-1 instead of N.

But note that this is the worst case; the actual noise depends on the signal, and bit truncation can lead to pretty significant degradation even though it still fits within the worst-case N*6.02 + 1.76.
That equation is for the full-scale SNR for an ideal ADC with N bits. I'm certainly not suggesting to prune bits on the raw ADC output.

Certainly increasing N by padding doesn't increase SNR... So what does this equation have to do with pruning bits?
 

Offline mtwieg (Topic starter)

  • Regular Contributor
  • *
  • Posts: 190
  • Country: us
Re: Pruning bits in digital downconverter (DDC)
« Reply #14 on: April 04, 2024, 01:27:44 pm »
For anyone interested, here's my little Python script for simulating bit truncation:
Code:
import numpy as np

# test truncation/rounding on a sequence of numbers with random values
divs = [2, 4, 8, 16]    # test with these divisors
methods = ['ceil', 'round']    # test with these truncation methods
Xlength = 128   # length of each sequence
Niter = 32  # number of iterations to perform, each on a different X
Xrms = 64   # RMS of noise X
# pre allocate saved data
Xall = np.zeros((Niter, Xlength))
Yall = np.zeros_like(Xall)
Xmean = np.zeros((Niter, 1))
Ymean = np.zeros_like(Xmean)

# generate Niter random X sequences
for idx in range(Niter):
    X = np.random.normal(loc=0.0, scale=Xrms, size=Xlength)
    X = np.round(X)
    Xall[idx, :] = X + 4 * Xrms     # offset so X is always positive

for div in divs:
    for method in methods:
        for idx in range(Niter):
            # get pseudorandom X
            X = np.squeeze(Xall[idx, :]).copy()
            # choose truncation method and calculate output Y
            if method == 'floor':
                Y = np.floor(X/div)*div
            elif method == 'ceil':
                Y = np.ceil(X/div)*div
            elif method == 'round':
                # add a tiny bit to X before rounding, to avoid wrong result due to float precision
                Y = np.round(X/div+Xrms*1e-6)*div
            # save Y to Yall
            Yall[idx, :] = Y
            # save mean of X and Y
            Xmean[idx] = np.mean(X)
            Ymean[idx] = np.mean(Y)

        # get mean error Emean (the "DC bias")
        Emean = np.mean(Ymean-Xmean)
        # AC RMS of Y (ignores DC component)
        Yacrms = (np.mean((Yall-np.mean(Ymean))**2))**0.5
        # get the AC RMS error between Y and X (ignores DC bias)
        Eacrms = (np.mean((Yall-Emean-Xall)**2))**0.5
        # print results
        print('div={0}, method={1}, Emean={2:.3f}, Yacrms={3:.3f}, Eacrms={4:.3f}'.format(div, method, Emean, Yacrms, Eacrms))

print('done')

Here's the output:
Code:
div=2, method=ceil, Emean=0.509, Yacrms=64.009, Eacrms=0.500
div=2, method=round, Emean=0.509, Yacrms=64.009, Eacrms=0.500
div=4, method=ceil, Emean=1.505, Yacrms=64.031, Eacrms=1.117
div=4, method=round, Emean=0.493, Yacrms=64.016, Eacrms=1.116
div=8, method=ceil, Emean=3.455, Yacrms=64.132, Eacrms=2.271
div=8, method=round, Emean=0.562, Yacrms=63.970, Eacrms=2.283
div=16, method=ceil, Emean=7.477, Yacrms=64.255, Eacrms=4.559
div=16, method=round, Emean=0.383, Yacrms=64.258, Eacrms=4.659
done
So the benefit of rounding vs ceil/floor truncation is clear when looking at Emean (the DC bias). The truncation method doesn't seem to impact the output noise level at all (Yacrms).

Note that these results are with the input noise level well above the quantization threshold (64 LSB in this case), as would be expected at the output of an un-optimized decimation filter.
 

Offline jahonen

  • Super Contributor
  • ***
  • Posts: 1054
  • Country: fi
Re: Pruning bits in digital downconverter (DDC)
« Reply #15 on: April 04, 2024, 05:25:35 pm »
For removing the DC offset when rounding, you can also use convergent rounding, discussed here: https://zipcpu.com/dsp/2017/07/22/rounding.html

I translated that (IMO rather cryptic) Verilog expression into a VHDL function a while ago. It basically adds 0.5 to half of the numbers to be rounded and 0.4999... to the other half, causing half of them to round up and half to round down. That removes the DC offset.
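In Python the same idea looks roughly like this (a minimal sketch of round-half-to-even, built the same add-half-minus-one-plus-kept-LSB way as in the article; the function name is mine):

Code:
def convergent_round(x, k):
    """Drop the k LSBs of integer x with round-half-to-even (convergent rounding).
    Exact ties round toward the even result, so the tie bias averages out to zero."""
    half = 1 << (k - 1)
    keep_lsb = (x >> k) & 1          # LSB of the part we keep decides which way ties go
    return (x + half - 1 + keep_lsb) >> k

# ties: 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, 3.5 -> 4 (always toward even)
print([convergent_round(v, 1) for v in (1, 3, 5, 7)])   # -> [0, 2, 2, 4]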

Regards,
Janne
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 3609
  • Country: ua
Re: Pruning bits in digital downconverter (DDC)
« Reply #16 on: April 04, 2024, 10:12:58 pm »
Certainly increasing N by padding doesn't increase SNR... So what does this equation have to do with pruning bits?

It shows how noise increases with bit truncation. Once you truncate bits, there is no way to restore the original signal; adding empty bits doesn't help, because the noise remains the same.

For removing the DC offset when rounding, you can also use convergent rounding, discussed here: https://zipcpu.com/dsp/2017/07/22/rounding.html

Yes, I'm using the rounding from that article, it works well.
« Last Edit: April 04, 2024, 10:16:30 pm by radiolistener »
 

Offline mtwieg (Topic starter)

  • Regular Contributor
  • *
  • Posts: 190
  • Country: us
Re: Pruning bits in digital downconverter (DDC)
« Reply #17 on: April 05, 2024, 11:34:31 am »
For removing the DC offset when rounding, you can also use convergent rounding, discussed here: https://zipcpu.com/dsp/2017/07/22/rounding.html
This is interesting, thanks!

It shows how noise increases with bit truncation.
No, it shows how the maximum theoretical SNR of a signal represented by N bits depends on N. It does not describe the effect of truncation on the SNR of actual signals (unless the signal happens to have the maximum theoretical SNR).

The 37-bit baseband outputs in my example diagram obviously won't have 224 dB of SNR. So truncating a bit from those outputs will not decrease SNR by 6.02 dB, or anywhere close to that. The effect is probably not zero either. But I'm not aware of an analytical method of estimating the effect.
« Last Edit: April 05, 2024, 11:48:05 am by mtwieg »
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 3609
  • Country: ua
Re: Pruning bits in digital downconverter (DDC)
« Reply #18 on: April 05, 2024, 01:54:39 pm »
No, it shows how the maximum theoretical SNR of a signal represented by N bits depends on N. It does not describe the effect of truncation on the SNR of actual signals (unless the signal happens to have the maximum theoretical SNR).

The maximum theoretical SNR depends on the worst-case quantization noise. The formula is derived from the worst-case noise power equation. It gives the worst-case noise floor for any kind of signal within the full bandwidth of the ADC.

The 37-bit baseband outputs in my example diagram obviously won't have 224 dB of SNR. So truncating a bit from those outputs will not decrease SNR by 6.02 dB, or anywhere close to that. The effect is probably not zero either. But I'm not aware of an analytical method of estimating the effect.

SNR gives the worst possible dynamic range over all possible signals. But for many kinds of signals the noise floor will be much lower than -SNR. For example, a DC signal has an infinitely low noise floor. Likewise, some specific sine signals can have an infinitely low noise floor; that happens when the sampled points fall exactly on the quantization levels.

Also note that SNR is specified over the entire bandwidth, and you can push the noise floor well below the theoretical SNR of the ADC just by applying a low-pass filter, because it cuts off part of the noise power spread across the full bandwidth. This way you can see signals much weaker than the ADC SNR would suggest, but over a reduced bandwidth.

This is called processing gain and can be estimated as:

Processing gain =  10 * log10( original_bandwidth / target_bandwidth ) [dB]

For example, if your 12-bit ADC has a Nyquist bandwidth of 100 MHz, its theoretical SNR can be estimated as

SNR = 12 * 6.02 + 1.76 = 74 [dB]

But that SNR is over the entire 100 MHz bandwidth. If you apply a low-pass filter with a 1 Hz cut-off, you get a processing gain of:

Processing gain = 10 * log10(100000000 / 1) = 80 [dB]

It means that you will have SNR = 74 + 80 = 154 [dB] in a 1 Hz bandwidth.

It means you can see -224 dB signals (and weaker) with a 12-bit ADC by reducing the bandwidth enough, but if you truncate bits you will flood the original signal with quantization noise and the information about those weak signals may be lost entirely, with no way to recover it: padding samples with empty bits doesn't restore the information, and the noise added by the truncation remains even if you add empty bits back to the truncated sample.

Using a low-resolution mixer has the same effect - it adds unwanted quantization noise to the signal, because the NCO sine itself contains noise. This NCO noise is not related to the quantization noise of the original signal from the ADC; it is extra noise that you add to the signal. The bad thing is that NCO noise is not white: it has many spurs which depend on the NCO frequency and get added to your signal from the ADC.

And if you don't round, you add even more noise at every stage where you truncate bits.
« Last Edit: April 05, 2024, 02:37:42 pm by radiolistener »
 

