Author Topic: Shannon and Nyquist not necessarily necessary ?  (Read 6483 times)


Offline Marco

  • Super Contributor
  • ***
  • Posts: 6783
  • Country: nl
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #25 on: November 25, 2017, 02:39:23 pm »
Quote
No sleight of hand.  It is not guaranteed to always work.

What I mean by sleight of hand is that they never explain where those samples come from. In none of the papers does the data seem to come directly from the MRI machine; it's all simulation. It's trivial to sample k-space multiple times on slightly offset grids, which will help shape the aliasing noise, but that's not really compressive sensing now, is it?

Quote
There are now a slew of very intense graduate-level monographs on the subject.  I strongly recommend "A Mathematical Introduction to Compressive Sensing" by Foucart and Rauhut for anyone who has the stomach for heavy math.  So far as I know, it was the first, appearing in 2013.

It's not the math that concerns me; I see no description of the raw data being irregularly sampled by actual machines ... just the assumption that you could somehow quickly get irregularly sampled data, with no description of how that would actually result from an MRI scan.

Quote
As for the TI product, RTFM. They don't explain it.

They don't really have to explain it; it's presented as an alternative to simply scanning the mirrors in the array. So it's clear that instead of scanning the mirrors one by one to get the spectrum, they display the Hadamard basis functions on the mirror array one by one. That allows you to subsample in the Hadamard domain and get some quick "low resolution" results (truncating the Hadamard transform doesn't quite correspond to what you'd expect from lower resolution, but it works approximately).
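Here's a quick toy model of what I mean (my own made-up numbers, nothing from TI's documentation; the 64-bin Gaussian "spectrum" is arbitrary, and the ±1 patterns are an idealisation, since a real DMD only gives 0/1 mirrors):

Code: [Select]
# Toy Hadamard-multiplexed scan: display whole +/-1 patterns instead of single
# mirrors, record one detector value per pattern, and reconstruct from a
# truncated set of low-sequency patterns.
import numpy as np
from scipy.linalg import hadamard

n = 64                                                   # spectral bins / mirror columns (made up)
spectrum = np.exp(-((np.arange(n) - 20.0) ** 2) / 30.0)  # made-up smooth spectrum

H = hadamard(n)                                          # +/-1 patterns, one per row
sequency = np.count_nonzero(np.diff(H, axis=1), axis=1)  # sign changes per pattern
W = H[np.argsort(sequency)]                              # order patterns by sequency ("frequency")

measurements = W @ spectrum                              # one ideal detector reading per pattern

m = 16                                                   # keep only the first m patterns
approx = W[:m].T @ measurements[:m] / n                  # truncated Hadamard synthesis

print(f"worst-case error keeping {m} of {n} patterns: {np.max(np.abs(approx - spectrum)):.3f}")

The truncated reconstruction comes out blocky rather than as a clean band-limited blur, which is what I mean by it not quite corresponding to lower resolution.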

PS: I guess for MRI they could concentrate scans in the low-frequency part of the frequency domain, but only along one dimension, and it would still be just a summation of regular grids ... nowhere near the complex sampling patterns they tend to describe in compressive sensing theory, but it might be useful.
« Last Edit: November 25, 2017, 03:09:16 pm by Marco »
 

Offline chris_leyson

  • Super Contributor
  • ***
  • Posts: 1541
  • Country: wales
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #26 on: November 25, 2017, 06:31:52 pm »
Interesting stuff; it's a shame they jump into some pretty heavy maths straight away. I can see how non-uniform undersampling avoids aliasing of uniform spatial components, and hence Nyquist would no longer apply. Pseudo-random undersampling is just like spread spectrum, but that's only part of it.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3484
  • Country: us
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #27 on: November 25, 2017, 11:55:46 pm »

In no particular order:

These papers are math papers.  There is later work related to processing randomly sampled data.  Until someone has shown that random sampling works, no one will build a machine that does it.  The Lena example I was shown started from regularly sampled data.  Half was discarded and the recovered image looked awful.  Then 80% of what was left was randomly discarded, and the recovered image was better than the one recovered using half the data.  With the right algorithm, 10% of the data looks better than 50% of the data.

As for how you get random samples, just sample at random times.  The key concept is that the sampling is not correlated with anything in the sampled signal.  Aliasing results from correlations between the sampling operator and the signal.  Physically changing the speed of an MRI sensor is not practical.  You keep the speed of the sensor constant and vary the timing of the ADC sampling as the sensor revolves around the patient.
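A toy example of the uncorrelated-sampling point (mine, not from any of the papers; the 9 kHz tone, 4 kS/s average rate and sample count are arbitrary choices):

Code: [Select]
# Compare regular sub-Nyquist sampling with random-time sampling of a 9 kHz tone.
# Regular sampling at 4 kS/s produces an alias at 1 kHz that looks exactly like a
# genuine 1 kHz tone; random sampling at the same average rate spreads the alias
# energy into a low, noise-like floor and leaves the 9 kHz peak unambiguous.
import numpy as np

f_sig = 9000.0                                   # true tone frequency, Hz
m = 200                                          # number of samples
T = m / 4000.0                                   # window giving a 4 kS/s average rate

t_regular = np.arange(m) / 4000.0                # uniform sampling, well below Nyquist
t_random = np.sort(np.random.default_rng(1).uniform(0.0, T, m))  # random instants

def amplitude(t, y, f):
    # Magnitude of the Fourier sum at frequency f for samples y taken at times t
    return abs(np.exp(-2j * np.pi * f * t) @ y) / len(t)

for name, t in (("regular", t_regular), ("random ", t_random)):
    y = np.cos(2 * np.pi * f_sig * t)
    print(f"{name}: |X(1 kHz)| = {amplitude(t, y, 1000.0):.2f}, "
          f"|X(9 kHz)| = {amplitude(t, y, 9000.0):.2f}")

The regular scheme can't tell the 9 kHz tone from a genuine 1 kHz tone; the random scheme can.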

I waded through Foucart and Rauhut twice before I read Donoho and Candes.  So for me the papers were a relief.

It is *very* closely related to spread spectrum.  But the *really* big deal is the proof that L1 == L0 for most underdetermined problems with sparse solutions.  That is an example of a fast solution to an NP-Hard problem.  Very big deal.

The single pixel camera is the ultimate "low resolution" scan.

FWIW, it took about 3 years and 3000 pages of text for me to get my head around the ideas.  I rank it as the most important work in applied math since Wiener, Shannon et al.

I think it's cool stuff and I'm delighted to find people who understand the concepts.  I got into a tiff with Ciarcia of "Circuit Cellar" when he published an ADC project with no anti-alias filtering and told me that it was "engineering practice".  I did not renew.
 

Online nctnico

  • Super Contributor
  • ***
  • Posts: 27280
  • Country: nl
    • NCT Developments
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #28 on: November 26, 2017, 12:21:32 am »
Let's see if I can get a handle on it. From the short presentation in the first post, I understand that it works if you know roughly what the signal looks like but don't know the magnitude. So if you have a few samples, you can work out the magnitude of the signal you sampled. Stretch the idea a bit further and you should also be able to solve for variations in the time domain as well.
« Last Edit: November 26, 2017, 12:26:37 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline rhb

  • Super Contributor
  • ***
  • Posts: 3484
  • Country: us
Re: Shannon and Nyquist not necessarily necessary ?
« Reply #29 on: November 26, 2017, 03:09:45 pm »
It works *if and only if* the signal is sparse in some transform domain. Hopefully what follows is a better description of evaluating the products of a mixer.

The products are known to be sparse.  Solve Ax=y, where y is sampled in the time domain at random times.  A is a Fourier basis evaluated at those times: each row of A contains all the frequencies of the Fourier series evaluated at t(j).  The elements of the vector y are the amplitude samples taken at the random times t(j).  The elements of x are the unknown coefficients of the discrete Fourier transform of the signal.

The requirement for a successful solution is that the columns of A have low coherence: the cross-correlation between columns must be small.  An L1 solution *is* the optimal (i.e. L0) solution; the first paper by Donoho that I cited proves this.  The L0 case is NP-hard, but the L1 solution can be obtained by standard linear programming methods.  Most elements of x will be zero; the non-zero terms will be the amplitudes of the respective frequency components.  Typically one would use the frequencies of the FFT for the frequencies of x, so that one recovers the FFT of the input signal.  However, one could just as easily use a wavelet transform basis.
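Here is that recipe as toy Python (my own sketch, assuming NumPy and SciPy are available; I use a real DCT basis instead of a complex Fourier basis so the linear program stays real-valued, and the sizes n, m and k are arbitrary):

Code: [Select]
# Basis pursuit: recover coefficients that are sparse in a DCT basis from a
# few samples taken at random instants, by solving min ||x||_1 s.t. A x = y
# as a linear program.
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 256, 60, 5                                 # signal length, random samples, sparsity

# A k-sparse coefficient vector and the corresponding time-domain signal
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
signal = idct(x_true, norm="ortho")                  # synthesis into the "time" domain

# Random, non-uniform sampling: keep m samples at random instants
rows = np.sort(rng.choice(n, m, replace=False))
A = idct(np.eye(n), axis=0, norm="ortho")[rows, :]   # m x n sensing matrix
y = signal[rows]                                     # the measured amplitudes

# Linear program via the standard split x = x_pos - x_neg, x_pos >= 0, x_neg >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("worst-case coefficient error:", float(np.max(np.abs(x_hat - x_true))))

Swapping the DCT for a wavelet synthesis matrix changes nothing in the solver; only A changes.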

Using L1, the presence of broadband noise doesn't matter so long as it is low in amplitude relative to the signal.  That is *not* the case using the traditional Gm=d L2 (least-squared-error) solution.  Shannon still applies, but the relevant criterion is the information content: how sparse the signal is in some transform domain.
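A quick way to see the contrast (again a toy setup of my own; the Gaussian sensing matrix, the 1% noise level and the Lasso penalty are arbitrary choices, and scikit-learn's Lasso stands in for a generic L1-penalized solver):

Code: [Select]
# Sparse recovery from noisy, underdetermined data: the minimum-norm L2 answer
# smears energy over many coefficients, while an L1-penalized fit keeps the
# sparse structure.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, m, k = 200, 60, 5                               # unknowns, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)           # incoherent random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
y = A @ x_true + 0.01 * rng.normal(size=m)         # low-level broadband noise

# Classic minimum-norm L2 solution of the underdetermined system
x_l2, *_ = np.linalg.lstsq(A, y, rcond=None)

# L1-penalized fit
x_l1 = Lasso(alpha=0.01, fit_intercept=False, max_iter=50000).fit(A, y).coef_

print("coefficients with |x| > 0.1   L2:", int(np.sum(np.abs(x_l2) > 0.1)),
      "  L1:", int(np.sum(np.abs(x_l1) > 0.1)), "  true:", k)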

Shannon solved the continuum case using calculus before Wiener presented the discrete case.  However, if you consider Shannon in the form of a sum of bandwidth-limited terms, one should arrive at the same result.  That's speculation on my part, but it seems unlikely that both Shannon and Donoho could be correct unless it is the case.

The actual sampling requirement is much more complex than Shannon's result.  So it's a bit of an abuse to reference Shannon, but I think it's appropriate because it is an information-theory concept.  The proper mathematical treatment of the required sampling is *very* painful to read.  The correct treatment involves the properties of the A matrix, and determining those properties is at least as hard as solving the problem.  A good rule of thumb is that you need 10-20% of the samples required if one regularly sampled at the Nyquist rate.

There's a lot more to this than just sampling.  L1 basis pursuit is the solution to matrix completion (aka the Netflix problem), inverse problems (what I was doing when I stumbled into this), blind source separation (listen in on table 12 with a small number of microphones scattered around the room), identifying the genome alleles responsible for some trait, and a slew of other things.  Not surprisingly, there were a number of people applying the technique before Donoho and Candes put it on a firm mathematical footing, much as Heaviside was using operational mathematics before the mathematicians explained why it worked.
 

