Author Topic: Bruker NMS120 benchtop NMR teardown  (Read 2388 times)


Offline D Straney (Topic starter)

  • Regular Contributor
  • Posts: 230
  • Country: us
Bruker NMS120 benchtop NMR teardown
« on: August 08, 2021, 04:27:01 pm »
Figured this would be the best place to post this teardown, as it mostly involves RF; anyways...there was an old benchtop NMR analyzer getting trashed a while back at work, and I decided to save some of the electronics in case there was anything interesting (spoiler: there was).  The section with the permanent magnets and coils had already been salvaged, so I won't be looking at those.

Nuclear Magnetic Resonance (NMR) Basics

For those not familiar, NMR is about analyzing things using the principle of atomic spins, which create magnetic dipoles.  It's easy to imagine with electron orbits: an orbiting electron is essentially a circulating current which creates a magnetic field just like a loop of wire does.  Take your right hand and visualize it - easy, right?  ...except that nuclei, which don't have an orbit of any kind in the classical model of the atom, also have spin, and so everything goes off into unintuitive quantum physics territory very quickly once you start looking at the details.

Anyways, once you accept that spin is a thing even if you'll never quite understand why, you put a sample of some material you want to analyze in a strong and very uniform static magnetic field.  The static field aligns some of those atomic magnetic vectors in the same direction, and sets their natural resonant frequency, like tightening a guitar string to set the pitch.  If you use a coil to create an oscillating magnetic field at a right angle to the static field near this same frequency, it'll poke those nuclear magnetic vectors and tip them off axis.  Afterwards, as they get pulled back towards their original orientation under the spring-like pull of the static magnetic field, they continue spinning in a damped circular motion, creating a very small oscillating magnetic field that induces a voltage in the coil.  So generate an RF pulse and drive a tuned coil with it: when the pulse is done, receive and analyze a tiny RF pulse from that same coil, generated by the sample as it "rings down".  Useful systems get far more complicated than this, and different nuclei have different resonant-frequency-to-magnetic-field ratios, and I'm not going to touch on gradients because MR physics isn't really the point here, but that's the basic idea of what's going on in one of these instruments.
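(For reference, since I'll be throwing frequency numbers around later: the resonant "Larmor" frequency is simply proportional to the static field, f_0 = (γ/2π)·B_0, where γ/2π ≈ 42.58 MHz/T for hydrogen (1H) and ≈ 10.71 MHz/T for carbon-13 - so, for example, a ~0.47 T permanent magnet puts protons right around 20 MHz.)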

Digital Boards

Let's get the less interesting stuff out of the way first: there's a CPU card and LCD-driver card, for running the onboard firmware and displaying results.  The firmware likely does the timing for the pulse sequence (essentially an RF-pulse-generation program) and analyzes the results from the received signals.




We can see the 68020 CPU / 68881 floating-point coprocessor with a large bank of RAM (maybe using the AM4701 parity calculator?), and various peripherals: the MC68230 is a "parallel interface and timer", the Z0853606 is a "counter/timer", the AM85C30 is a serial interface, the D72068 is maybe a floppy-drive controller, and the big blocky one is an obvious real-time clock, but I can't find anything about the Emulex 2400010 except that it was apparently designed into some old military electronics and is now difficult to source.




RF Modules

The drive for the RF output pulses is done by this power amplifier (RFPA):

You can see the 3 stages here.  The signal first gets boosted by an integrated-hybrid power amp, which then drives the two beefy discrete-transistor output stages.

RFPAs aren't something I specialize in and I don't know enough about these particular devices to really comment much on the circuit details, but the matching etc. has a narrow-band "feel" to it, like with the series-LC feeding the base(?)/gate(?) of the final stage.  This is meaningful as hydrogen (resonant frequency ~43 MHz per T of static field) is by far the most common spin to poke and watch, since water is in everything, but carbon-13 (~11 MHz/T) and others are popular as well.  So if this is actually a narrow-band PA, it would show that this analyzer can only look at one type of nucleus directly, but gets its info through other effects like the various pulse decay times (which tell you about the greater molecular structure that the hydrogen is in) and chemical shift (which affects resonant frequency on a single-digit-ppm scale).

Again, I don't know enough to speak confidently about details, but I also wouldn't be surprised if the final stage was biased as class-C to save power.  A convenient thing about NMR or MRI is that the physics are inherently frequency-selective; the RF coils are also tuned to a narrow resonance frequency, and therefore you don't have to worry as much about distortion as you would in many other applications.  This especially applies if you're dealing with hydrogen, which I believe has the highest of all atomic resonant frequencies: any harmonics which do manage to get through your tuned coil aren't going to do much to the atomic spins.  I've seen one 2 kW RFPA specifically made for MRI put some nasty very-square-looking waveforms into a resistive dummy load.  With the normal low duty cycle of short RF pulses used in NMR, it's also not unreasonable to use class-A or B PAs that only get biased on for short times while transmitting, but I don't see any "unblank" controls here, just DC power.
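(Just to put rough numbers on that duty-cycle argument - these are made-up but representative pulse-sequence values, not this instrument's actual specs:)

# Hypothetical numbers, only to show why average dissipation stays tiny at NMR duty cycles
peak_rf_power_w = 100.0    # assumed peak power during a hard pulse
pulse_width_s   = 10e-6    # assumed pulse length
repetition_s    = 1.0      # assumed repetition time between pulses
duty = pulse_width_s / repetition_s
print(f"duty cycle = {duty:.1e}, average RF power = {peak_rf_power_w * duty * 1e3:.1f} mW")
# -> duty cycle = 1.0e-05, average RF power = 1.0 mW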

For receiving the RF pulses, there's a variable-gain low-noise amplifier (LNA) as well:


Under the shield-within-a-shield is the first stage, the lowest-noise of all:

After that, there are multiple chained amplification stages, with a Mini-Circuits variable attenuator (in the metal can) at one point:



This seems to implement an effective variable gain, likely controlled by the AD7543 multiplying DAC in the control section:


A/D and D/A Conversion (the best part)

Finally, where are the RF drive signals coming from?  And where are the boosted RF receive signals going to?  The RF-to-CPU interfacing is handled by this RF conversion card:


We can see from the labels here that both the RF output (to the RFPA) and input (from the LNA) come from/go to here.

The analog-to-digital conversion of the received signals is done by the Datel ADC-307-1 near the upper left, in the purple ceramic package.  This flash ADC chip was able to do 125 Msps @ 8 bits, which honestly must've been some pretty hot shit for the late 90s.  Unless the components above it are a very minimal downconverting mixer (which wouldn't make much sense in this context), it must've directly sampled the RF - I believe the magnet strength on this system gave it a Larmor frequency in the ballpark of 15 MHz, which means that the 80 MHz clock visible in a metal can above the ADC would give an OK ~2.5x oversampling ratio (especially since the LNA stages are likely tuned, so you're not expecting much out-of-band noise to get aliased).  The ADC's clock input and 8 parallel (!) outputs were all ECL: today this wouldn't be a big deal to implement on-chip with some tiny low-voltage super-fast CMOS, but at the time the 74F and 74ALS families couldn't keep up very well with a 125 MHz clock (~4 ns delays = 1/2 of a 125 MHz clock period), and the 74AC and 74HC families were progressively even slower.  The 74AS family actually had much smaller gate delays (more like 2 ns) that would've been fine...but because a flip-flop, as required internally to latch the ADC's readings, was significantly slower than a single logic gate (the SN74AS74A D-FF shows a 105 MHz max. toggle rate), this likely would've also been marginal at best.  So, ECL and negative supply voltages it is.
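(Quick sanity check on the direct-sampling math, using my rough ~15 MHz guess from above - trivially small, but it keeps me honest:)

f_clk = 80e6     # ADC clock from the canned oscillator
f_rf  = 15e6     # my ballpark guess at the Larmor frequency (see text)
print(f"Nyquist = {f_clk/2/1e6:.0f} MHz -> a {f_rf/1e6:.0f} MHz signal samples cleanly")
print(f"{f_clk/f_rf:.1f} samples per RF cycle, ~{f_clk/(2*f_rf):.1f}x the Nyquist rate")
# -> 5.3 samples per RF cycle, ~2.7x the Nyquist rate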

Dual MC10125s below the ADC convert the ECL data outputs to LS TTL (they have a ~5 ns delay, but this works OK for 80 MHz), so that the digital output can be piped into the IDT 7204 4Kx9 FIFO memories below them as the "sample memory", storing the received ADC data during the chemical sample's "ring-down" time, for slower readout by the main processor afterwards.  This is the same "triggered fast write, slow read" scheme used in a modern DSO.  16K samples (4K each x 4 FIFOs) @ 80 Msps = ~200 µs of continuous sample data that can be stored; that's actually less than I would've expected, as MR decay times are usually measured in ms.  I'm not very familiar with non-imaging MR applications, though, so maybe for MR spectroscopy you really don't need to capture a full FID or echo.
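(The capture-depth arithmetic, spelled out:)

fifo_depth = 4096     # samples per IDT7204
n_fifos    = 4
f_sample   = 80e6     # ADC clock
total      = fifo_depth * n_fifos
print(f"{total} samples @ {f_sample/1e6:.0f} Msps = {total / f_sample * 1e6:.0f} us of capture")
# -> 16384 samples @ 80 Msps = 205 us of capture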

I had trouble for a while figuring out how pumping the high-speed data into the receive FIFOs actually worked correctly: the "L25P" suffix on these FIFO part numbers specifies 25 ns access time / 35 ns write cycle time, and according to the datasheet they could only sustain a max. write frequency of 29 MHz, less than half the ADC clock frequency.  Even the fastest "L12" variant could only get up to 50 MHz.  The 4 FIFOs could be time-interleaved to work around this, so that each one is only being written to at 20 MHz - there are two of the previously-mentioned 74AS74 dual D flip-flops just to the right of the FIFOs, along with a fast 74F00 NAND gate, and with four D flip-flops and some NAND gates you can make a 4-phase clock generator to generate the "write" pulses.  However, the FIFOs still require a 15 ns setup time (the length of time their input data must be stable before the "write" pulse comes along), which is a bit longer than even the full 12.5 ns sample clock period.  I was confused (extra-conservative datasheet specs + aggressive up-screening? why not use the "L12P" parts instead then?) until tracing the connections revealed some extra pins on the bottom side of the board that I wasn't expecting.  Turns out, each FIFO is socketed so that a fast 74AS374 8-bit register can be hidden underneath it, to latch the per-FIFO data and meet each FIFO's setup time requirements.
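(Here's the timing budget that finally made the scheme make sense to me - the 35 ns / 15 ns figures are the L25P datasheet numbers quoted above:)

f_adc            = 80e6
t_sample         = 1 / f_adc        # 12.5 ns: a new ADC word every clock
t_per_fifo       = 4 * t_sample     # 50 ns: each FIFO only gets every 4th word
fifo_write_cycle = 35e-9            # IDT7204-L25P minimum write cycle
fifo_setup       = 15e-9            # IDT7204-L25P data setup time before /W
print(f"raw ADC data changes every {t_sample*1e9:.1f} ns -> can't meet the {fifo_setup*1e9:.0f} ns setup directly")
print(f"interleaved, each FIFO is written every {t_per_fifo*1e9:.0f} ns -> "
      f"{'meets' if t_per_fifo > fifo_write_cycle else 'violates'} the {fifo_write_cycle*1e9:.0f} ns write cycle")
# The hidden 74AS374 under each socket holds that FIFO's byte stable for the whole
# 50 ns window, which is what makes the 15 ns setup requirement easy to meet.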


The circuitry just above the ADC receives the incoming LNA output through a transformer; the signal then goes through what looks like a tuned amplifier and gets biased by the low-speed analog circuitry at the top-center of the board: the ADC's input range is centered around -1V due to the negative supply voltages of its "classic ECL" implementation.  So overall, this is the RF receive circuitry on this board:

...and this is the analog support section, which generates the ADC's analog and digital supplies, the ADC reference voltage, and the ADC's DC input offset of -1V.  Notice how the analog and digital ADC supplies are made to track each other during power-up/power-down with the anti-parallel diodes between the two: they're kept within a diode drop of each other, but isolated in an AC sense (a few pF of diode capacitance into 100s of nF of filter capacitance) when they're close together and the diodes are off, so that the noisy digital supply doesn't contaminate the sensitive analog supply.
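(For a feel of how much isolation the diode trick buys, treating it as a pure capacitive divider and ignoring ESR and trace inductance - the capacitor values are just guesses in the ranges I mentioned:)

import math
c_diode  = 5e-12    # assumed: a few pF of junction capacitance across the (off) diodes
c_filter = 220e-9   # assumed: 100s of nF of bypass/bulk capacitance on the analog rail
coupling = c_diode / (c_diode + c_filter)   # divider ratio for digital-rail noise
print(f"digital-to-analog supply noise coupling ~ {20*math.log10(coupling):.0f} dB")
# -> about -93 dB, while still clamping the rails to within a diode drop at power-up/down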


One thing that doesn't entirely make sense here to me is why 3 of the 4 op-amps have their negative inputs tied to ground.  The direction of the feedback works, but it puts the full open-loop gain of the op-amp into the feedback loop.  Since the crossover frequency of each op-amp is ~5 MHz, with the external circuitry in the loop as well (especially those Darlington power transistors, which aren't known for being fast), I'd expect some extra poles to show up and ruin the party, killing the phase margin well before the loop gain drops to 0 dB.  Normally I'd expect a bit of local feedback in the form of an integrating capacitor from the op-amp's output to inverting input, to lower the gain in a controlled way; or at least a parallel cap somewhere in the feedback to create a zero and get a phase boost near crossover.  Then again, the ambiguous datasheet for the BDX43 transistors does show a beta measurement at 35 MHz, so the Darlingtons might be faster than they seem - and maybe all the emitter degeneration / external resistance choices / etc. puts the non-op-amp circuitry's gain at 1 or less, and makes the overall loop transfer function actually look reasonable.  I mean, either way apparently it works, the question is just whether there's something misleading about my schematic built through continuity-buzzer probing.
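(To put a number on that worry, here's a toy loop-gain calculation - the op-amp acts like an integrator out to its ~5 MHz crossover, and the Darlington pole frequencies below are pure guesses, just to show how quickly an extra pole near crossover eats the phase margin:)

import math
f_crossover = 5e6    # assume external gain ~1, so the loop crosses 0 dB near the op-amp GBW
for f_pole in (1e6, 5e6, 20e6):    # hypothetical output-stage pole locations
    pm = 90 - math.degrees(math.atan(f_crossover / f_pole))   # 90 deg from the integrator, minus the extra pole
    print(f"extra pole at {f_pole/1e6:>4.0f} MHz -> phase margin ~ {pm:.0f} deg")
# -> ~11, 45, and 76 degrees: anything at or below crossover makes it marginal at best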

How does the RF pulse generation happen though?  Looking at the IC part numbers, there's no obvious high-speed DAC to generate the RF pulses that get fed to the RFPA.  However, tracing the connections shows that there's some hackish magic going on in this area:

The 4-phase 20 MHz clock used to time-interleave the FIFO writes gets repurposed here to drive a pair of transistors, which do some push-pull action with a transformer to generate the RF output.  The third transistor seems to stabilize the bias point and/or give snappier transitions.  At first I was convinced this additional transistor would be used for amplitude-modulating the output signal, but nothing I found actually supported that.  The RFPA, as shown earlier, doesn't have any modulating controls either, so the output must be constant-amplitude.  The frequency is fixed as well at the main oscillator frequency divided by 4; the only controllable things are the phase (selectable in steps of 90° through the mux) and whether the output is enabled or not, a.k.a. the length of the RF pulse.  These few degrees of freedom would be primitive and extremely limiting for an MRI system used for imaging, but I guess it was good enough here.
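(A behavioral sketch of how I think the phase selection works - the 80 MHz master clock divided by 4 gives four 20 MHz square waves offset by 90°, and the mux just picks one.  This is an illustration, not traced from the actual gate-level logic:)

def tx_bit(tick, phase_select, enabled):
    """One 80 MHz tick of a hypothetical transmit gate.
    phase_select 0..3 -> 0/90/180/270 deg; 'enabled' gates the RF pulse on and off."""
    if not enabled:
        return 0
    return 1 if (tick - phase_select) % 4 < 2 else 0    # 50% duty square wave at 20 MHz

# Example: a short burst at 90 deg phase (enabled for ticks 8..23)
burst = "".join(str(tx_bit(t, phase_select=1, enabled=(8 <= t < 24))) for t in range(32))
print(burst)    # -> 00000000011001100110011000000000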


The PLA (programmable logic array) chips with the version labels on them seem to run the show and do the pulse generation control, the "NPA12.00" one in particular.  The pulse sequence to generate may be stored in the HM6208 RAMs at the top-right of the board, or the parameters may be much simpler - the CPU board may only send this board a pulse length and phase, for example.  I'm not sure what all the additional coax connectors were used for, maybe doing some external digital I/O.

More analysis and wrap-up

Having the RF pulse frequency fixed at 20 MHz (unless an external clock is used, which it wasn't for this instrument) is interesting - NMR is ridiculously frequency-selective and narrow-band, and the MR resonant frequency is set by the magnetic field strength.  Since this field strength drifts, and depends on a lot of not-particularly-precise physical parameters (here for example a permanent magnet creates the static field), usually in the imaging world of MRI my understanding is that the magnetic field will "go where it wants" and the transmit frequency is adjustable, to follow the resonance wherever it may be that particular day.  However, since the transmit frequency is fixed here, it makes me wonder if this instrument goes at it from the other direction, and has the ability to adjust its magnet to make the MR resonant frequency match its onboard oscillator, instead of the other way around.  If so, this could take the form of a "shim coil", where a DC current adds or subtracts a small percentage of the static field, or temperature control of the permanent magnet to shift its frequency.  I know for sure that the instrument had a heater to stabilize the temperature of the permanent magnet (and therefore stabilize the field strength), OCXO-style, and so the firmware very well may have actively changed the temperature setpoint to get the field strength just where it was needed.
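(As a sense of scale for why the temperature loop matters: the temperature coefficient below is a generic ballpark for permanent-magnet materials, not a Bruker spec, so treat this as an order-of-magnitude illustration only:)

f_larmor   = 20e6      # Hz, the fixed operating frequency
temp_coeff = -1e-3     # assumed: ~-0.1% of field per degC, typical-ish for permanent magnets
print(f"~{f_larmor * temp_coeff / 1e3:.0f} kHz of resonance shift per degC of magnet temperature")
# -> ~-20 kHz/degC, i.e. the magnet temperature has to hold to small fractions of a degree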

Anyways, I hope this has been an interesting look inside, and that this is possible to follow despite all the jargon.  I've definitely enjoyed picking apart the workings and design decisions here.
 
The following users thanked this post: trophosphere, KE5FX, syau, zrq, miken, vectorotter, esepecesito, Rod

Offline Rod

  • Contributor
  • Posts: 31
  • Country: us
Re: Bruker NMS120 benchtop NMR teardown
« Reply #1 on: September 11, 2021, 09:33:57 am »
The Bruker NMS120 Minispec NMR Relaxometer / Analyzer was apparently produced in only limited numbers from 1993 to 1999.  It could only measure NMR relaxation times: the rate the NMR signal decayed after an RF pulse (free induction decay), or the rate it decayed between a series of RF pulses (spin-echo sequence). 

It was not an NMR spectrometer.  The permanent magnet had very low magnetic field homogeneity or uniformity, perhaps roughly 0.05% = 500 ppm = 10 kHz?  It had no electrical shim coils to improve the homogeneity.  The magnet was mechanically shimmed at the factory to align its pole pieces parallel for best resolution, and that was it.  So there was no NMR spectrum to be seen; all protons (1H hydrogen nuclei) would have the same single broad overlapping resonance frequency.  So it did not Fourier transform the time domain signal into the frequency domain to obtain an NMR spectrum; there was no spectrum to be obtained.

It simply measured the rate at which this 20 MHz RF signal would decay.  For example, a sample might contain both oil (a fast decaying signal) and water (a slowly decaying signal).  It could determine the relative intensity (% content) and decay rate of each.
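(For anyone curious what "determine the relative intensity (% content) and decay rate of each" looks like numerically, here's a generic two-component fit against simulated data - nothing to do with Bruker's actual firmware, and the amplitudes/decay times below are invented:)

import numpy as np
from scipy.optimize import curve_fit

# Simulated envelope-detected decay: a fast "oil-like" and a slow "water-like" component.
t = np.linspace(0, 0.5, 500)                                  # seconds
signal = 0.7*np.exp(-t/0.02) + 0.3*np.exp(-t/0.2) + np.random.normal(0, 0.01, t.size)

def biexp(t, a1, tau1, a2, tau2):
    return a1*np.exp(-t/tau1) + a2*np.exp(-t/tau2)

(a1, tau1, a2, tau2), _ = curve_fit(biexp, t, signal, p0=[0.5, 0.01, 0.5, 0.1])
print(f"component 1: {100*a1/(a1+a2):.0f} %  decay ~ {tau1*1e3:.0f} ms")
print(f"component 2: {100*a2/(a1+a2):.0f} %  decay ~ {tau2*1e3:.0f} ms")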

There was no frequency information in the RF signal; no need for phase sensitive detection.  All the information to be had could be obtained by simple RF amplitude envelope detection.
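(And a quick numerical version of that idea - a decaying 20 MHz burst reduced to its amplitude envelope.  In the real instrument this could be just a couple of diodes and a capacitor; the Hilbert transform here is only the lazy software equivalent:)

import numpy as np
from scipy.signal import hilbert

fs, f_rf, t2 = 80e6, 20e6, 100e-6                 # sample rate, carrier, decay constant (illustrative)
t = np.arange(0, 300e-6, 1/fs)
rf = np.exp(-t/t2) * np.cos(2*np.pi*f_rf*t)       # decaying RF burst
envelope = np.abs(hilbert(rf))                    # amplitude only - the phase/frequency info is discarded
i = np.searchsorted(t, t2)
print(f"envelope at t = T2: {envelope[i]:.2f} (vs. exp(-1) = {np.exp(-1):.2f})")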

Quote from: D Straney on August 08, 2021, 04:27:01 pm
The analog-to-digital conversion of the received signals is done by the Datel ADC-307-1 near the upper left, in the purple ceramic package.  This flash ADC chip was able to do 125 Msps @ 8 bits, which honestly must've been some pretty hot shit for the late 90s.  Unless the components above it are a very minimal downconverting mixer (which wouldn't make much sense in this context), it must've directly sampled the RF - I believe the magnet strength on this system gave it a Larmor frequency in the ballpark of 15 MHz, which means that the 80 MHz clock visible in a metal can above the ADC would give an OK ~2.5x oversampling ratio (especially since the LNA stages are likely tuned, so you're not expecting much out-of-band noise to get aliased).

That's possible, but there'd be little advantage to be gained by doing so. 

Examining the "RF I/O Board" closely, from the SIG IN connector ST1, it goes through one transistor stage T1.  I see two diodes between T1 and the Datel ADC 307.  Could these be an RF envelope detector, simply rectifying the RF and smoothing its output across one of the four adjacent capacitors?  Could you see whether the trace leading into the ADC input pin 30 leads back to those diodes?

Anyway, from what little I could glean on its specs, it could produce data points at a maximum sampling rate of 10 MSa/s (100 ns per point) or maybe 20 MSa/s (50 ns).  (At least, that's the fastest digitization rate users could set it to give them.  Bruker were fan-boys of oversampling and decimation to gain dynamic range, so that doesn't prove they didn't digitize faster internally.)

Quote from: D Straney on August 08, 2021, 04:27:01 pm
More analysis and wrap-up

Having the RF pulse frequency fixed at 20 MHz (unless an external clock is used, which it wasn't for this instrument) is interesting - NMR is ridiculously frequency-selective and narrow-band, and the MR resonant frequency is set by the magnetic field strength.  Since this field strength drifts, and depends on a lot of not-particularly-precise physical parameters (here for example a permanent magnet creates the static field), usually in the imaging world of MRI my understanding is that the magnetic field will "go where it wants" and the transmit frequency is adjustable, to follow the resonance wherever it may be that particular day.  However, since the transmit frequency is fixed here, it makes me wonder if this instrument goes at it from the other direction, and has the ability to adjust its magnet to make the MR resonant frequency match its onboard oscillator, instead of the other way around.  If so, this could take the form of a "shim coil", where a DC current adds or subtracts a small percentage of the static field, or temperature control of the permanent magnet to shift its frequency.  I know for sure that the instrument had a heater to stabilize the temperature of the permanent magnet (and therefore stabilize the field strength), OCXO-style, and so the firmware very well may have actively changed the temperature setpoint to get the field strength just where it was needed.

Well, with this low-resolution magnet, it couldn't produce a narrow-band signal at all.  The signal from pure water was of the order of 10 kHz wide (~100 us decay time), and from a rigid solid was ~200 kHz wide (5 us decay time).  So it didn't make any difference if the Larmor frequency shifted around, as long as it remained within the bandwidth of the NMR probe (perhaps 500 kHz).  The probe bandwidth had to be larger than the broadest signal anyway, so it had to have a low Q.

Interesting!  This was an expensive, high-component-count instrument.  It's easy to see why Bruker was forced to supersede it within only a few years with the MQ series, which do the same thing but are far simpler and cheaper to produce.  Thanks for sharing this.

p.s. More recently, an entire 21.8 MHz NMR spectrometer system-on-a-chip was built [1,2] and is being commercialized as a handheld NMR [3].
[1] https://www.pnas.org/content/early/2014/07/31/1402015111.abstract
[2] http://ham.seas.harvard.edu/upload/papers/2015/emrstm1421.pdf
[3] https://waveguidecorp.com/
« Last Edit: September 11, 2021, 10:22:11 pm by Rod »
 
The following users thanked this post: JohnG, D Straney

Offline D Straney (Topic starter)

  • Regular Contributor
  • Posts: 230
  • Country: us
Re: Bruker NMS120 benchtop NMR teardown
« Reply #2 on: September 12, 2021, 09:53:39 pm »
Great, this is exactly the kind of commentary I was hoping for.

So relaxometry only, huh?  Interesting, I'd given the instrument far too much credit, and as you say that definitely explains the fixed-frequency transmit.  I'm used to single-digit ppm magnet inhomogeneities being a big deal with the MR physicists I talk to, so 500 ppm sounds pretty crazy by comparison (even if reasonable for the application).  Makes sense that they'd do a simplified version as this does feel over-engineered for what it does - would be interesting to see inside one of the MQ series to compare.

I'd assumed those diodes near the ADC were for input clamping, but will have to trace out the whole input circuit now, very well may be some amplitude detection.  I'm a little surprised they used such a fast ADC (trading off resolution and precision, difficulty of storing the readout with that whole memory-interleaving scheme, etc.) likely combined with averaging, rather than just using a slower ADC with better specs in the first place...but I don't know the early-90's ADC landscape, maybe the next models down in sampling rate/analog BW weren't better enough to be worth it.  Or considering they were able to simplify a lot soon after, maybe it just got done this way to get something out there and selling, and pieces of this design were copied as "known good" from their larger-scale NMR or imaging systems.

