Author Topic: EEVblog #897 - Radiation Effects On Space Electronics  (Read 18048 times)


Offline Howardlong

  • Super Contributor
  • ***
  • Posts: 5410
  • Country: gb
Re: EEVblog #897 - Radiation Effects On Space Electronics
« Reply #25 on: July 08, 2016, 05:46:18 pm »
In the context of what was being discussed, I am certain it is tantalum: together with aluminium, made as a sandwich, it mitigates radiation by both absorbing and dispersing it.

Sometimes there are other elemental metals in the laminate, but they are layered from high atomic number to low atomic number, in what is sometimes known as graded Z. I don't pretend to understand how it works, but rightly or wrongly I've always likened it to light passing through layers of differing refractive indices and dispersing as a result. The sandwich itself is placed directly on the chip package.

As I mentioned before, this has been happening for years; I think I was first made aware of the practice about 15 years ago at a conference.
 

Offline R005T3r

  • Frequent Contributor
  • **
  • Posts: 387
  • Country: it
Re: EEVblog #897 - Radiation Effects On Space Electronics
« Reply #26 on: July 08, 2016, 06:04:54 pm »
I knew that erbium was used to absorb neutrons, but tantalum as shielding?
 

Offline Howardlong

  • Super Contributor
  • ***
  • Posts: 5410
  • Country: gb
Re: EEVblog #897 - Radiation Effects On Space Electronics
« Reply #27 on: July 08, 2016, 06:14:07 pm »
As I mentioned, look up "Graded-Z shielding".
 

Offline BobC

  • Supporter
  • ****
  • Posts: 119
Re: EEVblog #897 - Radiation Effects On Space Electronics
« Reply #28 on: July 08, 2016, 07:33:29 pm »
Most radiation test regimens start with lifetime dose estimation, generally done by storing the chip next to a radioactive source for a while.  You will want to test with the chip running with a load equivalent to that expected for the mission.
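For a sense of the exposure arithmetic (illustrative numbers, not from this thread): the irradiation time to reach a target dose is just

\[ t = \frac{D_{\mathrm{target}}}{\dot D}, \qquad \text{e.g.}\ \ t = \frac{30\,\mathrm{krad(Si)}}{50\,\mathrm{rad(Si)/s}} = 600\ \mathrm{s} \]

so a mission's worth of total dose can be accumulated in minutes next to a hot source.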

The real fun comes when testing SEEs (Single Event Effects).  For that, you need to actually deposit the equivalent energy present in the regions you care about.

There are lots of beamlines available that will accelerate electrons and (more rarely) protons to relativistic speeds, but these machines can't get anywhere close to cosmic ray energies.  When you need massive energy and you can't add more speed, then you need to hurl larger atoms (well, ions).

The problem with large ions is they lack the penetration depth of lighter particles: Ions slow down almost instantly.  The good news is that chips are thin, and hitting them straight on provides excellent test conditions.  Except the chip's epoxy package is in the way: This means the chips have to be "de-lidded" to expose the silicon (often using friendly stuff like "fuming nitric acid") before putting it in the beamline.

Beamlines that toss heavy ions are tough to find, and even harder to get time on.  Brookhaven National Laboratory (BNL) on Long Island has the Tandem Van de Graaff facility, which uses submarine-sized Van de Graaff generators to accelerate heavy ions to high energies using only static electric fields (most other accelerators rely more on RF and magnetic fields).

This testing must be done in a vacuum, so most of your time is spent doing setup and tear-down, with less time actually bombarding the part.  So it is vital to ensure that no time in the beam is wasted: You don't want your part to have a fatal failure early on, so you start with lighter (but still heavy) ions at lower energies and ramp up from there.  But you need to ramp quickly to get into the desired energy realms, so very careful experiment design is critical.

Since we are slamming the chip to generate SEEs, the experimental setup must be extremely good at recovering from them, preferably far better than the recovery methods planned for the actual mission.  And it must recover quickly, to avoid wasting beam time.

So what you do is first prepare multiple systems to go into the beamline vacuum chamber, preferably at least two at a time; I used two, configured to monitor each other. Both systems initially boot into "test" mode. The first one to get hit cycles its power, which the other system sees and takes as its cue to switch to "monitoring" mode.

The chamber contains a stage that permits X-Y translation and at least one axis of tilt.  So once the chamber is closed, pumped down, and exposed to the beamline, we start with the stage oriented parallel to the beam until the beamline operator tells us that our selected ion is being streamed at the specified energy and flux (particle rate), at which time we tilt the stage to be normal to the beam.  Then we adjust X-Y until we start seeing hits.  (There is a beamline laser spotter, but it can be difficult to see through the vacuum chamber view ports.)
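One standard reason for that tilt axis (common heavy-ion test practice, not something this post spells out): tilting the die away from normal incidence lengthens the ion's path through a thin sensitive volume, raising the effective LET without changing the beam:

\[ \mathrm{LET}_{\mathrm{eff}} = \frac{\mathrm{LET}}{\cos\theta} \]

so a 60 degree tilt doubles the effective LET, letting one ion species cover several test points.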

The test-mode software has to know two things: 1) how to detect that a hit has occurred, and 2) how to order a power cycle.  Detection is done by continuously scanning flash for changes (a simple checksum suffices), and continuously writing patterns to RAM and checking them.
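A minimal sketch of that detection loop in C (the addresses, sizes, and helper names here are hypothetical; as the next paragraph notes, the real thing was hand-tuned assembler):

#include <stdint.h>

/* Hypothetical memory map and sizes -- adjust for the real part. */
#define FLASH_BASE  ((const volatile uint32_t *)0x08000000u)
#define FLASH_WORDS 4096u
#define RAM_BASE    ((volatile uint32_t *)0x20000000u)
#define RAM_WORDS   1024u
#define PATTERN     0xA5A5A5A5u

extern uint32_t known_good_sum;        /* flash checksum recorded at boot */
extern void request_power_cycle(void); /* fires the crowbar / power logic */

static uint32_t flash_checksum(void)
{
    uint32_t sum = 0;
    for (uint32_t i = 0; i < FLASH_WORDS; i++)
        sum += FLASH_BASE[i];          /* a simple additive checksum suffices */
    return sum;
}

void test_mode_loop(void)
{
    for (;;) {
        /* 1) Scan flash for changes: any flipped bit alters the checksum. */
        if (flash_checksum() != known_good_sum)
            request_power_cycle();

        /* 2) Write a pattern to RAM, then read it back and compare. */
        for (uint32_t i = 0; i < RAM_WORDS; i++)
            RAM_BASE[i] = PATTERN;
        for (uint32_t i = 0; i < RAM_WORDS; i++)
            if (RAM_BASE[i] != PATTERN)
                request_power_cycle();
    }
}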

To do this as simply and as quickly as possible and still have the software be reliable, it is best to try to get the program to run using only the instruction cache and the available registers. And that typically means writing in assembler, and still taking every possible algorithmic shortcut.

Once you get hit, the power cycle must be fast, and the chip must be discharged as quickly as possible.  It's not enough to simply turn off the power supply: It is critical that VCC is pulled to ground with a very low impedance.  The simplest approach is to have a load resistor in parallel with the system, but quite often even that isn't fast enough.  Even a SPDT relay is often too slow. The problem isn't the power supply: It's getting the residual electrical energy out of the chip.

But putting an FET across the supply output will be plenty fast. A nice, big, fat FET (BFFET) with extremely low RdsOn and a tolerance for high current spikes.  But even with the BFFET, it is important to guard against ground-bounce, which can reverse-bias your chip and cause damage unrelated to the SEEs (went to the School of Hard Knocks to learn this one).

It is a thing of beauty when it all comes together in the vacuum chamber.  In my case, I was piggybacking on another beamline customer, and could only use the beamline when they weren't.  For some reason they felt sleep was important (wimps), so I had lots of time in the wee hours for my runs, in addition to helping them with their runs.

But the best fun is when you get back and start analyzing the data.  Each SEE is characterized and timestamped, and logged with the beamline configuration.  That data is fed to a program (well, a spreadsheet in our case) to determine how well the chip can be expected to perform in our expected space environment(s).
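The core of that spreadsheet is the standard rate-prediction fold (general method, stated from textbook knowledge, not quoted from the post): each run yields a device cross-section at its LET, which is then combined with the expected on-orbit LET spectrum:

\[ \sigma_{\mathrm{dev}}(\mathrm{LET}) = \frac{N_{\mathrm{events}}}{\Phi}, \qquad R_{\mathrm{orbit}} \approx \int \sigma_{\mathrm{dev}}(\mathrm{LET})\, \frac{d\phi}{d\,\mathrm{LET}}\, d\,\mathrm{LET} \]

where \(\Phi\) is the fluence (ions/cm²) delivered in the run and \(d\phi/d\,\mathrm{LET}\) is the environment's differential LET spectrum (e.g. from CREME96, mentioned later in this thread).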

I forgot to mention that we were testing COTS parts, literally purchased from the stock of a commercial electronics parts distributor.  How did we select our processor?  A foreign silicon manufacturer had been making some amazingly good rad-hard parts, but none of those parts were what we needed.  But they also made a couple lines of microcontrollers that would work fine for us.  So we learned what production lines were used to make the rad-hard parts, then found out which of their processors were made on those lines, and finally selected the one from that list that worked best for us.

We also learned that all rad-hard chips also have excellent thermal performance specifications (they need to), so the only thing special about the processor we purchased was that we got the "automotive" temperature range.

When all the data was analyzed, we calculated that the expected processor lifetime and SEE rate were both WELL within our needs.  Years of useful life, when we needed only months.  But we still went with redundant processors, and still bought a tiny number of rad-hard gates for the voting logic.

Just in case we made a mistake in our math.
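For illustration, that voting logic reduces to a bitwise two-out-of-three majority; a generic C sketch, not the author's actual gate-level design:

#include <stdint.h>

/* Two-out-of-three majority vote, bit by bit: a single upset
   processor is simply outvoted by the other two.  In hardware this
   is one AND-OR network per bit, which is why only a small number
   of rad-hard gates is needed. */
static inline uint32_t vote2of3(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (b & c) | (a & c);
}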
 
The following users thanked this post: newbrain, MK14

Offline Howardlong

  • Super Contributor
  • ***
  • Posts: 5410
  • Country: gb
Re: EEVblog #897 - Radiation Effects On Space Electronics
« Reply #29 on: July 08, 2016, 08:55:27 pm »
There are radioactive sources and radioactive sources. One problem of trying to characterise behaviour is that the source(s) need to be representative of the environment the unit will end up in. For example, using a Cobalt 60 source is all well and good, but how well does its radiation makeup represent the target environment?

This is one reason there are some Gumstix going up onto the ISS to see how they manage in a real environment, and I assume how they compare to terrestrial models.
 

Offline jeffg

  • Newbie
  • Posts: 2
  • Country: us
Re: EEVblog #897 - Radiation Effects On Space Electronics
« Reply #30 on: July 13, 2016, 04:52:02 am »
Well, he gets a lot of it right.  But some things are confused.

Where you are does matter.  LEO is a walk in the park.  MEO and HEO are problematic over the longer term.  Planetary space is highly problematic even for short periods (the primary reason we haven't had a manned mission to Mars is that the probability of instant death from a solar flare, or of certain cancer due to radiation, is in the double digits).

Total dose is ionizing radiation including X-rays, gamma rays, protons, and ions (which hit things and produce Bremsstrahlung X-rays and gamma rays, which is why these are the common theme for damage).  The energy is primarily deposited into materials by photon interactions at ionizing energy levels.  The materials define how that happens (so there's a rad(Si) absorption characteristic you integrate over).
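In symbols, that integration is the standard dosimetry fold of the photon fluence spectrum with the material's mass energy-absorption coefficient (a general textbook relation, not quoted from the post):

\[ D_{\mathrm{Si}} = \int \varphi(E)\, E \left( \frac{\mu_{en}(E)}{\rho} \right)_{\mathrm{Si}} dE \]

where \(\varphi(E)\) is the fluence per unit energy; this is what gives the rad(Si) figure its material subscript.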

Total dose effects are not primarily due to interference with doping but to interaction with the oxides or dielectrics contacting the semiconductor.  Radiation can induce high-energy charge injection into oxides by creating defects which trap charge.  The traps get charged and then cause channels to turn on (in MOS) or form parasitic channels where they never existed or were intended (in anything else).  So unintended current paths and leakage currents are created that degrade performance in ways the chip circuits never intended.  With enough damage/leakage, analog circuits debias and digital circuits shift their logic thresholds.  The net result: circuits stop working.
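The first-order consequence in a MOSFET is the textbook threshold-voltage shift from trapped oxide charge (a standard relation, stated from general knowledge):

\[ \Delta V_{th} = -\frac{\Delta Q_{ot}}{C_{ox}} \]

where \(\Delta Q_{ot}\) is the trapped charge per unit area and \(C_{ox}\) the oxide capacitance per unit area; net-positive trapped charge drives \(V_{th}\) negative, which is exactly the "channels turn on" failure mode above.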

Single event effects are primarily caused by highly ionized heavy ions (e.g. Fe+8, Au+9, etc.) traveling at relativistic speeds.  Also included are X-ray and gamma-ray pulses from the Sun and supernovae.  These carry and deposit INSANE amounts of energy into a small space.  In semiconductors they can produce >1000x more hole-electron pairs than exist during normal device operation.  Obviously this disrupts "normal" device operation pretty seriously.
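To put a number on "INSANE" (standard silicon conversion: one electron-hole pair per 3.6 eV deposited, density 2.33 g/cm³):

\[ Q\,[\mathrm{pC}] \approx \frac{\mathrm{LET}\,[\mathrm{MeV{\cdot}cm^2/mg}] \times t\,[\mu\mathrm{m}]}{97} \]

so an ion with an LET near 100 MeV·cm²/mg deposits roughly 1 pC per micron of track, versus the few fC held on a typical storage node.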

Shielding for total dose actually increases total dose when cosmic-ray single-event particles are present.  There is a process called "spallation" that occurs when the ion hits an atom in your shield (higher Z in the shield makes the spallation worse).  Spallation products include protons and ions, generated at 10:1 to 1000:1 from a single cosmic-ray ion, and they "manifest" on the other side of the shielding.

This is part of why shielding is limited in effectiveness.  Strictly speaking, if it weren't for single-event effects, more would be better, but each added half-value layer of thickness only halves the transmitted dose again: exponentially diminishing returns, and it never goes to zero.  Ironically, you have shielded against the external dose but increased the interactions by which cosmic rays create additional total dose inside and behind the shielding!  Nature is devious!  Spallation in the semiconductor can interact with doping, but generally the dose rate is too small to affect end-of-life failure.
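The diminishing returns follow directly from exponential attenuation (textbook relation):

\[ I(x) = I_0\, e^{-\mu x}, \qquad x_{1/2} = \frac{\ln 2}{\mu} \]

each additional half-value layer \(x_{1/2}\) of shield halves the transmitted dose again, which is why it approaches zero but never gets there.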

He's talking about Ta (tantalum) field shielding.  It has a much higher Z (atomic number) than aluminium, so a thin layer provides a lot of stopping power per unit area, which is attractive for space weight budgets.  Spallation and weight limit shielding practicality.

Radiation hardening is NOT just paperwork or testing: that idea is wrong for high hardness.  Usually ICs need to be designed specifically for maximum hardening (anything over 1 Mrad).  Everything from transistor design up to the layout design rules is different.  Radiation-tolerant and low-dose-hardened parts (e.g. 10 krad to 500 krad) can be "binned out" with testing.

But there are fundamental problems with ever using binning for reliability (of which radiation effects are a subset): it's generally a dangerous strategy.  You can't "test to reliability"; you can only "design to reliability".  Testing is part of assuring the design will be reliable, but the cause-and-effect only goes in one direction.  This is generally true of all reliability, not just radiation effects.  Military electronics has a long history of Epic Fail when people tried to reverse the arrow of causation in this regard.

Ironically, shrinking feature size often improves single-event hardness because the target cross-section volume is smaller for each device, BUT you get more transistors which can be upset.  So you still need error correction or reset/watchdogging.  A single event often triggers a parasitic SCR/DIAC that exists in a bulk-CMOS substrate/well structure, which results in latch-up.  SOI, FinFETs, etc. can address this too, but there are usually major cost penalties that often make bulk CMOS a better choice (with the right design).

DRAM is super sensitive because it stores (tiny) amounts of charge, and charge is exactly what radiation spuriously injects at random into the IC circuitry.  This is why you go with SRAM if you want super-hardened memory.

There's also neutron damage, but those only come from nuclear weapons.   >:D
Only neutrons and some spallation-product radiation cause a great deal of doping disruption.

A good introductory reference is Messenger & Ash, The Effects of Radiation on Electronic Systems:

https://books.google.com/books/about/The_effects_of_radiation_on_electronic_s.html?id=aQFTAAAAMAAJ

Also see IEEE NSREC papers from the last 50 years.  The conference is fun, too.

http://www.nsrec.com

When you design the system, you start with a mission lifetime, use software like what he describes to estimate the accumulated radiation dose of various types for the planned orbital paths and injections, and then balance that against the hardness you achieve with part selection and shielding.  You typically start with a "10 year life" and then consider anything beyond that "a gift"; the 10-year figure sets your radiation dose budget.
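That balance is often summarized as a radiation design margin (a standard figure of merit; the factor-of-two floor is a common rule of thumb, not something stated in this thread):

\[ \mathrm{RDM} = \frac{D_{\mathrm{capability}}}{D_{\mathrm{mission}}} \]

with RDM \(\geq\) 2 at end of life being a typical requirement before a part and shielding combination is accepted.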

I used to do radiation effects testing for DOD satellites/systems.  The Co-60 source we used was 2000 Rads/second btw.  Being involved in this actually got me started with testing with stuff like SMUs and LCR meters and deeper into device physics.  You have to work at the device level to understand radiation effects and design for hardness.
 
The following users thanked this post: apis, Howardlong, Brumby, newbrain, MK14

Offline Howardlong

  • Super Contributor
  • ***
  • Posts: 5410
  • Country: gb
Re: EEVblog #897 - Radiation Effects On Space Electronics
« Reply #31 on: July 13, 2016, 05:27:01 am »
Awesome second post jeffg!
 

Offline nwvlab

  • Regular Contributor
  • *
  • Posts: 65
  • Country: it
    • next-hack.com
Re: EEVblog #897 - Radiation Effects On Space Electronics
« Reply #32 on: July 20, 2016, 07:35:27 pm »

Total dose effects are not primarily due to interference with doping but to interaction with the oxides or dielectrics contacting the semiconductor.  Radiation can induce high-energy charge injection into oxides by creating defects which trap charge.  The traps get charged and then cause channels to turn on (in MOS) or form parasitic channels where they never existed or were intended (in anything else).  So unintended current paths and leakage currents are created that degrade performance in ways the chip circuits never intended.  With enough damage/leakage, analog circuits debias and digital circuits shift their logic thresholds.  The net result: circuits stop working.

With ultra-thin gate oxides (<5 nm), most of the charge trapping occurs in thick oxide regions (e.g. the STI regions). Besides that, we also found that radiation damages the Si/SiO2 interface (creating fast and border traps). These increase the subthreshold swing, which, in turn, can either increase the effective threshold voltage or increase the off-state leakage current (or both).
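Those interface traps enter the textbook subthreshold-swing expression as an added capacitance term (standard MOS relation, not from the post):

\[ S = \frac{kT}{q} \ln 10 \left( 1 + \frac{C_{dep} + C_{it}}{C_{ox}} \right) \]

where \(C_{it}\) is the interface-trap capacitance; radiation-generated traps raise \(C_{it}\) and therefore \(S\).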

Also, the dose rate is a concern for some devices, and its effect can be counter-intuitive. Surprisingly, it was initially found that some integrated BJTs were "very" robust against TID. Then people started to analyse the effects of lower dose rates and discovered the so-called enhanced low dose rate sensitivity (ELDRS), which is what really kills them.

Offline hkBattousai

  • Regular Contributor
  • *
  • Posts: 117
  • Country: 00
Re: EEVblog #897 - Radiation Effects On Space Electronics
« Reply #33 on: December 16, 2017, 03:10:44 pm »
Can you give the address of the website about that space simulation software mentioned in the video?
 

Offline Howardlong

  • Super Contributor
  • ***
  • Posts: 5410
  • Country: gb
Re: EEVblog #897 - Radiation Effects On Space Electronics
« Reply #34 on: December 17, 2017, 12:32:22 pm »
Can you give the address of the website about that space simulation software mentioned in the video?

Try looking up SPENVIS and CREME96. I am not current on this; the last time I used these models was some years ago, through a web interface. Also note that there are two primary effects to consider: long-term total ionising dose (TID) and short-term single event effects (SEE). There are also space weather effects, which can change markedly according to the relative time within the sunspot cycle: I think SPENVIS deals with this, not sure about CREME96.
 
