Well, he gets a lot of it right. But some things are confused.
Where you are does matter. LEO is a walk in the park. MEO and HEO are problematic over the longer term. Planetary space is highly problematic even for short periods (the primary reason we haven't had a manned mission to Mars is that the probability of instant death from a solar flare, or of certain cancer from accumulated radiation, is in the double digits).
Total Dose is ionizing radiation including X-rays, gamma rays, protons and ions (which hit things and produce Bremsstrahlung X-rays and gamma rays - which is why these are the common theme for damage). The energy is primarily deposited into materials by "photon interaction at ionizing energy levels". The material defines how that happens (so there's a Rad(Si) absorption characteristic you integrate over).
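To make the "integrate over" part concrete, here's a toy sketch (my own illustration, not from any real environment tool - the spectrum and coefficients are placeholders): dose is the fluence spectrum weighted by the material's mass energy-absorption coefficient.

    # Toy sketch: total ionizing dose from a photon fluence spectrum.
    # energies[i]  : energy grid [MeV]
    # phi[i]       : fluence per unit energy [photons/(cm^2 * MeV)]
    # mu_en_rho[i] : mass energy-absorption coefficient of Si [cm^2/g]
    # All inputs are placeholders, not real environment data.
    MEV_PER_G_TO_RAD = 1.602e-8   # 1 MeV/g deposited = 1.602e-8 rad

    def total_dose_rad_si(energies, phi, mu_en_rho):
        dose_mev_per_g = 0.0
        for i in range(len(energies) - 1):
            dE = energies[i + 1] - energies[i]
            # energy absorbed per gram = fluence * E * (mu_en/rho)
            dose_mev_per_g += phi[i] * energies[i] * mu_en_rho[i] * dE
        return dose_mev_per_g * MEV_PER_G_TO_RAD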
Total dose effects are not primarily due to interfering with doping but to interaction with the oxides or dielectrics contacting the semiconductor. Radiation can induce high-energy charge injection into oxides and create defects which trap that charge. The traps get charged and then cause channels to turn on (in MOS) or form parasitic channels where they never existed or were intended (anything else). So there are unintended current paths and leakage currents created that degrade performance in ways the chip circuits never intended. With enough damage/leakage, analog circuits debias and digital circuits shift their logic thresholds. The net result - circuits stop working.
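Back-of-the-envelope on the threshold shift (a sketch with illustrative numbers, not data from any real process): trapped oxide charge shifts a MOSFET threshold by roughly -q*N_ot/C_ox, which is how unintended channels get turned on.

    # Sketch: threshold shift from radiation-induced trapped oxide charge.
    # delta_Vth = -q * N_ot / C_ox, with C_ox = eps_ox / t_ox (per unit area).
    q      = 1.602e-19   # C, electron charge
    eps_ox = 3.45e-13    # F/cm, SiO2 permittivity
    t_ox   = 10e-7       # cm, a 10 nm gate oxide (illustrative)
    N_ot   = 5e11        # trapped charges per cm^2 (illustrative)
    C_ox   = eps_ox / t_ox            # F/cm^2
    dVth   = -q * N_ot / C_ox         # about -0.23 V for these numbers
    print(f"delta_Vth = {dVth*1000:.0f} mV")
    # Trapped holes are positive, so NMOS thresholds shift downward -
    # exactly the direction that turns parasitic channels on.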
Single Event is primarily highly ionized heavy ions (e.g. Fe+8, Au+9, etc.) traveling at relativistic speeds. Also included are X-ray and gamma ray pulses from the Sun and supernovae. These carry and deposit INSANE amounts of energy into a small space. In semiconductors they can produce >1000x more hole-electron pairs than exist during normal device operation. Obviously this disrupts "normal" device operation pretty seriously.
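For scale, the standard back-of-the-envelope (one e-h pair per ~3.6 eV in silicon; the LET value here is just illustrative):

    # Sketch: charge deposited along a heavy-ion track in silicon.
    # Rule of thumb: LET of ~97 MeV*cm^2/mg works out to ~1 pC per micron.
    LET    = 40.0        # MeV*cm^2/mg, an illustrative heavy ion
    RHO_SI = 2330.0      # mg/cm^3, density of silicon
    E_PAIR = 3.6         # eV per electron-hole pair in Si
    Q_E    = 1.602e-19   # C, electron charge
    mev_per_um   = LET * RHO_SI * 1e-4        # MeV deposited per micron of track
    pairs_per_um = mev_per_um * 1e6 / E_PAIR  # e-h pairs per micron
    pc_per_um    = pairs_per_um * Q_E * 1e12  # picocoulombs per micron
    print(f"{pairs_per_um:.2e} pairs/um = {pc_per_um:.2f} pC/um")
    # ~2.6e6 pairs per micron - orders of magnitude more charge than any
    # storage node holds, which is why one strike can flip state.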
Shielding for total dose actually increases total dose when you have cosmic-ray single-event particles. There is a process called "spallation" that occurs when the ion hits an atom in your shield (higher Z in the shield makes the spallation worse). Spallation products include protons and ions, they get generated at 10:1 to 1000:1 from a single cosmic ray ion, and they "manifest" on the other side of the shielding.
This is part of why shielding is limited in effectiveness. Strictly speaking, if it weren't for single-event, more would be better, but each added half-value thickness only halves the transmitted dose: it's exponentially diminishing returns, and it never goes to zero. Ironically, you have shielded against the external dose but increased the cosmic-ray interactions that create total dose inside of and behind the shielding! Nature is devious! Spallation in the semiconductor can interact with doping, but generally the dose rate is too small to affect end-of-life failure.
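The diminishing returns are just the exponential attenuation law. A sketch (the coefficient and doses are made up for illustration):

    import math
    # Sketch: transmitted dose behind a slab, D = D0 * exp(-mu * x).
    # Each half-value layer (ln2/mu) halves the dose; it never reaches zero.
    mu = 0.5      # 1/cm, effective attenuation coefficient (illustrative)
    D0 = 100.0    # krad, unshielded mission dose (illustrative)
    for x_cm in (1.0, 2.0, 4.0, 8.0):
        print(f"{x_cm:4.1f} cm -> {D0 * math.exp(-mu * x_cm):6.2f} krad")
    # Each doubling of thickness buys less and less - and every extra gram
    # of shield is more target mass for cosmic-ray spallation.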
He's talking about Ta (tantalum) field shielding. It has a much higher Z (atomic number) and density than aluminum, so a thin layer gives you a lot of stopping in very little volume, which is attractive for space weight budgets. Spallation and weight limit shielding practicality.
It's NOT paperwork or testing that gets you radiation hardening - that idea is wrong at high hardness levels. ICs usually need to be designed specifically for maximum hardening (anything over 1 MRad): everything from transistor design up to layout design rules is different. Radiation-tolerant and low-dose-hardened parts (e.g. 10 kRad - 500 kRad) can be "binned out" with testing.
But there are fundamental problems with ever using binning for reliability (of which radiation effects are a subset) - it's generally a dangerous strategy: you can't "test to reliability", you can only "design to reliability". Testing is part of assuring the design will be reliable, but the cause-and-effect only goes in one direction. This is generally true of all reliability, not just radiation effects. Military electronics has a long history of Epic Fail when people tried to reverse the arrow of causation in this regard.
Ironically, shrinking feature size often improves single-event hardness because the target cross-section volume of each device is smaller, BUT you get more transistors which can be upset. So you still need error correction or reset/watch-dogging (see the sketch below). Single event often triggers a parasitic SCR/diac that exists in a bulk CMOS substrate/well structure, which results in latch-up. SOI, FinFETs, etc. can address this too, but there are usually major cost penalties that often make bulk CMOS a better choice (with the right design).
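For flavor, the simplest form of that mitigation (a toy sketch, not flight code): triple modular redundancy with a majority vote, plus periodic scrubbing to repair the upset copy before a second strike lands.

    # Sketch: triple modular redundancy (TMR) with a bitwise majority vote.
    # Keep three copies; a single-event upset in any one copy is outvoted.
    def tmr_read(a: int, b: int, c: int) -> int:
        return (a & b) | (b & c) | (a & c)   # per-bit majority

    def tmr_scrub(copies: list) -> None:
        good = tmr_read(*copies)
        copies[:] = [good, good, good]       # rewrite all copies to repair

    regs = [0b1011, 0b1011, 0b1011]
    regs[1] ^= 0b0100                        # simulate an SEU flipping one bit
    assert tmr_read(*regs) == 0b1011         # the vote still reads correctly
    tmr_scrub(regs)                          # scrub before a second upset lands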
DRAM is super sensitive because you are storing a (tiny) charge, and charge is exactly what radiation spuriously introduces at random into the IC circuitry. This is why you go with SRAM if you want super-hardened memory.
There's also neutron damage, but neutrons only come from nuclear weapons. Only neutrons and some spallation products cause a great deal of doping disruption.
A good reference intro is Messenger & Ash, The Effects of Radiation on Electronic Systems:
https://books.google.com/books/about/The_effects_of_radiation_on_electronic_s.html?id=aQFTAAAAMAAJ

Also IEEE NSREC papers over the last 50 years. The conference is fun also:
http://www.nsrec.com

When you design the system, you start with a mission lifetime, you use software like what he describes to estimate the accumulated radiation dose of various types for various orbital paths and injections, and then that has to balance with the hardness you achieve through part specification and shielding. You typically start with a "10 year life", consider anything beyond that "a gift" - but anything less means the design didn't meet its radiation dose budget.
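The budget arithmetic itself is simple once the environment model gives you a dose rate (a sketch: the 2x margin is a common starting point but every program sets its own, and all numbers here are illustrative):

    # Sketch: mission total-dose budget vs. part hardness.
    mission_years = 10.0
    dose_rate     = 3.0     # krad(Si)/year behind shielding, from env. modeling
    rdm           = 2.0     # radiation design margin
    required      = mission_years * dose_rate * rdm   # 60 krad(Si)
    part_rating   = 100.0   # krad(Si), rated TID of the candidate part
    print(f"need {required:.0f} krad(Si), part is rated {part_rating:.0f}")
    assert part_rating >= required, "add shielding or pick harder parts"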
I used to do radiation effects testing for DOD satellites/systems. The Co-60 source we used was 2000 Rads/second, btw - that's 100 kRad in under a minute. Being involved in this is actually what got me started with instruments like SMUs and LCR meters, and pulled me deeper into device physics. You have to work at the device level to understand radiation effects and design for hardness.