Author Topic: Semiconductor gurus, how do I model a 1D bipolar transistor?


Offline berke (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: fr
  • F4WCO
Semiconductor gurus, how do I model a 1D bipolar transistor?
« on: February 06, 2024, 10:07:20 am »
Yesterday, I was trying to analyze a common-base amplifier (for my education; there is another thread), and I noticed that the base current predicted from forward beta is much lower than the base current I get with SPICE, something like 60 µA vs 100 µA.  In other words the beta appeared to be lower.  Emitter current was 5 mA.

From there I started digging a bit.  It wasn't because of the usual VBE=0.7 V approximation, in fact plugging the voltages and forward/reverse gains into Ebers-Moll didn't do the trick either.

As explained in many books, a BJT can be modeled using two diodes joined at the base (common anode for NPN), each diode being paralleled with a linear current source that depends on the current in the other diode.  Writing the Shockley equations for the diodes, and using the forward and reverse coefficients for the current sources we get the Ebers-Moll model which I think is a thing of beauty given its symmetry.
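For reference, here's the transport form of Ebers-Moll in a few lines of Python (a sketch; the Is and beta values are illustrative, not from any real model card):

```python
import math

VT = 0.02585  # thermal voltage kT/q at ~300 K, in volts

def ebers_moll(vbe, vbc, Is=1e-14, bf=200.0, br=2.0):
    """NPN Ebers-Moll (transport form): returns (Ic, Ib, Ie) in amps."""
    ict = Is * (math.exp(vbe / VT) - math.exp(vbc / VT))  # transport current
    ibe = (Is / bf) * (math.exp(vbe / VT) - 1.0)          # B-E diode current
    ibc = (Is / br) * (math.exp(vbc / VT) - 1.0)          # B-C diode current
    ic = ict - ibc
    ib = ibe + ibc
    return ic, ib, ic + ib

# forward active (VBE = 0.65 V, VBC = -5 V): Ic/Ib comes out essentially beta_F
ic, ib, ie = ebers_moll(0.65, -5.0)
```

In forward-active operation this predicts Ic/Ib equal to beta_F almost exactly, which is exactly why it can't reproduce the excess base current.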

In the real world this model is OK for DC biasing unless, as I found out yesterday, you have low or high base currents.  Reading Getreu's 1978 book I tried the next model, which he designates "EM2" and which adds terminal resistors (and capacitances I don't care about for DC), but since these resistances and the base current are both small they didn't change the results much, and the excess IB remains unexplained.

The next "EM3" model includes the "Early effect", which apparently is the consequence of the "base width" being modulated by the CB voltage.  With an Early voltage of 100 V for my particular transistor and around 5 V of VCB this predicts only a small ~5% change in Is and beta, still not explaining the base current.
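The EM3-style Early correction is just a first-order multiplicative factor on the collector current; as a sketch (VA = 100 V as above, Is illustrative):

```python
import math

VT = 0.02585  # thermal voltage at ~300 K, volts

def ic_with_early(vbe, vcb, Is=1e-14, va=100.0):
    """Forward-active collector current with the first-order Early factor."""
    return Is * math.exp(vbe / VT) * (1.0 + vcb / va)

# VCB = 5 V against VA = 100 V: only ~5% more collector current
ratio = ic_with_early(0.65, 5.0) / ic_with_early(0.65, 0.0)
```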

As I read the rest of the chapter I learn that there are three base current regimes, the low current, middle current and high current ones, and that these can be modeled by adding an extra pair of diodes in parallel with different non-ideality coefficients to adjust the slopes.  In SPICE there are ideality parameters Ne, Nc and regime transition currents Ikf, Ikr for that.
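As a sketch of how those parameters shape the three regimes (a hypothetical gp_dc with illustrative Is, Ise, Ne, Ikf values, forward operation only): the Ne-slope leakage diode eats into beta at low bias, and the Ikf knee rolls it off at high bias.

```python
import math

VT = 0.02585  # thermal voltage, volts

def gp_dc(vbe, Is=1e-14, bf=200.0, ne=2.0, ise=1e-12, ikf=0.1):
    """Sketch of Gummel-Poon-style DC currents, forward region only."""
    idf = Is * (math.exp(vbe / VT) - 1.0)  # ideal transport diode
    # base current: ideal component plus the n=Ne leakage diode
    ib = idf / bf + ise * (math.exp(vbe / (ne * VT)) - 1.0)
    # collector current rolled off by the Ikf high-injection factor qb
    qb = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * idf / ikf))
    return idf / qb, ib

betas = []
for vbe in (0.45, 0.65, 0.85):
    ic, ib = gp_dc(vbe)
    betas.append(ic / ib)
# beta peaks at the middle bias and degrades at both ends
```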

Long story short, at very low and very high currents the gain degrades.

Now plopping the equations in Octave or Maxima won't be a problem but that doesn't give much insight and I've been longing for a better intuitive understanding of transistors since being a teenager, so I ask...

How the heck can I physically model a 1D transistor?

I want the simplest model that allows me to physically reproduce the three base current regimes.  I don't have a problem with math or numerical computation, my problem is more conceptual, I have a hard time "getting" semiconductor physics.

Intuitively I would have the transistor state at time t represented by q(x,t), which could be vector-valued if needed.  I assume that given V(C) and V(E), and maybe their first-order derivatives, I should be able to compute I(C) and I(E) and q(x,t+dt) and step it from there, but that might very well be a completely wrong approach.

Can the DC transistor be modeled like this?  If yes how do I represent this system, i.e. what goes into q(x,t)?  If not, how?
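Something like this skeleton is what I have in mind (all names made up, the physics in step() deliberately left as a placeholder):

```python
import numpy as np

class Device1D:
    """Toy container for the state q(x, t): carrier densities on a grid."""
    def __init__(self, nx=100, length=1e-6):
        self.x = np.linspace(0.0, length, nx)  # position, m
        self.n = np.zeros(nx)                  # electron density, m^-3
        self.p = np.zeros(nx)                  # hole density, m^-3

    def step(self, v_emitter, v_collector, dt):
        """Advance the state by dt given terminal voltages; return (Ie, Ic).
        Physics left out: Poisson solve, drift, diffusion, generation and
        recombination would all go here."""
        ie = ic = 0.0
        return ie, ic

dev = Device1D()
ie, ic = dev.step(0.0, 5.0, dt=1e-12)
```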

Yes I've heard about Google but I get too many results going in too many different directions and often for FETs.
« Last Edit: February 06, 2024, 10:09:10 am by berke »
 

Online moffy

  • Super Contributor
  • ***
  • Posts: 2122
  • Country: au
Re: Semiconductor gurus, how do I model a 1D bipolar transistor?
« Reply #1 on: February 06, 2024, 10:25:10 am »
If what you are talking about is the variation of Hfe vs Ic, it can vary enormously from transistor type to type, say the LM394 vs the 2N3904, which I suspect is very design- and process-dependent. I am not sure that there is a simple answer to the problem. Would love it if there was.
 

Offline berke (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: fr
  • F4WCO
Re: Semiconductor gurus, how do I model a 1D bipolar transistor?
« Reply #2 on: February 06, 2024, 05:28:35 pm »
If what you are talking about is the variation of Hfe vs Ic, it can vary enormously from transistor type to type, say the LM394 vs the 2N3904, which I suspect is very design- and process-dependent. I am not sure that there is a simple answer to the problem. Would love it if there was.
Yes, that's the effect; it's more properly expressed as gain vs. collector current, as in this plot:
[attached: plot of gain vs. collector current]

I didn't quite catch the link between that and the Early effect, if any.

The effect does of course change from device to device, but no I'm only interested in physically understanding one specific type of device at a time.

From Getreu's book, in the low-current region there are three physical processes at play that depend on the device geometry, the doping profile and other semiconductory stuff, and at the end of the day those can be modeled as a parallel stack of non-ideal diodes.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Semiconductor gurus, how do I model a 1D bipolar transistor?
« Reply #3 on: February 06, 2024, 07:01:10 pm »
There is a far simpler explanation: SPICE models are approximations of reality, fitted to curves with various compromises.  hFE at low currents tends to be a common sacrifice; inverted hFE especially so.  Note that the Gummel-Poon model (most commonly used) doesn't include breakdown, so you will see divergence at high Vce and Veb also.

Physical modeling is possible but you will have to set up a diffusion transport model and input the generation, recombination and motion of charges through drift, built-in potentials, and junctions.  The model will run very slowly for any in-circuit applications, but -- assuming you have accurate and representative doping profiles and physics parameters -- it can be quite accurate.

Mind also that real transistors are often dominated by sidewall and surface effects.  Surface states contribute field dependency, sometimes time- and history-dependent effects (ion migration, charge trapping), and perimeter is a double-edged sword between increased capacitance (or, it can be) and more contact points (less base spreading resistance Rbb' from contact to any point within the base region; and similarly for E and C regions too).  Typical RF transistors are made as very thin strips, tightly interleaved.  General-purpose power transistors suffer such slow speed (fT ~ MHz) mainly just because it's bothersome to make a ton of connections (and, given the higher voltage ratings, very fine-pitch connections might simply not be possible).

A 1D model can capture the basics of BJT behavior -- in fact, with a number of reasonable approximations, a more-or-less analytical result is possible, hence Ebers-Moll, Gummel-Poon, and other constitutive equations given in undergrad semiconductor classes that I've long since forgotten -- but there are particulars due to 2D and 3D aspects that these obviously cannot capture.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline berke (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: fr
  • F4WCO
Re: Semiconductor gurus, how do I model a 1D bipolar transistor?
« Reply #4 on: February 06, 2024, 07:59:17 pm »
Physical modeling is possible but you will have to set up a diffusion transport model and input the generation, recombination and motion of charges through drift, built-in potentials, and junctions.  The model will run very slowly for any in-circuit applications, but -- assuming you have accurate and representative doping profiles and physics parameters -- it can be quite accurate.

That's exactly what I want to do, but I don't know how to proceed.

A big help would be knowing how to represent the state of the device.  I could then add the processes one by one.  When I attempt to read semiconductor books they quickly start invoking quantized momentum distributions for the electrons and energy level distributions.  Not sure if I need that.

For the purposes of modeling a 1D transistor, can one describe the state of the device at a given time using two linear density functions, one for each carrier type, say f_e(x) and f_h(x) in C/m for electrons and holes, and a potential V(x) with boundary conditions at V(-1) (emitter), V(0) (base) and V(1) (collector)?

Then solve the electrostatic field, get the gradient, apply drift and recombination and a bit of carrier generation?

Or do we need to know more such as energy level distributions at each x?
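The electrostatic part at least I know how to sketch; a minimal 1D finite-difference Poisson solve with fixed end potentials (illustrative grid and charge, not a real device):

```python
import numpy as np

def solve_poisson_1d(rho, dx, v_left, v_right, eps=1.04e-10):
    """Solve eps * d2V/dx2 = -rho on a uniform interior grid with fixed end
    potentials.  eps defaults to silicon's permittivity (~11.7 * eps0)."""
    n = len(rho)
    a = np.zeros((n, n))
    b = -rho * dx**2 / eps
    for i in range(n):
        a[i, i] = -2.0          # central second-difference stencil
        if i > 0:
            a[i, i - 1] = 1.0
        if i < n - 1:
            a[i, i + 1] = 1.0
    b[0] -= v_left              # fold boundary potentials into the RHS
    b[-1] -= v_right
    return np.linalg.solve(a, b)

# zero charge -> potential interpolates linearly between the contacts
v = solve_poisson_1d(np.zeros(5), dx=1e-8, v_left=0.0, v_right=1.0)
```

With zero charge the solution is just the linear ramp between the contacts, which makes an easy sanity check.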
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Semiconductor gurus, how do I model a 1D bipolar transistor?
« Reply #5 on: February 06, 2024, 09:24:50 pm »
It'll be more like, for each differential segment along the path, there is some concentration of carriers n_n and n_p, which can be assumed thermalized (at valence or conduction band energies +/- some meV), and which diffuse into neighboring populations.  That handles most of the QM you'd be worried about, and also staying away from very small boundaries (~nm, where tunneling would take place).  And then, yes, drift due to E-field; and tracking current flows by charge balance.
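The diffuse-into-neighboring-populations step is the standard explicit diffusion update, something like this (illustrative D, grid and time step, kept under the dx^2/2D stability limit):

```python
import numpy as np

def diffuse(n, D, dx, dt):
    """One explicit Euler step of dn/dt = D * d2n/dx2, zero-flux ends."""
    r = D * dt / dx**2
    out = n.copy()
    out[1:-1] += r * (n[2:] - 2.0 * n[1:-1] + n[:-2])
    out[0] += r * (n[1] - n[0])      # reflecting boundary
    out[-1] += r * (n[-2] - n[-1])
    return out

# a spike of carriers spreads out, but total charge is conserved
n = np.zeros(50)
n[25] = 1.0
for _ in range(100):
    n = diffuse(n, D=3.6e-3, dx=1e-7, dt=1e-12)  # D ~ electrons in Si
```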

What wouldn't be handled, is stuff like hot carriers (energy levels transiently deep into the conduction band), but that's mainly for avalanche, EEPROMs, etc., which are perhaps beyond the scope of simulation.  Even then, I suppose you could track energies by a histogram, and upper bins decay within the differential segment, while spreading out to neighboring segments at a dependent diffusion rate, or ballistically if they have trajectory (in which case you'd need to track that as well), plus some scattering probability.  I'm not familiar with hot carrier motion so I'm not sure what the best way to handle that would be.

Generation and recombination would then set the temperature, along with the diffusion constant.

Power dissipation -- more generally even, tracking total energy, maintaining energy conservation -- would be a nice-to-have, or maybe even necessary.  Local heating isn't usually a problem (and doesn't really mean anything for a 1D model).

Hmm, is there anything about mobility, trapping, doping and doping levels, etc. that I'm forgetting?  Probably.

Oh, Fermi levels, band edges and stuff.

And if you want to assume material characteristics, then you can, I think, handle stuff like Brillouin zones and in/direct bandgap as material properties, but otherwise those might be of interest to handle (and would probably be desirable or required for a 2D or 3D simulator).

Oh, and Debye shielding length comes to mind, which sort of goes along with tunneling with respect to very small feature scales; it can be lumped into material properties, differential equations, on larger scales.
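(For scale, the Debye length is just sqrt(eps*kT/(q^2*n)); a quick sketch with an illustrative doping level:)

```python
import math

def debye_length(n_carriers, temp=300.0, eps=1.04e-10):
    """Debye screening length in metres; n_carriers in m^-3.
    eps defaults to silicon's permittivity (~11.7 * eps0)."""
    k = 1.380649e-23      # Boltzmann constant, J/K
    q = 1.602176634e-19   # elementary charge, C
    return math.sqrt(eps * k * temp / (q**2 * n_carriers))

# ~1e16 cm^-3 doping (1e22 m^-3): screening on the order of tens of nm
ld = debye_length(1e22)
```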

...

Oh here we go, you can extract equations and models from this, since it's open source.  Bit of a roundabout way to go, heh, but given that you want to write your own, having a full example is certainly a powerful reference.
https://www.gnu.org/software/archimedes/


I don't know that your problem statement, your immediate complete goal, is... really all that meaningful?, outside of having taken the courses that introduce this topic -- at least, it sounds like you're not in school? (or, maybe haven't been in a long while, or are currently but haven't gotten to this yet -- many possibilities, not trying to assume any case here), and, without the semiconductor theory underlying your design, will you really know whether something is physically meaningful or not?  Will your toy be just a toy; or will it meaningfully model real devices?  Not to impugn your level of education---just to say, if you had these classes, you should have some idea of, where to start at least, if not how to begin work already; so it sounds like you haven't.  And also to say: if not, then this is the place you need to look -- be it direct enrollment, or textbooks, open course material, notes, etc.

Like I said, it's been a long time since I took such a class, but I recall a good amount of stuff in the intro semi class, including the basic equations underlying the BJT and MOSFET.  I don't recall nearly enough of that now to offer an authoritative answer, but this is the first place that I'd start.

Which on that note -- the textbook I had was, let me see here... Solid State Electronic Devices, Streetman and Banerjee, 6th ed., Prentice Hall (2006).  Probably pretty outdated, but let's see... ah, there's a 7th edition (2014), and, uh, it looks -- widely distributed shall we say -- online, which I guess means it's quite popular still, so that might be encouraging.

There are of course other more in-depth and practical titles on semi manufacture, design, processing, etc., VLSI and so on, which I did not go into academically myself, but I see questions from students working in those topics from time to time and you should find similar value from those topics as well.

As for scope, pacing, expectations -- beware that, in the space of differential equations and their numerical solution, one can spend months, years really, perfecting an engine to do it, architecting it for flexibility and insight (from basic function to low-level debugging), while making it effective over wide ranges of equations and systems.  Even just a 1D solver is a big project.  On top of this, you want a modeling system tuned specifically for the equations of semiconductor physics, and the equations of state which underlie them.  And finally, to get realistic doping, charge and current densities, with boundary conditions or distributions properly representative of real manufactured/able parts, to actually get simulation results.  This is easily a whole undergrad EE-CE joint capstone project, and probably graduate level besides.  Maybe you find shortcuts, plug in a general-purpose solver here, state equations there, etc. etc., but the more you leave untouched, the more uncertain you are about the overall form and function, how fragile it is for various edge cases, etc., and you can really only test it; and testing is a notoriously weak method of interrogation in CS.  Conversely, if you already had all the equations ready to go, a solver handy (it might even be implemented in SPICE, using nodes to represent differential cells, if you don't need the differential scale to vary during a run), it could still take weeks to go from an empty sketch to meaningful results.

Good luck!

Tim
 

Offline berke (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: fr
  • F4WCO
Re: Semiconductor gurus, how do I model a 1D bipolar transistor?
« Reply #6 on: February 06, 2024, 11:05:45 pm »
It'll be more like, for each differential segment along the path, there is some concentration of carriers n_n and n_p, which can be assumed thermalized (at valence or conduction band energies +/- some meV), and which diffuse into neighboring populations.  That handles most of the QM you'd be worried about, and also staying away from very small boundaries (~nm, where tunneling would take place).  And then, yes, drift due to E-field; and tracking current flows by charge balance.

What wouldn't be handled, is stuff like hot carriers (energy levels transiently deep into the conduction band), but that's mainly for avalanche, EEPROMs, etc., which are perhaps beyond the scope of simulation.  Even then, I suppose you could track energies by a histogram, and upper bins decay within the differential segment, while spreading out to neighboring segments at a dependent diffusion rate, or ballistically if they have trajectory (in which case you'd need to track that as well), plus some scattering probability.  I'm not familiar with hot carrier motion so I'm not sure what the best way to handle that would be.
So the diffusion rate is energy-dependent?  Interesting.  That's a useful overview you gave.

Quote
Hmm, is there anything about mobility, trapping, doping and doping levels, etc. that I'm forgetting?  Probably.

Oh, Fermi levels, band edges and stuff.
That's kind of the heart of the matter isn't it?  I don't want to just simulate a chunk of conducting material.  But let me read some more before asking further questions.

Quote
Oh here we go, you can extract equations and models from this, since it's open source.  Bit of a roundabout way to go, heh, but given that you want to write your own, having a full example is certainly a powerful reference.
https://www.gnu.org/software/archimedes/
Thanks that's absolutely perfect!  Just downloaded, small readable C code that's right on target, with documentation.

Quote
I don't know that your problem statement, your immediate complete goal, is... really all that meaningful?, outside of having taken the courses that introduce this topic -- at least, it sounds like you're not in school? (or, maybe haven't been in a long while, or are currently but haven't gotten to this yet -- many possibilities, not trying to assume any case here), and, without the semiconductor theory underlying your design, will you really know whether something is physically meaningful or not?  Will your toy be just a toy; or will it meaningfully model real devices?  Not to impugn your level of education---just to say, if you had these classes, you should have some idea of, where to start at least, if not how to begin work already; so it sounds like you haven't.  And also to say: if not, then this is the place you need to look -- be it direct enrollment, or textbooks, open course material, notes, etc.
No I'm not in school and I don't have an EE background, but I need to properly understand transistors before my kids get old enough to ask me embarrassing questions about them!

As you've said, the SPICE models (GP etc.) are mostly empirical, specialized curve-fitting results expressed as circuits.  I want to open the black box that is transistors and see how the magic smoke actually works.  Then I can get back to regular SPICE & soldering.

Quote
Which on that note -- the textbook I had was, let me see here... Solid State Electronic Devices, Streetman and Banerjee, 6th ed., Prentice Hall (2006).  Probably pretty outdated, but let's see... ah, there's a 7th edition (2014), and, uh, it looks -- widely distributed shall we say -- online, which I guess means it's quite popular still, so that might be encouraging.
Thanks for that tip!  I just checked out one copy from my local small town library which amazingly has a semiconductor physics section right between gardening and spiritual meditation/yoga.

Quote
There are of course other more in-depth and practical titles on semi manufacture, design, processing, etc., VLSI and so on, which I did not go into academically myself, but I see questions from students working in those topics from time to time and you should find similar value from those topics as well.
So far this is for my curiosity, but professionally, if I ever manage to develop a more general understanding of semiconductors I might try to apply it to photodetector arrays; there are cameras used in remote sensing that have really nasty behaviour requiring software correction, and I've spent enough time pulling my hair out trying to do that empirically, without an understanding of how these things really work at the semiconductor level.

Quote
As for scope, pacing, expectations -- beware that, in the space of differential equations and their numerical solution, one can spend months, years really, perfecting an engine to do it, architecting it for flexibility and insight (from basic function to low-level debugging), while making it effective over wide ranges of equations and systems.  Even just a 1D solver is a big project.  On top of this, you want a modeling system tuned specifically for the equations of semiconductor physics, and the equations of state which underlie them.  And finally, to get realistic doping, charge and current densities, with boundary conditions or distributions properly representative of real manufactured/able parts, to actually get simulation results.  This is easily a whole undergrad EE-CE joint capstone project, and probably graduate level besides.
I know that numerical computation is no laughing matter, but what you describe is quite beyond the scope of what I want to do; right now I'm looking for a proof of concept to understand things.  I don't even necessarily want to write my own code, if for example I find Archimedes readable (just had a glance, seems to be the case).

But if I write something, efficiency is not a goal at all, heating the room by throwing 32 cores or some CUDA for a week on a ridiculously inefficient but easy to implement MC method is acceptable.

Quote
Maybe you find shortcuts, plug in a general-purpose solver here, state equations there, etc. etc., but the more you leave untouched, the more uncertain you are about the overall form and function, how fragile it is for various edge cases, etc., and you can really only test it; and testing is a notoriously weak method of interrogation in CS.  Conversely, if you already had all the equations ready to go, a solver handy (it might even be implemented in SPICE, using nodes to represent differential cells, if you don't need the differential scale to vary during a run), it could still take weeks to go from an empty sketch to meaningful results.

Good luck!

Tim
Thanks again for your very helpful responses.  I'll do some more reading and report back.
 

Offline berke (Topic starter)

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: fr
  • F4WCO
Re: Semiconductor gurus, how do I model a 1D bipolar transistor?
« Reply #7 on: February 07, 2024, 04:45:26 pm »
I've started reading Archimedes' documentation.  To summarize, four kinds of charged particles define an electrostatic field: moving holes and electrons, and bound ones (donors and acceptors); these are given by four number densities.  I've already written a few electrostatic solvers so I understand how to get from there to potentials and fields.  What's new for me is the Boltzmann transport equation.  At each time step, for each point in space there is a distribution of momenta, presumably one for electrons and one for holes.  The author gives simpler equations for the case where quantum effects are negligible, and since I'm interested in "desktop" transistors that is my case.  There is a collision term; I haven't figured out yet whether it describes electron-electron or electron-lattice collisions.  What's interesting is that there is an energy band function, which depends only on the momenta, but affects the way the charges spread around.  I obviously need to get up to speed on this Boltzmann transport equation but that's yet another rabbit hole.
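Assembling the net charge from those four number densities is at least the easy part; a sketch (illustrative, silicon-ish doping level):

```python
import numpy as np

Q = 1.602176634e-19  # elementary charge, C

def charge_density(n, p, nd, na):
    """Net space charge (C/m^3) from the four number densities (m^-3):
    mobile electrons n and holes p, ionized donors nd and acceptors na."""
    return Q * (p - n + nd - na)

# neutral n-type bulk: electrons exactly compensate the donor ions
rho_bulk = charge_density(n=np.full(4, 1e22), p=np.zeros(4),
                          nd=np.full(4, 1e22), na=np.zeros(4))
# depleted region: electrons swept out, bare donor charge remains
rho_depl = charge_density(n=np.zeros(4), p=np.zeros(4),
                          nd=np.full(4, 1e22), na=np.zeros(4))
```

This rho is then what feeds the Poisson solve for the potential.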

That reminded me of an example on page 54 of Millman and Halkias' book (which interestingly was also available at the town library) about an idealized vacuum diode where electrons leave a cathode and are accelerated towards an anode.  Depending on the energy with which they leave the cathode they are able to make it to the anode, or not.

[attached: figure of the idealized vacuum-diode example]

I'm not sure yet but it sounds like the situation is pretty similar to what's happening in semiconductors, maybe that energy band acts as some kind of virtual spacing between electrodes, preventing flow unless the potentials are right.

The basic operation of vacuum tubes is very easy to understand even with high school physics.  You place a grid that attracts or repels electrons boiling off a filament, so that they hit a plate or not based on the grid voltage.  (Apparently that's analogous to FETs but I'm not sure how you get that famous quadratic law.)  If I understand correctly, the grid has a small intercepting surface and doesn't catch many electrons, so the grid current can be neglected and this is a voltage-controlled current device, easy to understand using classical physics and bullet electrons.

I will now do some digging to see if I can find literature about a conceptual vacuum analogue of bipolar transistors.

Maybe a two filament device, the emitter having a high temperature and the collector having a lower temperature and a base grid...

But before I get lost, I want to restate my current goal: a simple, physics-based model of the bipolar transistor that is good enough for DC bias predictions, that will match the default SPICE model at least at low and mid currents.  Breakdown and AC effects not needed, performance doesn't matter and this is for tabletop transistors not 10 nm IC thingies.

Speaking of performance, Archimedes (v0.01) takes maybe 10 minutes to calculate the state of a Si diode at 5 ps.  (I'm not complaining.)
« Last Edit: February 07, 2024, 04:47:58 pm by berke »
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Semiconductor gurus, how do I model a 1D bipolar transistor?
« Reply #8 on: February 07, 2024, 07:09:38 pm »
Bandgap is a tricky concept to understand, because it is a quantum thing; it arises as the stopband in a system of strongly-coupled resonators.

That is, take an atom.  Ignore for a moment that it's a quantum system, and just suppose it's a simple harmonic oscillator (SHO).  It's resonant, in that it can absorb and re-radiate EM energy -- photons; but that they're photons, doesn't matter, and average out the probability of absorption/emission as an EM field with a coupling interaction.  Basically, say it's a resonant dipole.

Now pack two together, you have a di-atom.  The resonators couple to each other, instead of one resonant peak of double the intensity, you get pole-splitting, they "repel", you get two peaks.  How far apart, depends on their coupling, but let's say they're very close together so they couple strongly.  In electrical terms, there's effectively some transformer action between the inductances, and one peak is the two capacitors resonating against each other in series with the leakage inductance between them, and the other peak is the magnetizing inductance resonating with the capacitors in parallel.
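That pole-splitting is easy to see numerically: couple two identical resonators and the single peak splits into two at w0*sqrt(1 -/+ k) (sketch; k is an arbitrary coupling coefficient):

```python
import numpy as np

w0 = 1.0   # uncoupled resonant frequency, arbitrary units
k = 0.2    # coupling coefficient

# equations of motion x'' = -M x for two coupled oscillators
m = np.array([[w0**2, k * w0**2],
              [k * w0**2, w0**2]])
freqs = np.sqrt(np.linalg.eigvalsh(m))  # two split peaks instead of one
```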

So it is with atoms, except with quanta and selection rules and so on, but again, don't worry about that.

Now put a whole bunch together.  More peaks are introduced, and they spread out further; what ends up happening is, a whole bunch of peaks cluster below a lower cutoff, and above an upper cutoff.  No peaks appear in the middle stopband.

This is a bandgap structure.

We can make EBG (electromagnetic bandgap) structures, by just putting some periodic structure in an EM field.  If you run a PCB trace over a perforated ground plane, there will be a stopband in the frequency response of that trace, corresponding to the pitch of the perforations.

Well, the identical thing happens in QM, except instead of EM waves, we have matter waves.  What's the "frequency" of an electron?  Don't worry about it, but they respond to a grid of atoms the same way.  It's the periodic potentials of the atoms arrayed in a crystal, that gives rise to the bandgap.

But matter-wave frequency is also energy, and so we have an energy gap in the system.

It's a levels-of-abstraction thing.  Bandgap is a material property, that applies more or less everywhere within the solid.  It's not a geometric constraint, a boundary condition; it's implicit everywhere.

Also, matter-waves have different velocity, or effective mass, depending on things (energy, and direction through the lattice), i.e. it's a dispersive system; so you would in general expect conduction electrons, or hot carriers for that matter, to have different velocity than holes.  And indeed there are different mobilities for each, so that checks out.

Well, the directional dependency isn't so much about dispersion, but that's like a birefringence thing, like how the optical properties of a clear crystal vary versus crystal axis.

The bandgap is an energy thing, so it's skewed by electric potential across the junction.  A PN junction in forward bias has more energy at one end than the other, which results in excess carriers being generated there, which flow along the gradient.  In reverse bias, the opposite does not occur.

Bandgap can be varied along a junction, by alloying the semiconductor (heterojunction), or by joining it to entirely different materials (Schottky junction), assuming they are mechanically and chemically compatible of course (for which if not, chemical reactions occur and it's really just some other material in the interface; or they crack or delaminate and it's not a junction at all).  The bandgap then varies with position along the junction -- again, it's a material property at each point, it's not a physical gap -- and according to rules (Fermi energy is the same everywhere, I think?), the actual band edges can move around, particularly near such a boundary.  Which is how Schottky diodes work, a rectifying junction where the electron gas within a metal (Ef lies in conduction band; valence and conduction bands may also overlap) skews the bandgap much as a PN junction would (I'm not remembering at the moment exactly how this works, apologies), but there is an immediate and unlimited source of electrons in the metal, so the switching speed can be extremely fast (also where the old term "hot carrier diode" came from).  There can also be a resistive Schottky junction, where the bands overlap with a skew such that free holes/electrons are present at the surface of the semiconductor and therefore it is essentially resistive there (or something like that? maybe these cases are backwards, lol).

We can identify a "bandgap" in the vacuum diode, but it is the energy barrier between valence/conduction state (carriers trapped within the cathode) and the vacuum state (free unbound electrons).  Which is why work function must be overcome, and why low-work-function materials are used for the cathode, so they don't have to be heated so much.  Even for those cases (i.e. with suitable choice of material), the energy barrier is higher than for semiconductors like Si, and so cathodes must be heated.  (We could very well have a conduction bandgap high enough that heating is required -- otherwise-insulating crystals like MgO or diamond-C indeed become conductive when heated enough, for example.  The intrinsic carrier concentration is determined by thermal energy pushing valence electrons into the conduction band, so is exponentially lower for MgO's ~7.7eV bandgap than it is for Si's 1.1eV.  Or why low-bandgap materials like PbS (galena, cats-whisker detector!) are nearly conductive at room temperature (Eg ~ 0.4eV, Eth ~ 26meV).)
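Putting numbers on that exponential (a sketch comparing only the Boltzmann factor exp(-Eg/2kT), prefactors ignored):

```python
import math

def boltzmann_factor(eg_ev, temp=300.0):
    """Relative intrinsic-carrier factor exp(-Eg / 2kT); prefactors ignored."""
    kt_ev = 8.617333e-5 * temp  # kT in eV, ~26 meV at 300 K
    return math.exp(-eg_ev / (2.0 * kt_ev))

si = boltzmann_factor(1.1)    # silicon
pbs = boltzmann_factor(0.4)   # galena: vastly more intrinsic carriers than Si
mgo = boltzmann_factor(7.7)   # MgO: essentially an insulator at 300 K
```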

The stuff in between the electrodes doesn't depend on bandgap, it's just boundary conditions; the vacuum triode and FET are very similar devices, generally speaking, the main difference being that the ballistic transport (free electrons in vacuum!) of the vacuum triode gives the 3/2 power law, while for diffusive transport it's a solid 2.
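One way to see the 3/2-vs-2 distinction concretely: the exponent p in I = K*V^p falls out of two points on a log-log plot (sketch; fitted_exponent and the constant K are made up for illustration):

```python
import math

def fitted_exponent(v1, i1, v2, i2):
    """Estimate p in I = K * V^p from two (V, I) points: log-log slope."""
    return math.log(i2 / i1) / math.log(v2 / v1)

K = 1e-4  # arbitrary perveance-like constant
# vacuum diode obeying Child's law, I = K * V^1.5
p_vac = fitted_exponent(10.0, K * 10.0**1.5, 40.0, K * 40.0**1.5)
# square-law (diffusive, FET-style) device, I = K * V^2
p_fet = fitted_exponent(10.0, K * 10.0**2, 40.0, K * 40.0**2)
```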

Most semiconductors exhibit impact ionization (and subsequent avalanche breakdown) at high electric field, limiting carrier velocity to thermal drift levels; higher-Eg materials can stave this off though, and experience ballistic transport, analogous to the vacuum device.  Although I don't think this has much consequence on FET characteristics, as it would still be thermalized where control is being done (in the channel under the gate, versus in the depleted bulk/drift region).

Velocity saturation -- ballistic transport within a semiconductor -- does allow current and voltage to become decoupled in a drift region; this is the basis of the Gunn diode, using GaAs for instance, biased to give carrier energy somewhere above thermal drift, but just below avalanche breakdown (typically ~10V).  The effect is, current falls as voltage rises through this region, i.e. a negative (incremental) resistance characteristic.  They're a bit fragile, as the current and therefore power density is quite high, and the voltage must be limited not much above nominal.  But the carrier velocity modulation occurs extremely quickly (~ps), allowing oscillation at extremely high frequencies, hence their use in microwave gadgets.
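A common empirical fit for the GaAs velocity-field curve shows that negative-resistance region directly. Sketch only -- the parameter values (low-field mobility ~0.85 m²/V·s, saturation velocity ~1e5 m/s, critical field ~4e5 V/m) are rough textbook numbers, not measured data:

```python
MU = 0.85      # low-field electron mobility, m^2/(V*s)  (approximate, GaAs)
V_SAT = 1.0e5  # saturation velocity, m/s
E_C = 4.0e5    # critical field, V/m

def drift_velocity(e_field):
    """Empirical two-valley v(E) fit for GaAs: rises, peaks, then falls toward V_SAT."""
    x = (e_field / E_C) ** 4
    return (MU * e_field + V_SAT * x) / (1.0 + x)

# Past the peak, dv/dE < 0: more field, *less* current -- the Gunn effect.
for e in (1e5, 4e5, 8e5, 2e6):
    print(f"E = {e:.0e} V/m  v = {drift_velocity(e):.2e} m/s")
```

The peak comes out around 2e5 m/s near E_C, then the velocity falls as the field keeps rising, which is the negative incremental resistance described above.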

Perhaps if someone made a SIT (static induction transistor) from GaAs or whatever, we could observe the 3/2 power law at high bias? lol

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline berkeTopic starter

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: fr
  • F4WCO
Re: Semiconductor gurus, how do I model a 1D bipolar transistor?
« Reply #9 on: February 08, 2024, 10:30:44 pm »
Quote
Bandgap is a tricky concept to understand, because it is a quantum thing; it arises as the stopband in a system of strongly-coupled resonators.

That is, take an atom.  Ignore for a moment that it's a quantum system, and just suppose it's a simple harmonic oscillator (SHO).  It's resonant, in that it can absorb and re-radiate EM energy -- photons; but that they're photons, doesn't matter, and average out the probability of absorption/emission as an EM field with a coupling interaction.  Basically, say it's a resonant dipole ...
(snip)
...and the other peak is the magnetizing inductance resonating with the capacitors in parallel.
It's starting to make sense, but slowly.  I'm aware of the basic concepts of QM such as wave functions and how position, velocity and other properties can be observed from them using inner products, and how you can have delocalized (plane wave-like) wavefunctions vs. more localized ones (Gaussian wave packets); however, the pieces don't fit together yet in my head.  Looks like there is no way of understanding bipolar transistors by skipping this stuff.

If I follow, just like you can make a microwave filter from funny-looking PCB traces or an optical filter using thin layers of dielectrics having different indices with carefully selected thicknesses, you can set up a crystal lattice so that it will act as a filter on wavefunctions that selects electron (states) for speed/energy (and position?)...

So far so good, but when I want to connect these ideas to actual voltages and current flows, I have trouble understanding whether we're looking at the wavefunction of a single electron in the lattice, or if it's already ensemble statistics; whether the electron not being in the conduction band means it is localized near a nucleus or something more abstract and QMy; is the conduction band a place, a state, the set of states for which some energy operator gives a result in an interval??

Quote
It's a levels-of-abstraction thing.  Bandgap is a material property, that applies more or less everywhere within the solid.  It's not a geometric constraint, a boundary condition; it's implicit everywhere.
The bandgap has to be a property that you can compute for a subvolume of space, just like you can compute a Fourier spectrum for any portion of a signal, right?
If you look at homogeneous portions of the same material you should get equivalent bandgaps, perhaps it will be better defined the larger the portion is.
What happens if the portion you're looking at happens to include a junction of dissimilar materials (different doping? different crystal?)  The bandgap property won't give you the full picture.

Also is your electron's wavefunction playing ball with you and staying within that portion you're looking at to compute the local bandgap, or does it extend everywhere, up to and including your breakfast cereal?

How exactly do you slice the problem so that it makes sense and is amenable to computation?  Maybe this is where the Boltzmann statistical approach adapted to QM comes in.  Why is this stuff so complicated???

I know that you can (kind of arbitrarily?) use a Gaussian (or other) kernel to cancel your wavefunction outside of a finite zone and partially localize your particle; maybe you just have to make sure the wave packets are smaller than your lattice zone to be able to define a meaningful transport code...

I started reading the Feynman Lectures on how electrons propagate in a semiconductor.  So far he uses a separate state for the electron being at each point of the lattice, but it's a bit abstract.

Quote
The bandgap is an energy thing, so it's skewed by electric potential across the junction.  A PN junction in forward bias has more energy at one end than the other, which results in excess carriers being generated there, which flow along the gradient.  In reverse bias, the opposite does not occur.
One difficulty is that the charges have their own potential, yet the potential affects the motion of the charges.  The introductory examples (such as the electron in a potential well) have a single particle subject to an externally defined, static potential.  But in a device the charges will have some distribution that is not known beforehand, and that will change the potential.

Looks like one has to start using a charge position and velocity distribution to implicitly define a distribution of wavefunctions that can be timestepped using computed potentials, a la particle-in-cell.

Quote
Bandgap can be varied along a junction, by alloying the semiconductor (heterojunction), or by joining it to entirely different materials (Schottky junction), assuming they are mechanically and chemically compatible of course (if not, chemical reactions occur and it's really just some other material at the interface; or they crack or delaminate and it's not a junction at all).  The bandgap then varies with position along the junction -- again, it's a material property at each point, it's not a physical gap -- and according to rules (Fermi energy is the same everywhere, I think?), the actual band edges can move around, particularly near such a boundary.  Which is how Schottky diodes work: a rectifying junction where the electron gas within a metal (Ef lies in the conduction band; valence and conduction bands may also overlap) skews the bandgap much as a PN junction would (I'm not remembering at the moment exactly how this works, apologies), but there is an immediate and unlimited source of electrons in the metal, so the switching speed can be extremely fast (also where the old term "hot carrier diode" came from).  There can also be a resistive Schottky junction, where the bands overlap with a skew such that free holes/electrons are present at the surface of the semiconductor and therefore it is essentially resistive there (or something like that? maybe these cases are backwards, lol).
It sounds like there is a way of thinking about this bandgap business that you have mastered, where you don't have to worry about QM details but can still understand how different kinds of junctions work in practice.

I've just read "The transistor, its invention and its current prospects" by Hogarth (1973) where he retraces how people weren't able to figure out properly how BJTs work, even though they were using QM, until some people started taking "minority carrier injection" into account and apparently "transit times" make the thing work.  Almost sounds like some QM-ignorant transport model with the proper types of particles could work...

Quote
The stuff in between the electrodes doesn't depend on bandgap, it's just boundary conditions; the vacuum triode and FET are very similar devices, generally speaking, the main difference being that the ballistic transport (free electrons in vacuum!) of the vacuum triode gives the 3/2 power law, while for diffusive transport, it's a solid 2.
So ballistic transport means no lattice collisions, so the carriers keep accelerating, and non-ballistic = drift = speed proportional to field?  What's thermal about the drift?  Isn't that diffusion?

Quote
Most semiconductors exhibit impact ionization (and subsequent avalanche breakdown) at high electric field, limiting carrier velocity to thermal drift levels; higher-Eg materials can stave this off though, and experience ballistic transport, analogous to the vacuum device.  Although I don't think this has much consequence on FET characteristics, as it would still be thermalized where control is being done (in the channel under the gate, versus in the depleted bulk/drift region).

(snip)

Perhaps if someone made a SIT (static induction transistor) from GaAs or whatever, we could observe the 3/2 power law at high bias? lol
Thanks again for giving a very rich overview, this gives lots of rabbit holes to explore.

Next steps for me: Keep reading about this bandgap stuff until it makes sense.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Semiconductor gurus, how do I model a 1D bipolar transistor?
« Reply #10 on: February 09, 2024, 06:34:52 am »
Quote
It's starting to make sense, but slowly.  I'm aware of the basic concepts of QM such as wave functions and how position, velocity and other properties can be observed from them using inner products, and how you can have delocalized (plane wave-like) wavefunctions vs. more localized ones (Gaussian wave packets); however, the pieces don't fit together yet in my head.  Looks like there is no way of understanding bipolar transistors by skipping this stuff.

If I follow, just like you can make a microwave filter from funny-looking PCB traces or an optical filter using thin layers of dielectrics having different indices with carefully selected thicknesses, you can set up a crystal lattice so that it will act as a filter on wavefunctions that selects electron (states) for speed/energy (and position?)...

Something like that, yes.  Of course, you can't make crystals from anything other than atoms, and only at the spacing and potential of whatever particular atoms like to stick together, orbitals and all.  It's even worse than (ordinary) chemistry, because chemistry deals with molecules -- the analysis can be awful, but the influences are limited to within a molecule or between a few -- while condensed-matter physics deals with the extended interaction of whole solids (or liquids as the case may be), a problem which is intractable in general, since you can implement a computer in so many bits of condensed solid and thus embed the halting problem.  But even aside from pathological cases like that, it's a difficult space to work in...


Quote
So far so good, but when I want to connect these ideas to actual voltages and current flows, I have trouble understanding whether we're looking at the wavefunction of a single electron in the lattice, or if it's already ensemble statistics; whether the electron not being in the conduction band means it is localized near a nucleus or something more abstract and QMy; is the conduction band a place, a state, the set of states for which some energy operator gives a result in an interval??

Voltages correspond to potential barriers, and usually energy; of course it's not just anywhere that that energy manifests, you need a specific barrier for the direct correspondence.  Like the Vf of a diode vs. Eg, or in turn, the emission wavelength of an LED.  But there's no characteristic energy associated with reverse bias, because the field is distributed along the junction thickness, and carrier motion is (mostly) thermalized.
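That Vf/Eg/wavelength correspondence makes a quick sanity check: λ = hc/Eg, with hc ≈ 1240 eV·nm.  (The bandgap values below are approximate round numbers, not from any datasheet.)

```python
HC = 1239.84  # h*c in eV*nm

def emission_wavelength_nm(eg_ev):
    """Photon wavelength for band-to-band recombination across gap Eg."""
    return HC / eg_ev

print(emission_wavelength_nm(1.42))  # GaAs, Eg ~ 1.42 eV: ~870 nm, near infrared
print(emission_wavelength_nm(1.9))   # red LED territory: ~650 nm
print(emission_wavelength_nm(3.4))   # GaN, Eg ~ 3.4 eV: ~365 nm, near UV
```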

The bands are permitted states for an electron to occupy.

You can ~mostly~ do QM as if particles have internal state -- I'm not sure what exactly would apply or conflict with that in a solid, but the most visible distinction is the Bell inequality, which is as fundamental as the polarization of light; photons don't have some absolute orientation, it's relative to the system, and the "dice roll" occurs at the point of detection.

Or you can do it with pilot wave theory, where positions are definite but the probability is a wave the particles bounce around on-- well, that's an overly simplistic description, but there's something there.

BTW, the PBS Spacetime episodes on QM and interpretations and etc. are quite good, very superficial of course, they can't go into mathematical detail in a popular video, but they do introduce and discuss concepts very well.


Quote
The bandgap has to be a property that you can compute for a subvolume of space, just like you can compute a Fourier spectrum for any portion of a signal, right?
If you look at homogeneous portions of the same material you should get equivalent bandgaps, perhaps it will be better defined the larger the portion is.
What happens if the portion you're looking at happens to include a junction of dissimilar materials (different doping? different crystal?)  The bandgap property won't give you the full picture.

Also is your electron's wavefunction playing ball with you and staying within that portion you're looking at to compute the local bandgap, or does it extend everywhere, up to and including your breakfast cereal?

How exactly do you slice the problem so that it makes sense and is amenable to computation?  Maybe this is where the Boltzmann statistical approach adapted to QM comes in.  Why is this stuff so complicated???

Something like that.  Fourier is used early and often in statistical mechanics; you do analysis in terms of wavenumber, reciprocal space, and so you're looking at the extended wave function of an electron (or hole or phonon or etc.) over the solid, or the region in question (often infinite, or periodic boundary conditions, so it can be summed or integrated over).

Some of that is probably historical, familiar to those working with x-ray crystallography, which precedes QM by a bit, but it also works out in QM.

As long as the system is linear, it doesn't matter if you're looking at a Gaussian packet, or sine waves or whatever; it's a superposition of any equivalent set of functions, that gives the same overall wave function in the system.  The main use is to probe characteristics of the system, I think; it's easier to do, say, a transient step response, or a time or frequency kernel like a Gaussian, to play with things like energy barriers, tunneling and that.

When you get down to real problems like the hydrogen atom, you solve for characteristic modes of the system, and the point charge causes radial energy levels (higher energy = "higher orbit"), which of course must be quantized so that the electron matter-waves exhibit spherical harmonics, like the (EM) resonant modes of a wire loop, but in 3D not just confined to a plane, so you get different axes of modes (which is where the quantum numbers n, l, m, m_s arise).  Which are called orbitals.

And when orbitals overlap, and multiple electrons are involved, you get molecules, and even crystals, and so on.

But yes, for semiconductors, it's largely the ensemble properties.  There is a density of states function (how many electrons can occupy levels), the bands themselves (at what energy levels electrons are allowed), and together with Fermi-Dirac statistics, that determines occupancy -- F-D gives the placement rule: energy levels are filled from the bottom up, much like balls in a pit, but in the abstract these are states throughout the material.
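The placement rule itself is one line.  A minimal sketch of the Fermi-Dirac occupancy (energies in eV; the Fermi level here is an arbitrary choice, not any particular material):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def occupancy(e_ev, ef_ev, t=300.0):
    """Fermi-Dirac: probability that a state at energy E is occupied."""
    return 1.0 / (math.exp((e_ev - ef_ev) / (K_B * t)) + 1.0)

EF = 0.55  # assumed Fermi level, eV
print(occupancy(0.0, EF))   # deep below Ef: ~1, the balls at the bottom of the pit
print(occupancy(EF, EF))    # exactly at Ef: 0.5 by definition
print(occupancy(1.1, EF))   # up at a Si-like conduction band edge: nearly empty
```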

As for scope, like I said, infinite or periodic, but much as we use Fourier transforms in signals, it's a convenient lie, and scope is limited in practice, usually by scattering and other interactions, basically what makes it a thermalized population.

At thermal energies, typical electron "sizes" are like a hundred atoms across or so.  At higher binding energies, deep in the valence band, of course, they can be more closely localized, which is how they stay locked to atoms.

For semiconductors, generation and recombination, plus thermal drift, limit travel distance to some 10s of µm in Si.  Understandably, the first transistors weren't easy to make; but we managed. :)


Quote
One difficulty is that the charges have their own potential, yet the potential affects the motion of the charges.  The introductory examples (such as the electron in a potential well) have a single particle subject to an externally defined, static potential.  But in a device the charges will have some distribution that is not known beforehand, and that will change the potential.

The overall charge neutrality of a crystal helps a lot, AFAIK, so that particle-particle interactions can be ignored.  That is the main crux of molecular analysis, of course.


Quote
So ballistic transport means no lattice collisions, so the carriers keep accelerating, and non-ballistic = drift = speed proportional to field?  What's thermal about the drift?  Isn't that diffusion?

I'm probably mixing up or misremembering terms here, but there are several things going on.  There's the distribution of thermal energies (where you get figures like, what was it, ~10^5 m/s electron velocity, from), there's mean drift (~cm/s at typical current densities), and ballistic basically means going faster than [thermal], plus the statistics related to it, which then determine the V(I) characteristics.  How free from scattering it is (in ballistic transport) depends; scattering events may not draw enough energy to cause velocity saturation, and the energy barrier for alternative dissipation (usually impact ionization) varies between materials.
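Those two velocity scales are worth separating with actual numbers.  A rough sketch -- free-electron mass throughout, and the doping and current-density figures are made up for illustration (real effective masses and samples will shift things):

```python
import math

K_B = 1.381e-23   # Boltzmann constant, J/K
M_E = 9.109e-31   # free electron mass, kg
Q = 1.602e-19     # elementary charge, C

# Thermal velocity sqrt(3kT/m) at 300 K -- the ~1e5 m/s figure
v_thermal = math.sqrt(3 * K_B * 300 / M_E)

# Mean drift velocity v = J/(q*n): 1 A/cm^2 through n = 1e19 cm^-3 doped material
j = 1e4    # A/m^2  (1 A/cm^2)
n = 1e25   # carriers/m^3  (1e19 cm^-3)
v_drift = j / (Q * n)

print(f"thermal ~ {v_thermal:.2e} m/s")  # ~1.2e5 m/s
print(f"drift   ~ {v_drift:.2e} m/s")    # under a cm/s: ~7 orders of magnitude apart
```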

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: berke

Offline berkeTopic starter

  • Frequent Contributor
  • **
  • Posts: 259
  • Country: fr
  • F4WCO
Re: Semiconductor gurus, how do I model a 1D bipolar transistor?
« Reply #11 on: April 02, 2024, 01:18:58 pm »
Quick update.  I haven't dropped this topic.

I've been (re-)reading introductory material on quantum mechanics, and toyed with some FDTD Schrödinger propagators.  In parallel, I've started reading and documenting the first version of Archimedes.  It's an unsurprising random charged-particle propagator, which solves Poisson using relaxation and edge boundary conditions, then moves the particles around with scattering.  The key element is that the scattering probability depends on the momentum.  I don't really need to understand how exactly scattering depends on k; I'm happy to take that as a given material property.  The code would give a heart attack to any software engineer, but at least it's basic C (written in F77 style).  There are some extras, such as a modification of the computed electric field to approximate quantum effects, but I don't need that for a tabletop transistor.  The main stuff is:
- A regular grid
- Superparticles (representing statistically sufficient numbers of identical but non-interacting particles)
- The E-field (quasistatic)

The main processes are:
- Derivation of potentials by relaxation (I think it's a SOR variant but not 100% sure)
- Enforcement of potential boundary conditions (by simply overwriting the potentials at the edges)
- Computation of the electric field (with the optional quasi-quantum adjustment)
- A media process (to be explored further)
- Propagation of the superparticles
- Energy-dependent random scattering processes
- Particle creation and destruction at the edges
- Bookkeeping
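This is not Archimedes' actual code, but the loop above is small enough to sketch in 1D.  Everything here (grid size, timestep, the 1% scattering probability, periodic particle wrap instead of edge creation/destruction) is invented for illustration:

```python
import random

NX, DX, DT = 64, 1e-8, 1e-16  # cells, cell size (m), timestep (s): illustrative only
Q_OVER_M = -1.76e11           # electron charge/mass ratio, C/kg
EPS0 = 8.854e-12              # vacuum permittivity, F/m

def solve_poisson_sor(rho, v_left, v_right, omega=1.8, iters=2000):
    """Relax phi'' = -rho/eps0 by successive over-relaxation, fixed edge potentials."""
    phi = [0.0] * NX
    for _ in range(iters):
        phi[0], phi[-1] = v_left, v_right  # overwrite edges: the boundary conditions
        for i in range(1, NX - 1):
            gauss_seidel = 0.5 * (phi[i - 1] + phi[i + 1] + DX * DX * rho[i] / EPS0)
            phi[i] += omega * (gauss_seidel - phi[i])  # over-relaxed update
    return phi

def step(particles, v_left, v_right):
    """One timestep: deposit charge, solve the field, push and scatter particles."""
    rho = [0.0] * NX
    for p in particles:  # nearest-grid-point charge deposition
        rho[int(p["x"] / DX) % NX] += p["q"] / DX
    phi = solve_poisson_sor(rho, v_left, v_right)
    for p in particles:
        i = min(max(int(p["x"] / DX), 1), NX - 2)
        e_field = -(phi[i + 1] - phi[i - 1]) / (2 * DX)  # central difference of potential
        p["v"] += Q_OVER_M * e_field * DT                # accelerate along the field
        if random.random() < 0.01:                       # toy momentum-randomizing scattering
            p["v"] *= -random.random()
        p["x"] = (p["x"] + p["v"] * DT) % (NX * DX)      # move; wrap instead of edge churn
    return phi
```

The real thing uses momentum-dependent scattering rates tabulated per material, superparticle weights, and proper carrier creation/destruction at the contacts; the skeleton of deposit/solve/push/scatter is the same.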

In parallel, I'm brushing up on GUIs and OpenGL.  I usually produce VTK files and look at them with Paraview but it's cumbersome and I'd like to be able to adjust parameters in real time.  I've found the egui + glow combination to be really nice.

I'll finish with a little rant about photons.  One day, when I was younger and doing some kind of internship at INRIA, I said something to the effect of "QM is bullshit", more specifically thinking about "RF" photons.  Today I think I understand a bit better how photons are modeled in quantum theory (QED)... they're basically field modes.  The wavefunction for N particles is already computationally bad enough (with a dimension of 3N plus time), and now these crazy physicists quantize the whole goddamn continuous field.  Anyway, it looks like some people way smarter than me are skeptical about the physical existence of photons, for example Peter R. Holland, so maybe my comment wasn't that stupid after all.
 

