Author Topic: Reverse-engineering a late-70's Fire Control Computer from an M1 tank  (Read 15085 times)


Online D StraneyTopic starter

  • Regular Contributor
  • *
  • Posts: 230
  • Country: us
Here's an interesting item where I got lucky (thanks to a permissive seller) with an extra-low offer on eBay.  This is a fire control computer from an M1 Abrams tank, designed to do complex ballistics calculations for hitting a target that take many environmental factors into account.  The particular unit I have was built in 1984, but the official National Stock Number (https://www.wbparts.com/rfq/1220-01-076-6745.html) dates from 1979, so this was likely designed at some point in the late 70's.  The manufacturer is Computing Devices Corp. in Canada, which is now part of General Dynamics.




My photos aren't great, and it's hard to convey just how heavily-built the enclosure is: the whole package is about 30 lbs (14 kg) - I'll discuss one possible reason for that later.  Let's look at the cards inside.


...and the set of very nicely twisted wires that run between the connectors and the backplane:


CPU board

The large purple ceramic package in the middle is the 16-bit Texas Instruments SBP9900 processor, a bipolar (integrated-injection-logic) variant of its NMOS TMS9900 processor.  There's more info from CPU Shack about it here: https://www.cpushack.com/2015/02/05/ti-tms9900sbp9900-accidental-success/
There's a little bit of digital logic surrounding it for interfacing, mostly buffers (54LS365), plus an oscillator in a metal can and some resistor arrays.  The other really noticeable parts here though are the 4x CDP1822 chips.

These are 256-word x 4-bit static RAMs; with four of them in parallel, that most likely gives a total memory capacity of only 256 16-bit words (512 bytes).  The TMS9900 has no general-purpose register file on-chip: its 16 "registers" actually live in a workspace in external RAM, located by a workspace pointer, so the programmers likely had to be efficient with memory here!
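As a concrete illustration (a Python sketch, not anything from this unit), here's how the TMS9900's workspace scheme maps "registers" into that tiny RAM: the CPU holds only a Workspace Pointer (WP), and registers R0-R15 are 16 consecutive words in external memory starting at WP.

```python
# Sketch of how the TMS9900's "registers" map into this tiny RAM.
# The CPU keeps only a Workspace Pointer (WP); registers R0-R15 are
# actually 16 consecutive words in external memory at WP + 2*n.
RAM_WORDS = 256          # 4x CDP1822 (256 x 4-bit) wired as 256 x 16-bit
RAM_BYTES = RAM_WORDS * 2

def register_address(wp: int, n: int) -> int:
    """Byte address of register Rn for a workspace based at wp."""
    assert 0 <= n <= 15
    return wp + 2 * n

# One 16-register workspace consumes 32 of the 512 available bytes:
print(hex(register_address(0x0000, 15)))   # R15 sits at byte address 0x1e
print(RAM_BYTES)                           # 512
```

A context switch on this CPU is just loading a new WP, which is cheap - but every workspace eats another 32 bytes of that 512-byte RAM.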

ROM board

This board has an array of custom-part-number chips, which seem to be the program storage.  To be pedantic, though, it's possible some of this is data storage as well: with such a limited amount of RAM, I wouldn't be surprised if the designers leaned heavily on hard-coded lookup tables for calculations, to save memory.

Looking up the part number shows an NSN for these, but with no info other than that they're "digital microcircuits".  Yep, very helpful.


The reason I'm confident that these are program memory, and that none are additional RAM, is that these ICs are constantly being power-cycled.  The 6 transistors along the top edge of the board, and the 54LS06 hex open-collector inverter that drives them, gate the 5V power to each column of ICs.  An address decoder on a different board (I'll go into more detail about that later) selects which column is powered up at any one time, based on the memory addresses being accessed.  I don't know whether this is a power-saving measure or a radiation-hardening feature (I'll also get into that more later).  I also don't know if the memory chips themselves have any special features, since I haven't been able to find any info about these part numbers.
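Here's a hypothetical sketch of that power-gating decode in Python; the column count matches the 6 gating transistors, but the words-per-column figure and the address-to-column mapping are pure assumptions, since I didn't trace the real decoder:

```python
# Hypothetical sketch of the ROM power-gating scheme: an address decoder
# enables the 5V rail for only the column of ROM chips being accessed.
# COLUMN_SIZE and the decode function are assumptions, not traced values.
NUM_COLUMNS = 6
COLUMN_SIZE = 0x0800   # assumed words per column

def powered_column(addr: int) -> list[bool]:
    """Return the power-enable state of each ROM column for an address."""
    col = (addr // COLUMN_SIZE) % NUM_COLUMNS   # assumed decode
    return [i == col for i in range(NUM_COLUMNS)]

print(powered_column(0x0000))   # only column 0's 5V rail is switched on
print(powered_column(0x0800))   # crossing the boundary powers column 1
```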

Logic boards 1 & 2


There sure is a lot of miscellaneous digital logic on these.  The next post will have a detailed description of how they work, but the basic idea is that they're implementing some basic external I/O and timer functions all with SSI & MSI logic chips.

When you use a microcontroller such as an AVR/PIC or even an ARM-based part, it has addressable I/O ports already built in, where you can read and write full words as well as individual bits to control or monitor external signals.  However, on a CPU like the TMS9900, there's only an address and data bus, and memory accesses: if you need I/O registers to control peripherals (such as channel selection on the ADC, discussed next) or receive button inputs, then you need to create those addressable I/O registers yourself and interface them through the CPU's bus.
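A toy model of that memory-mapped I/O scheme, with made-up addresses (the real decode map lives on the logic boards, discussed later):

```python
# Minimal sketch of memory-mapped I/O as seen from the CPU side: peripheral
# registers occupy ordinary memory addresses, so bus reads/writes reach
# them through address decoding.  These addresses are hypothetical.
ADC_DATA_ADDR = 0xF000   # made-up address for the ADC result buffer
ADC_CTRL_ADDR = 0xF002   # made-up address for the ADC control latch

class Bus:
    def __init__(self):
        self.ram = {}
        self.adc_result = 0x07FF        # pretend conversion result

    def read(self, addr):
        if addr == ADC_DATA_ADDR:       # decoder enables the ADC's buffer
            return self.adc_result
        return self.ram.get(addr, 0)

    def write(self, addr, value):
        if addr == ADC_CTRL_ADDR:       # decoder clocks the control latch
            self.adc_channel = value & 0x000F
        else:
            self.ram[addr] = value & 0xFFFF

bus = Bus()
bus.write(ADC_CTRL_ADDR, 0x0005)        # select channel 5
print(hex(bus.read(ADC_DATA_ADDR)))     # read the conversion result
```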

On a "normal" computer system from this era, you'd use other support chips meant to work with your processor, such as the NEC D71055 I/O unit sitting in my junkbox, or the Intel 82C84 clock & reset generator - you can see examples of this on the processor boards in the NZ-920 aircraft navigation computer.
There are potential reasons, though, why the designers didn't follow this path and implemented their own I/O-register and timer functions from scratch: support chips satisfying all their requirements may simply not have been available at the time - compatibility with the TMS9900's bus control signals (I don't know if those were particularly unusual), availability in military-qualified versions with wide temperature ranges and ceramic packaging, and being TTL-based instead of CMOS.  It also might have been an attempt to keep the total number of transistors involved to an absolute minimum, and their size as large as possible, to help the radiation tolerance (which is where the TTL requirement comes from too: discussed below).

Normally, integration is the right choice for reliability in common scenarios, such as with thermal cycling: solder joints are more likely to fail than a transistor or interconnections on a monolithic IC, as long as it's properly protected against ESD/overvoltage/etc.  However, the effects of radiation change the tradeoffs involved.

ADC board


This contains a nice Analog Devices ceramic hybrid, which turns out to be a 12-bit ADC, for reading sensor inputs:


The two very large DIP packages at the middle and left are the Harris(/Intersil) HI1-507-8, each of which is a dual 8:1 analog multiplexer.  The white DIPs are resistor arrays, and the 2x LM110 voltage followers plus the op-amp in the metal can (LM118) form an "instrumentation amplifier" configuration - these convert differential analog inputs from "outside" into a single-ended signal that directly drives the ADC's input.  The resistor array near the muxes forms a differential voltage divider on some (presumably higher-voltage) pairs of inputs before they reach the muxes.

The reason to use 2 of these dual-8:1-muxes is that it allows for 16 total analog input channels, of which 14 are used.  Each mux has an "enable" input, which when disabled, disconnects all internal switches completely and lets the output float.  Therefore, the outputs of both muxes are ganged together, but only one of the muxes is enabled at a time, which essentially adds an extra layer of multiplexing.  The bottom 3 bits of the ADC channel selection drive the muxes' selection inputs, and the top bit of the ADC channel selection chooses which mux is enabled, via a logic inverter (in the 54LS05 at the right edge) that creates an inverted copy of this top bit.
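That channel-decode scheme can be sketched directly (a behavioral Python model; pin-level details like the enables' active levels are simplified):

```python
# Model of the 16-channel selection: the low 3 bits drive both muxes'
# shared select inputs, and the top bit (plus its inverted copy from
# the 54LS05) chooses which of the two HI1-507-8 muxes is enabled.
def decode_channel(ch: int):
    assert 0 <= ch <= 15
    select = ch & 0b111           # shared S0..S2 select lines
    mux_b_en = bool(ch & 0b1000)  # top bit enables mux B...
    mux_a_en = not mux_b_en       # ...inverted copy enables mux A otherwise
    return select, mux_a_en, mux_b_en

print(decode_channel(3))    # (3, True, False)  -> mux A, input 3
print(decode_channel(12))   # (4, False, True)  -> mux B, input 4
```

With the disabled mux's switches fully off, its floating output can safely be wire-ganged with the active one - effectively a free 2:1 mux stage.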

One half of each mux is used for "in+" signals, and the other half is used for "in-" signals.  We'll talk later about what these sensor inputs are.

Analog output board

The opposite of the analog input board just discussed is this analog output board.  It uses the AMD DAC-08 (8-bit DAC), along with a bit of digital logic for interfacing, and a 1741 quad-741-op-amp to buffer the output and presumably provide a control loop.  The output stage is in the form of two large power transistors that seem to be arranged in a class-AB configuration, along with some 1N3189 diodes next to them in metal cans, that create a single analog output controlled by the DAC.

The giant power resistors are 4.99Ω each, in series with the incoming bipolar (both positive & negative) power rails to limit the max. current and/or share some of the power dissipation with the transistors themselves.  The conduction cooling scheme of the whole computer unit is particularly obvious here, with the power transistors attached to a metal stiffener plate that spans the whole board.  When the card is fully inserted into the computer, this metal plate is pressed up against the case at the left and right edges, for transferring heat to the case.

Here's some close-up shots: I'm guessing the series inductor is involved with output stability somehow, but didn't fully map out the circuitry here as its general structure was pretty obvious:


Voltage regulator board

That black power supply box, visible in the "open case" photos at the top and which we'll discuss in more detail in a future post, provides a regulated +5V and an unregulated ±19V (roughly).  Most of the analog stuff requires a properly-regulated ±15V, though, and the ADC in particular needs -5V in addition to +5V and +15V.  This "post-regulation" of the analog supply voltages seems to happen here.

There are 3 power transistors, all of the same type, but not much information available about them.  Next to the middle and right-hand transistors, in metal cans, are an LM105 and LM104, which are positive and negative regulator controllers meant to drive external power transistors.  The left-hand power transistor has no IC nearby but could be operated either off just a couple smaller transistors and a zener diode, or by the 1741 quad op-amp above it.

The 1741 op-amps also may or may not be involved in creating outputs that track properly at power-up and power-down.  The LM139 quad comparator at the top provides a "power good" signal used to reset a bunch of logic on the digital boards, and presumably to reset the CPU as well: there's probably one comparator for each voltage rail, with the open-collector outputs wired-OR'ed together.

I didn't bother tracing all the circuitry on this board either as it seemed relatively boring, but here's more up-close shots:


Background and purpose
Ok, so now that we know what this fire control computer looks like on the inside, and how its individual pieces work, what does it do?  How does it integrate with the outside world?

Fire control
Firing a projectile at a target, and predicting the initial angle(s) that are needed to hit it (even when the initial velocity is known), is not a trivial problem.  Ballistic motion with the effects of gravity is a standard high-school physics problem, and can be easily calculated and looked up in "gunnery tables", but all kinds of other effects can ruin your aim:
  • air resistance (notably absent on the basic physics problem sets) and its changes
  • the speed and direction at which a target might be moving (which due to the projectile's transit time means you need to aim ahead of it)
  • wind, which will blow the projectile off-course
  • distortion of the gun barrel due to temperature changes, as shots are fired and it heats up, or permanent changes over time
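The textbook part of the problem is easy to demonstrate, along with how badly drag alone disturbs it - here's a toy Euler integration with an arbitrary quadratic-drag constant, not anything resembling the real ballistics model:

```python
# Toy illustration of why drag ruins the textbook answer: Euler-integrate
# a projectile with and without quadratic air resistance.  The drag
# constant k is arbitrary, purely for demonstration.
import math

def range_m(v0, angle_deg, k=0.0, dt=0.001, g=9.81):
    """Horizontal distance travelled until the projectile returns to y=0."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while True:
        v = math.hypot(vx, vy)
        vx -= k * v * vx * dt            # quadratic drag opposes velocity
        vy -= (g + k * v * vy) * dt
        x += vx * dt
        y += vy * dt
        if y < 0:
            return x

print(round(range_m(100, 45)))            # vacuum: ~1019 m (= v0^2/g)
print(round(range_m(100, 45, k=0.001)))   # with drag: noticeably shorter
```

And that's before moving targets, wind, and barrel distortion enter the picture - hence lookup tables and a dedicated computer rather than a closed-form answer.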
(This isn't meant to abstract away the fact that "correctly hitting a target" here usually means "killing people" rather than some purely-theoretical thought experiment: just shedding some light on the technical aspects behind the often-horrifying applications)  Anyways, I found some pieces of a manual online, "TM 9-2350-255-10-3 M1 ABRAMS TANK Operator's Manual", which tangentially describe the fire control computer.  There are a couple diagrams showing the separate control panel for it, which are particularly useful:

This ties in nicely to the question of "what are all those ADC channels measuring?".  It looks like ammo type (which affects initial velocity & air resistance), ammo temperature, and barometric pressure (which likely affects air resistance) are measured and entered manually, while there are sensors for the air temperature(?), crosswind (not sure how it's measured), range to target (with a laser rangefinder), position/angle (with a pendulum hanging from the roof), and turret position in 2 axes.

There's also a "muzzle reference sensor" (MRS) which involves a small device with optical windows at the end of the barrel, meant to correct for non-straightness of the gun barrel.  I'm not sure exactly how this works.  My best guess is that the MRS has a light source (found a reference to a different MRS having tritium in it, like light-up watch dials) and there's a viewing window with a reticle: the viewing window is then manually moved (turning some potentiometers in the process) to line up the MRS in the middle.

Radiation hardening
The official NSN info for the fire control computer, linked at the beginning, mentions "nuclear hardened features".  It seems like this is still at least sometimes a requirement for military electronics today, but would probably be even more so in the Cold War times this was designed, when getting nuked was on everybody's mind.  There's actually nothing crazy in here, no super-special kinds of rare things you'd see in satellites or deep space missions meant to deal with decades of constant ionizing radiation, but some of the design choices make sense in this context.

Radioactivity and electronics do not play well together.  Low-energy ionizing radiation, well....ionizes: it knocks electrons out of their orbits, adding charge carriers (both a free electron and a hole).  For semiconductors, which depend on precisely balanced levels and types of carriers, this isn't great - the effect can be similar to spikes of leakage current through a device that's supposed to be "off" or across an insulating layer.  From the world of space exploration, there's so-called "Single Event Upsets", where an ionizing impact will flip a memory bit / change a logic state / create a voltage pulse.  When this happens with standard CMOS processes, it can also cause "latchup", by activating a parasitic SCR formed by the structuring of the substrate and the transistors within, which crowbars the power supply to ground - obviously a bad thing.  Silicon-On-Insulator (SOI) processes, or insulating substrates like sapphire (silicon-on-sapphire), avoid the latchup issue.

More-energetic ionizing radiation can do permanent damage to the semiconductor by displacing atoms inside the crystalline silicon lattice.  I'm guessing this is less of an issue here though, and more of an issue in constant-background-radiation applications.

Overall, bipolar transistor logic is more radiation-tolerant in general than CMOS logic: my non-expert understanding is that this is partly because latchup doesn't apply, partly because there's no thin insulating layers (such as between the gate and channel of a MOSFET) to be disturbed by unexpected leakage currents, and partly because a small leakage current spike is less likely to change the state of anything when there's already larger currents flowing.  This explains why the SBP9900 CPU is used instead of the NMOS TMS9900, and why all the logic is "LS"-series TTL...as well as possibly why the I/O and timer functions were implemented in SSI logic instead of using integrated peripheral chips.  Large-cell SRAM is also supposedly more robust against radiation, which would help explain the low-density SRAM chips used on the CPU board.  The extra-thick steel case also might be for radiation shielding as well as physical robustness.

One reference I saw to rad-hard electronics was about the D-37B Minuteman missile guidance computer, which had "...radiation circumvention techniques that removed all electrical power from the power distribution system, including decoupling capacitors, in less than 1 microsecond and restored to the specified voltage in a few microseconds upon command."  It sounds like powering down during an acute burst of ionizing radiation can help (by not giving the electronics the chance to malfunction and damage themselves?).  I'm wondering if this might be part of the reason the ROM chips not currently in use are powered down, if the program memory is considered particularly sensitive to damage, depending on what type of ROM it actually is "under the hood" (I can't imagine selectively-blown-fuse ROM being particularly sensitive, for example).

Anyways, hope you enjoyed.  When I have some time later I'll post a more detailed look at the logic boards, and also the power supply.
« Last Edit: February 18, 2024, 02:09:43 am by D Straney »
 

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 7972
  • Country: us
Re: Reverse-engineering a late-70's Fire Control Computer from an M1 tank
« Reply #1 on: February 18, 2024, 03:36:07 am »
Well there's a blast from the past--quite literally!   I've actually used and fired this system or the very similar earlier one, as I was part of an armor battalion in Germany in 1984. The M1 tank was brand new and we were seeing them for the first time.  This type of fire control computer didn't originate with the M1, though.  AFAIK, its first appearance was on the M60A3, which shared the 105mm main gun and many, many features with the early M1 tanks.  I may still have a crewmember's training manual for the M60A3 somewhere.

Relying on decades old memory I might answer a few of your questions. 

The air temperature and wind were measured with an actual external sensor on top of the tank.

The muzzle reference would be part of the motion stabilization system that kept the TTS (tank thermal sight) and optical sight on target even as the tank was in motion across terrain.  It probably compensated for droop as well, but IIRC the main function was to control the firing and time it so that the electrically-primed rounds would be energized right as the gun barrel was aligned with the sights.  During stabilization operation with a moving tank the 2-ton gun would oscillate a bit at 1-2Hz (I'm guessing from 40-year old observations...) and for best results you'd want it to fire right when its compensated bore position coincides with the crosshairs on the compensated sight.  I hope that makes sense.

Lookup tables are definitely the most likely way that solutions were computed, mostly because it would be faster but perhaps also because the data was obtained experimentally and that's just what they were used to doing.   I can tell you from more direct experience in artillery that lookup tables were definitely "the thing" in those days and they were still in the process of deprecating paper lookup tables. 

During this era, tanks still carried multiple types of ammunition but the main one was the APFSDS 5000+fps armor-piercing rounds and that was the one they would have been most concerned about accuracy.  IIRC (again, disclaimers due to time) all of the gunnery practice and qualification was done with APFSDS training rounds (no DU or tungsten).  When they switched to the Rheinmetall 120-mm smooth-bore guns, I think they essentially forgot about any non-saboted ammunition.

Thanks for showing that off!

« Last Edit: February 18, 2024, 04:38:35 am by bdunham7 »
A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 
The following users thanked this post: KE5FX, D Straney, Cavhat, quince

Online D StraneyTopic starter

  • Regular Contributor
  • *
  • Posts: 230
  • Country: us
Re: Reverse-engineering a late-70's Fire Control Computer from an M1 tank
« Reply #2 on: February 18, 2024, 04:52:55 pm »
Wow, very interesting, thanks!  That does make sense with the timed firing, kind of like old propeller planes with mechanical systems to time shots for only when the propeller was in the correct position (to avoid shooting their own propeller).  Hadn't thought about the barrel flexure (the AC component as well as the DC component), but that makes sense too - I've been studying beam theory, and can imagine something like a gun barrel, with such a long aspect ratio, having some nice cantilever motion to it no matter how stiff you make it.

So it sounds like maybe the muzzle reference sensor had a laser or something similar, and a sensor in the turret that would detect when it was lined up.  Could be a good use for one of those four-quadrant photodiodes.

Also I forgot to mention, there's a mystery compartment on the side that turns out to be a battery (according to the NSN record; my other theory at first was a Geiger–Müller tube but that didn't make sense, it still measured a few hundred mV):


I didn't trace out the connections but I'm guessing it keeps the RAM active while the computer is unplugged.
« Last Edit: February 18, 2024, 04:55:52 pm by D Straney »
 
The following users thanked this post: fourtytwo42, Cavhat

Online D StraneyTopic starter

  • Regular Contributor
  • *
  • Posts: 230
  • Country: us
Re: Reverse-engineering a late-70's Fire Control Computer from an M1 tank
« Reply #3 on: February 18, 2024, 08:34:21 pm »
Ok, let's talk about the...

Logic boards
Here's each chip's function and roughly what it does:


The diode arrays are 'custom' part numbers 12279505 (NSN 5961-01-087-9128), used to clamp external signals between +5V and gnd.  There's also some discrete diodes scattered around that seem to be connected across the power rails, probably Zeners to protect against overvoltages on the rails caused by clamped signals.  One of these has blown up on logic board #1: you can see the charred spot near the top edge.  When I opened the airtight(?) case for the first time, the blast of however-many-year-old "burnt electronics" smell that hit me was unexpectedly powerful: I was glad I had ventilation.

The blue resistor arrays are all 4.7KΩ isolated resistors.  These are used as pull-ups/pull-downs for external inputs, and also as the pull-up for a giant "main reset" signal coming from the voltage regulator board - this "main reset" is probably a "power good" signal from the comparators on the voltage regulator board, as discussed earlier.

Address decoding
A good place to start is with all the 54LS138s hanging around.  These are "3:8 decoders", which drive one of their outputs low, corresponding to the binary number on their 3 address inputs.  You can think of it as a binary-numbering to "one-hot" translator.  This kind of function here is essential for doing the address decoding: selectively enabling different peripherals when the correct memory address is present on the CPU's address bus, to allow it to read or write register values when desired.

In the case of read operations, a 54LS138 output will control an active-low "enable" pin on a buffer connected to the CPU's data bus, to set the data bus state to the "read data".  In the case of write operations, a 54LS138 output will control an active-low "latch" or "clock" pin on some flip-flops with their inputs connected to the CPU's data bus, to store the data bus state (the "write data") in the flip-flops.

Each 54LS138 can only decode 8 addresses, but they also have 3 enable pins, all of which must be in the correct state (one low, two high) to enable any of the outputs.  This gets used for additional bits of address decoding, and to distinguish between read and write operations (as shown in the example diagram above).  You can also use cascaded decoders, where one decoder on address bits [5:3] will selectively enable 8 more decoders on address bits [2:0], for example - however, from tracing the connections, this isn't done on these boards.
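A behavioral model of the 54LS138 makes the enables-as-extra-address-bits trick concrete:

```python
# Behavioral model of the 54LS138 3:8 decoder as used for address decoding:
# one active-low output goes low only when all three enables are satisfied
# (G1 high, /G2A and /G2B low).
def ls138(a: int, g1: bool, g2a_n: bool, g2b_n: bool) -> list[int]:
    """Return the 8 active-low outputs Y0../Y7 for select input a (0-7)."""
    outputs = [1] * 8                  # inactive = high
    if g1 and not g2a_n and not g2b_n:
        outputs[a] = 0                 # selected output pulls low
    return outputs

# The enables can carry higher-order address bits and the read/write strobe:
print(ls138(5, g1=True, g2a_n=False, g2b_n=False))   # Y5 low, others high
print(ls138(5, g1=False, g2a_n=False, g2b_n=False))  # disabled: all high
```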

ADC data & external register
Here's an example of how this works on logic board 1:

When there's a CPU read from the memory address assigned to the ADC, one of the address decoders drives the "Read 0" signal low and enables the 54LS365 buffer, which puts the ADC result on the data bus.  When there's a CPU write to the external I/O port shown here, one of the address decoders drives the "Write 0" signal low and latches the data bus's data into the 54LS174 register.

Other logic board 1 I/O
The other "simple I/O" like this on logic board 1 includes...
  • ADC data (in)
  • 12-bit output register (out) - this might be for controlling the 5-digit display on the control panel, as opposed to the bit-wise outputs we'll see later
  • an ADC control register (out) - this contains the 4 bits that set the ADC's mux channel, as well as the "start conversion" bit that runs directly to the corresponding ADC pin
I'm not sure exactly what the 54LS74 dual D-flip-flop does on this board; didn't get quite that far.
The bottom-right address decoder is what drives the 6 individual signals to the ROM board that power up one column at a time of ROM chips when accessed.

Event timers, interrupts, and button(?) input
Along the left edge of logic board 1 are 4x 4-bit 54LS161 counters, which run off of what I'm pretty sure is the CPU clock, and are cascaded to form a much larger effective counter.  The outputs aren't used directly for anything, but the "carry out" signals from two of them are picked off and used for timing events on logic board 2.  I'm not sure if the counters auto-reset or not (a trace ran under a chip and I couldn't find where it came out), but heavily suspect that they do, to continue the periodic event timing (with maybe a "disable" control from software).  The "pre-load values" that these counters are reset to each time sets the timing intervals: you can see some solder jumpers next to the chips, which allow selecting between a few timing options when the board's assembled.
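For a feel of the timescales involved, here's the back-of-envelope math for a 16-bit preloadable counter; the 3 MHz clock and the reload-on-carry behavior are assumptions, since neither the actual CPU clock frequency nor the preset jumper settings were traced:

```python
# Back-of-envelope sketch of the cascaded 54LS161 timing: four 4-bit
# counters form a 16-bit counter that (assuming it reloads a preset value
# on carry-out) overflows every (2^16 - preset) clocks.  The 3 MHz clock
# is an assumption, not a measured value.
def timer_period_us(preset: int, f_clk_hz: float = 3e6) -> float:
    """Interval between carry-out events for a 16-bit preloadable counter."""
    return (2**16 - preset) / f_clk_hz * 1e6

print(timer_period_us(0))         # full count: ~21845 us at the assumed 3 MHz
print(timer_period_us(0x8000))    # preset halfway: half the interval
```

The solder-jumper-selectable preset bits would shift these intervals around, which matches the "a few timing options at assembly time" observation.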

Let's look at what these periodic "event" signals are used for:

Logic board 2 contains a 54LS148 "priority encoder", which is meant to be used for processor interrupts: when one or more inputs is driven low, it drives its own IRQ output low, and puts a 3-bit number on its output corresponding to the highest-numbered input that's active.  This is used to select between multiple concurrent interrupts, and select the highest-priority one to service first.  In this case, though, only two of the inputs are used; #5 and #6 for some reason (maybe to make PCB trace routing easier).
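The 54LS148's behavior can be modeled in a few lines (un-inverting the code output for readability; the real chip's outputs are active-low too):

```python
# Behavioral model of the 54LS148 priority encoder: active-low request
# inputs, and the 3-bit code reflects the highest-numbered active input.
def ls148(inputs_n: list[int]):
    """inputs_n: 8 active-low request lines.  Returns (irq_n, code)."""
    active = [i for i in range(8) if inputs_n[i] == 0]
    if not active:
        return 1, 0          # no request: IRQ output inactive (high)
    return 0, max(active)    # highest-numbered input wins

lines = [1] * 8
lines[5] = 0                 # interrupt source on input #5
lines[6] = 0                 # and a higher-priority one on #6
print(ls148(lines))          # (0, 6): IRQ asserted, input 6 wins
```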

There's two J-K flip-flops (from the one J/~K flip-flop chip on the board), which both generate interrupts.  Each one has its output latched on by an event of some kind, and is then cleared under software control by writing to a particular memory address (the "clear" pins are connected to outputs on one of the address decoders).  [It doesn't matter what data is written; any write operation will clear the interrupt]
One of these interrupts is triggered by the event timer, likely as some kind of "polling reminder" or something similar for timing software execution.  The other interrupt is triggered by an external pulse on a differential input to one half of the SNJ5115 differential receiver, which is probably connected to one of the pieces of external equipment in the "outside world".  I don't know which of the sensors etc. is generating pulses, but whatever it is, it appears to be important enough to cause a CPU interrupt.

The other (more frequent) event timer signal is used to sample (via a D-flip-flop) a single-ended external logic input, which could be something like a button connected to ground.  When this external input is low (button is pressed, for example), it latches a second D-flip-flop's output as "active", which can then be read via the discrete bit inputs (discussed next).  Just like the interrupts, writing to a specific memory address then clears this latching "button pressed" indication.  I'm assuming this is connected to an important button on the control panel where you don't want to miss it being pressed, but maybe the control panel has its own keypad decoder internally which generates a pulse whenever any of the keys is pressed.

Bit-wise I/O
Most of the remaining logic on logic board 2 is dedicated to bit-wise I/O, through 54LS251 8:1 addressable muxes (for reading individual bits) and DM9334 addressable latches (for writing individual bits).  This differs from the "word-style" 12-bit registers on logic board 1, where the entire data bus is latched at once: instead, here only one bit of the data bus is actually used, and each bit has its own memory address.

Why would you do it this way, and waste memory space by only using one of the 16 data bits at each memory address?  Well, first off, with a 16-bit address space (65K words) and with very little RAM, you've got no shortage of address space to use for I/O, so there's no need to conserve.  More importantly, the address-bus/data-bus scheme of processor access to peripherals is optimized for working on full 16-bit words at a time, and doesn't work very well when you want to control individual bits.

On a modern microcontroller, there are dedicated instructions for changing only one bit at a time in a multi-bit output register.  If you can read the state of your output registers, you can also do a "read-modify-write" operation to set a single bit: read in the output values, do some bit masking math, then write the modified value to the outputs again.  However, this requires being able to read back your output register states.  In this computer, with its I/O logic built from individual logic gates, adding a "read" capability to the output registers would require a whole lot more address decoders and buffers - so instead, it's easier to just assign each bit its own memory address, and easily write to individual bits with minimal hassle. 

Reading individual input bits from a full 16-bit-wide input register is easier than writing bits, but it still requires doing a "bit mask" AND operation in software, so assigning each input bit its own memory address trades off (plentiful) address space for less code required in the program ROM.
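Here's the contrast between the two schemes, sketched in Python: a read-modify-write on a shared word register (which requires a read-back path) vs. the DM9334-style one-address-per-bit approach (which doesn't):

```python
# Contrast of the two output-register schemes: a read-modify-write on a
# shared word register vs. the DM9334-style "one address per bit" latch,
# which needs no read-back path at all.
def rmw_set_bit(port_value: int, bit: int, state: bool) -> int:
    """Read-modify-write: requires knowing the current register contents."""
    return (port_value | (1 << bit)) if state else (port_value & ~(1 << bit))

class AddressableLatch:
    """DM9334-style: the address picks the bit, the data bus supplies it."""
    def __init__(self):
        self.bits = [0] * 8
    def write(self, addr: int, data_bit: int):
        self.bits[addr & 0b111] = data_bit & 1   # no read-back needed

print(hex(rmw_set_bit(0b10100000, 1, True)))     # 0xa2
latch = AddressableLatch()
latch.write(3, 1)                                # one bus write sets one bit
print(latch.bits)                                # [0, 0, 0, 1, 0, 0, 0, 0]
```

With 65K words of address space and almost none of it occupied by RAM, burning 8 addresses per latch is essentially free, while the RMW path would have cost extra decoders, buffers, and ROM code.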

Hopefully this explanation makes sense, and the schematic below will help.  All the muxes and latches are driven by the same lower 3 bits on the address bus, and address decoders select which latch or mux to enable by looking at a higher-order set of address bits.

In the middle is the simplest section, which just provides a set of almost 32 individual input bits.  Most of these are from external inputs, protected via the diode arrays and a pull-up resistor.  A few of the bit inputs are internal states though, such as the "latched button-press(?)" signals and the individual interrupt signals described above in the events & interrupts.

The bottom section is also fairly simple, with external bit outputs.  6 of these outputs are driven by 54LS365 buffers, while 4 of the outputs are driven by 2N2907 PNP transistors.  These transistor-driven outputs are likely small loads like lamps (in the control panel) or relay coils.

The top section is slightly strange.  An external differential signal comes in through the other half of the SNJ5115 diff. receiver, and is fed to the data input of a latch.  All the outputs of the latch are connected to the corresponding inputs on a mux.  What this means is that writing to one of these 8 memory addresses for this latch will ignore the "write data" but store the external-input bit at that address.  These 8 stored-bit addresses can then be read back individually through the mux.  I don't know what the reasons are for wanting to be able to store a bit in one of 8 locations rather than reading it directly, but there's probably some special interfacing process that this makes a lot easier on the software side (esp. considering the limited RAM).  I was wondering if this might be part of a really dumb software-based serial input, but that doesn't make sense for a whole lot of reasons...

It is interesting though to contrast this with more modern equipment, in its self-contained nature.  Systems today are probably much more heavily computer-controlled, and so a modern fire control computer probably communicates via serial links to a bunch of other computers and electronics systems onboard.  This computer, on the other hand, does its job with just a control panel interfaced via discrete digital signals and analog sensor inputs.

Differential pulse outputs
Finally, a strange part of this that I don't fully understand is the 8 differential drivers at the right-hand edge of logic board 2.  These drive external signals, possibly to a motor controller that adjusts the turret and gun position?  Each differential driver has multiple inputs which are AND'ed together; some of these are ganged between all the diff. drivers and come from one of the D-flip-flops, which is latched high by an address decoder signal.  There's also some involvement with a couple signals on the CPU board that I wasn't able to trace.  My best guess is that this is used to generate output pulses when a memory write happens to specific addresses.  Again, having some of this functionality in hardware offloads work from the program code with only a few extra ICs.  Output pulse generation on command would probably be useful for (as speculated above) driving the motor controller, "telling" it to move a certain amount on a specific axis.
« Last Edit: February 18, 2024, 08:36:27 pm by D Straney »
 
The following users thanked this post: SeanB

Offline SeanB

  • Super Contributor
  • ***
  • Posts: 16349
  • Country: za
Re: Reverse-engineering a late-70's Fire Control Computer from an M1 tank
« Reply #4 on: February 19, 2024, 07:16:00 am »
On the ROM board, those transistors are there to save power by depowering any block of memory that is not being used: those are likely all mask ROM, and in TTL they are power hogs.  Each block is only powered up when the processor is about to access it, so the switch doubles as a separate address enable for that section, and one less decoder is needed.  Note there are no decoupling capacitors other than the 10uF 25V CTS12 wet-slug tantalum on the bottom by the connector; the address decoder there drives the transistors, likely 2N2905s, that switch the ROM blocks on.

I've seen that done in a lot of avionics.  The one I worked on the most used two identical 5V rails, each capable of around 20A, with one of them switched at 400Hz: it powered up the TTL ALU and logic parts only when the ADC system was active, at the peak of phase A on the 400Hz power bus, as otherwise you would overheat the unit.  These were 1960s-designed SMPS units using BUX20 power transistors and tapped air-cored inductors running around 1MHz, almost all discrete transistors inside, with really little capacitance on the outputs but a massive amount on the input.  Most of the output capacitance came from two 100uF 25V wet-slug tantalum capacitors, plus the power and ground planes of the 16-layer PCBs on each of the cards on the backplane.  Even with fans running, they got hot enough on test that you needed two 120mm fans blowing onto the case to keep it below 70C.  In the airframe they had their own AC feed directed over them, first in the line.  16kg of computer, and more than half of that was ceramic DIP14 and DIP16 packages.
 

Online D StraneyTopic starter

  • Regular Contributor
  • *
  • Posts: 230
  • Country: us
Re: Reverse-engineering a late-70's Fire Control Computer from an M1 tank
« Reply #5 on: February 19, 2024, 02:27:13 pm »
Thanks, I love hearing about the details of boundary-pushing systems like that, and the interesting design choices that had to be made.  The mask-ROM power argument makes sense: as permanent storage it wouldn't be radiation-sensitive at all, but I can imagine the address decoders for however-many KB, built from that many individual internal BJTs, having some serious current draw.

Anyways, finally, here's the...
Power Supply
This takes in 28VDC and produces a regulated +5V, plus an unregulated pair of bipolar analog supplies (roughly ±19V) for the circuitry we've already looked at.

Physical layout
Here's what the power supply looks like with the cover removed:

I like the 3D-style construction, with 3(!) separate boards with wires running between them, and some magnetics and a bulk capacitor placed in between the two metal heatsink plates:





Overview
Electrically, the power supply is arranged as a buck converter, followed by a self-oscillating push-pull converter which drives a transformer to produce the (isolated) multiple output voltages.  Here's a schematic:

The 3 boards are split up logically as...

The buck converter:


The black box sitting on the "floor" between the two boards in one of the photos above is the buck inductor.  The buck diode is one of the stud-mount diodes mounted to this side's heatsink plate, and the buck power transistor is a TO-3 package mounted to the heatsink plate too with its leads sticking through the board.

The push-pull converter:


The large metal box hanging in the air between the two boards is the transformer driven by the push-pull converter (T3).  The output terminals that feed the +5V and ±19V outputs are on the top and have wires attached, while all the other terminals are pins that poke through sockets in the board as seen above.
The output diodes are stud-mount devices that can be seen mounted on the heatsink plates in earlier photos, and the two power transistors are (like with the buck converter) TO-3 packages mounted to the heatsink plate.  Next to the power transistors on the heatsink, not visible here, is the power resistor in series with T2's primary, and the thermal switch that gates the buck converter's control power.

The output capacitors and crowbar SCRs:

The metal box directly underneath the input connector, at the bottom of this photo, is the input EMI filter.

I've worked on a variety of electronics R&D efforts, but the most consistent theme of my career has been power electronics (although usually more interesting stuff than this, like miniaturized high-frequency converters), so I was curious to see how a power supply would be built for a semi-rad-hard environment at a time when switch-mode power supplies were still relatively new.

Power devices
Good power MOSFETs are really what enabled modern power converters, with their easy gate drive (no steady-state current needed) and their ability to turn on and off very fast with low switching losses.  Here, though, the switching devices are all bipolar transistors, possibly because power MOSFETs weren't yet available as mil-spec parts, and/or because of radiation susceptibility.  As BJTs, they require a lot of base current when on, and are also slow to turn off because of the carrier recombination (storage time) involved.  The reverse-recovery behavior of the diodes here creates significant switching losses too, further limiting the switching frequency - I don't think good "zero-recovery" Schottky diodes were common yet at the time either.  [The inductors in series with the output diodes are, I think, meant to partially suppress reverse-recovery current spikes, but the details of that are a whole separate topic on its own: go look at Amobeads for a similar principle.]  Because of these limitations in the power devices, the switching frequencies are more likely to be in the single-digit-kHz or tens-of-kHz range than in the hundreds-of-kHz or MHz ranges expected today.  The push-pull converter measured 13 kHz switching with no load, and the buck converter is probably similar.  The efficiency wasn't great either: when I powered up the supply with no load, it dissipated 6W on its own!
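A quick back-of-envelope calculation shows why slow devices cap the frequency.  All the device numbers below (currents, transition times) are assumed for illustration, not measured from this unit:

```python
# Rough switching-loss estimate for the buck stage, using the classic
# triangle approximation of 0.5*V*I*t dissipated per switching transition.
# Every number here is an assumption chosen to be plausible for the era.

V_sw = 28.0    # volts across the device while it switches (buck input)
I_sw = 2.0     # amps being switched (assumed load current)
t_on = 0.5e-6  # BJT turn-on time, assumed
t_off = 2.0e-6 # BJT turn-off time (storage + fall), assumed - the slow part

def switching_loss(f_sw):
    # Energy lost per cycle times cycles per second
    return 0.5 * V_sw * I_sw * (t_on + t_off) * f_sw

for f in (13e3, 100e3, 500e3):
    print(f"{f/1e3:6.0f} kHz -> {switching_loss(f):5.2f} W switching loss")
```

With these assumed numbers the loss is under a watt at 13 kHz but grows linearly with frequency, so pushing a slow BJT into the hundreds of kHz would cook the heatsink plates.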

Topology
Splitting the power conversion into two separate stages makes a lot of sense for this application.  It needs isolation and multiple output voltages, both of which require a transformer.  The self-oscillating push-pull converter is a nice, simple way to drive a transformer robustly with a minimal number of parts, but it also can't be regulated very easily - it works best when left alone to do its own thing.  (The voltage stress on each transistor is also double the input voltage.)  Therefore, to regulate the output, the also-relatively-simple buck converter precedes the push-pull and steps down the voltage to a variable degree to regulate the +5V output.
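Numerically, the division of labor looks something like this.  The turns ratio and diode drop below are assumptions for illustration; the push-pull just transforms its input at a fixed ratio, so all the regulation maps back onto the buck stage's duty cycle:

```python
# Sketch of the two-stage arrangement (turns ratio and drops are assumed).
# The buck regulates an intermediate bus; the push-pull runs at essentially
# 100% combined duty, so V_out ~= n * V_bus - V_diode at the +5V rectifier.

V_in = 28.0         # nominal 28VDC input
V_out_target = 5.0  # regulated output
V_diode = 0.8       # assumed output rectifier drop
n = 0.25            # assumed push-pull turns ratio Ns/Np for the 5V winding

V_bus = (V_out_target + V_diode) / n  # bus voltage the buck must produce
duty = V_bus / V_in                   # ideal buck: V_bus = D * V_in
print(f"intermediate bus: {V_bus:.1f} V, buck duty: {duty:.2f}")
```

The ±19V outputs then come "for free" from extra secondary windings, riding along on whatever regulation the +5V feedback loop provides - which is why they're described as unregulated.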

Buck converter & control
The buck converter is controlled by a few discrete transistors; I didn't fully map out the circuit as it was hard to look at the bottom-side traces on the board without removing an absurd number of screws.  All the control circuitry has to be referenced to the emitter of the buck converter's power transistor, to provide its base drive: there's a winding on T3 to supply the buck converter's control power, and I assume there's also some sort of "trickle-power" startup mode where it can charge its own control power from the input voltage, before the push-pull is active.

The strange arrangement of the buck converter, with the inductor on the low side and an NPN transistor (generally better electrical properties than a PNP), means that both its output node and the NPN's emitter are "flying": they move around relative to the 28VDC input and the +5V/±19V outputs with each switching cycle.  This presents an isolation challenge: how do you get control signals from the output back to regulate the buck converter's operation?

Regulation is done with the classic 723 general-purpose regulator IC, in a metal can on the push-pull board.  This looks at the +5V output through some remote sense connections on the output connector (used to compensate for voltage drop in the power wiring), and produces a feedback signal on its "OUT" pin.  This sets the voltage on the center-tap of the feedback transformer's primary winding.  Due to the way this primary winding is wired, with diodes feeding it with switching-frequency pulses from an auxiliary winding on main power transformer T3, it produces pulses on the secondary winding with a variable amplitude controlled by the 723's output voltage.  If it isn't obvious how exactly this works or anyone's curious I can draw a diagram to explain it - it's a clever scheme!  Anyways, this feedback transformer produces an isolated train of feedback pulses which seem to get averaged/filtered by the buck converter control circuitry to use as a control for on-time or duty cycle.

Optoisolators are the standard isolation method in power supplies: take apart a laptop or phone charger and you'll almost always find one in the feedback path.  For "high-reliability" applications like this one, though, they have some limitations.  I don't know the details myself, but I've heard that the insulating material used internally darkens with age, blocking light and decreasing the optoisolator's gain: not good for something that might have to work for a couple of decades while active most of the time, or spend 10 years in storage as a replacement part.  Also, again I don't know for sure, but I'm guessing the phototransistor or photodiode in an optoisolator is particularly sensitive to ionizing radiation: a gamma-ray photon or a neutron ionizing a silicon atom works by the exact same mechanism as light detection in these sensors.  So overall, this is why the use of a transformer for feedback isolation makes sense here, even if it's unconventional.

Push-pull converter
This looks like a standard blocking oscillator: the toroidal transformer (T2) provides positive feedback that keeps one transistor fully on until the rising magnetizing current either (1) grows beyond what the supplied base current can keep in saturation, or (2) intentionally saturates the transformer core so it loses coupling.  [This second method was a favorite for very simple self-oscillating power supplies in fluorescent lamps, using square-BH-loop magnetic materials with well-defined saturation points.]  When this transistor turns off, the flyback action automatically turns the other transistor on, and the cycle repeats.
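If T2 does use controlled saturation, the frequency falls out of the core's volt-second capacity.  The core parameters below are pure assumptions (I didn't pull the transformer apart), chosen only to show that values in a plausible range land near the measured 13 kHz:

```python
# Frequency estimate for a saturating-core oscillator.  Each half-period is
# the time for the applied volt-seconds to swing the core from -B_sat to
# +B_sat:  V * t_half = 2 * N * B_sat * A_e,  so  f = V / (4 * N * B_sat * A_e).
# All core numbers are assumed, not measured.

V_drive = 20.0  # assumed volts across the driven half-winding
N = 13          # assumed turns on that winding
B_sat = 0.3     # tesla, assumed square-loop material
A_e = 1.0e-4    # m^2 core cross-section, assumed

f_osc = V_drive / (4 * N * B_sat * A_e)
print(f"estimated oscillation frequency: {f_osc/1e3:.1f} kHz")
```

Note the frequency scales directly with the drive voltage, which is one reason a saturating oscillator is happiest running from a steady bus - another argument for doing the regulation upstream in the buck stage.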

Notable here is the negative bias voltage on the center tap of T2: this is probably to put a negative voltage on the base of the transistor that's off, and turn it off faster for lower switching losses & better efficiency.  Some self-oscillating push-pull converters will use base-drive windings on the main power transformer, and so use only one transformer overall: here, though, the use of a separate base-drive transformer allows for both...
  • Controlled transformer saturation as a turn-off mechanism (if that's how it's designed - it may not be)
  • Disable control using an extra winding
The two diodes and a transistor at the bottom of the schematic essentially short an additional winding on T2, which prevents any base voltage from developing and stops any oscillation.  This is implemented as an over-voltage protection, which keeps the push-pull converter from switching when its input voltage from the buck converter is too high.  Because the collector voltage seen by each power transistor is 2x the input voltage, plus overshoot, starting up the push-pull converter when its input voltage is too high would result in much higher-than-expected collector voltages.  This is likely important for edge cases like startup and shutdown in particular, where the buck converter may not yet be operating correctly and the push-pull converter needs to wait until its output is in the proper range.

Crowbar circuits
A common feature of older avionics and military power supplies is a crowbar on the output, to protect against overvoltages caused by power supply failures that would threaten to damage the rest of the computer.  On something less repairable like a modern laptop, where everything's much cheaper and on one board, it doesn't make sense to try to protect the "downstream circuitry" separately from the power supply.  Here, though, where each of the removable boards is seriously expensive, and the whole military logistics system has to keep spares in stock at different (especially remote) locations, it makes a big difference to have to replace only the power supply, rather than the entire computer, if the power supply malfunctions!

The crowbar, if you're not familiar, is simply an SCR across the output voltage, triggered by an over-voltage detection circuit.  If the output voltage goes above some pre-set threshold, it triggers the SCR and shorts the power supply output, usually blowing a fuse in the process (although I don't see any fuses here).  The crowbar circuit uses a 4.7V zener (the 1N750) as a voltage reference, with an adjustable voltage divider from the output to the base of an NPN transistor.  When the +5V output is high enough that the divided base voltage reaches roughly 4.7V plus a 0.6V Vbe, the NPN transistor turns on.  This turns on the PNP transistor, which fires the SCR gate and crowbars the +5V output.
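The trip point is just the divider equation worked backwards.  The resistor values below are assumed (I didn't record the real ones), picked to put the threshold a bit above the nominal +5V:

```python
# Crowbar trip point for the +5V rail.  The NPN fires when the divided
# output voltage reaches V_zener + V_be.  Divider values are assumptions.

V_zener = 4.7  # 1N750 reference voltage
V_be = 0.6     # NPN base-emitter drop
R_top = 1.7e3  # assumed divider values - the real ones weren't recorded
R_bot = 10e3

# Divide-down ratio R_bot/(R_top+R_bot), inverted to find the rail voltage
# at which the base sees V_zener + V_be:
V_trip = (V_zener + V_be) * (R_top + R_bot) / R_bot
print(f"+5V rail crowbar fires at about {V_trip:.2f} V")
```

With these values the SCR would fire around 6.2V, enough headroom above 5V to ignore transients but well below where TTL starts dying.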

The ±19V outputs share a single SCR across both supplies, but use two copies of the trigger circuit (with different resistor values, of course) so that either a +19V or a -19V overvoltage can crowbar the entire output.

If you look closely at the board image above, you can see that one of the resistors in each crowbar circuit (marked "swappable" on the schematic) is mounted in sockets rather than soldered directly.  This would've been done so the crowbar threshold could be tuned on each individual unit by selecting a resistor value during manufacturing test, probably to account for tolerances in the zener voltage.  The same is done with one of the resistors around the 723 regulator, to tune the +5V output's regulation target.
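The factory-trim step implied by that socketed resistor might look something like this.  All the values here are invented for illustration (stock list, target, divider); the idea is just that a measured zener voltage selects the nearest stock resistor:

```python
# Hypothetical sketch of the select-on-test step: given a measured zener
# voltage (they vary unit to unit), pick the stock top resistor that puts
# the crowbar trip point closest to the target.  All values are invented.

V_be = 0.6
R_bot = 10e3
V_trip_target = 6.2
stock = [1.0e3, 1.2e3, 1.5e3, 1.8e3, 2.2e3, 2.7e3]  # assumed parts drawer

def pick_r_top(v_zener_measured):
    def trip(r_top):
        return (v_zener_measured + V_be) * (r_top + R_bot) / R_bot
    return min(stock, key=lambda r: abs(trip(r) - V_trip_target))

# A unit with a low zener needs a larger top resistor to hit the same trip:
print(pick_r_top(4.5), pick_r_top(4.9))
```

Select-on-test resistors like this were a cheap alternative to trimpots, which drift and vibrate out of adjustment - a sensible trade for something bolted inside a tank turret.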

Anyways, I hope this all made sense.  Let me know if there's anything that's confusing.
« Last Edit: February 19, 2024, 02:29:38 pm by D Straney »
 
The following users thanked this post: Cavhat

Offline bdunham7

  • Super Contributor
  • ***
  • Posts: 7972
  • Country: us
Re: Reverse-engineering a late-70's Fire Control Computer from an M1 tank
« Reply #6 on: February 19, 2024, 02:59:24 pm »
Quote
Radiation hardening
The official NSN info for the fire control computer, linked at the beginning, mentions "nuclear hardened features".

I wonder whether "nuclear hardened" means radiation, EMP, or both?  The device itself sits within the turret of the tank, which provides considerable shielding from most radiation.  The shielding is perhaps less effective against high neutron flux, but still much better than direct exposure.  However, the various wiring and external connections would be susceptible to EMP to some extent.  IIRC, the M1 had the same manual backups as the M60A3, where the entire turret and fire control system could be operated with hand cranks and pumps without any electrical power at all. 
A 3.5 digit 4.5 digit 5 digit 5.5 digit 6.5 digit 7.5 digit DMM is good enough for most people.
 

Online D StraneyTopic starter

  • Regular Contributor
  • *
  • Posts: 230
  • Country: us
Re: Reverse-engineering a late-70's Fire Control Computer from an M1 tank
« Reply #7 on: February 19, 2024, 05:13:34 pm »
Hmm, good point about the EMP aspect.  I'd have to imagine "nuclear hardness" means some level of both.  It's interesting to compare with the Spendex 50 EMP-rated phone, with its multiple isolated compartments and heavy surge protection on the I/O terminals; there's definitely not that level of EMP hardening on the I/Os here, but then this unit isn't connected to miles of phone wire acting as a giant antenna (and it has the conductive tank body around it to do at least a little EM shielding too).

Offline Cavhat

  • Contributor
  • Posts: 11
  • Country: us
Re: Reverse-engineering a late-70's Fire Control Computer from an M1 tank
« Reply #8 on: February 20, 2024, 03:40:53 am »
Hi, former 19-series here who served on the IPM1 and M1A1 when my unit was organized as divisional heavy cavalry. Thank you to bdunham7 for providing excellent background information, and I’d like to offer just a couple clarifications.

The MRS simply corrects for heat-induced gun tube droop (the correction is a manual process that must be performed every ~30 rounds, if I recall correctly, and definitely when you start noticing all your rounds falling short), while the turret's gyroscopes provide the fundamental sensor data needed for shoot-on-the-move capability.  Also, while APFSDS-T (or "sabot" for short) is king and is used for engaging tanks and helicopters, HEAT (high explosive anti-tank) is still used to engage light-armor targets such as armored personnel carriers.  There's also the MPAT round, which has a programmable proximity fuse, but we never saw anything that sexy.  But yes, the more "exotic" rounds (e.g., beehive) were discontinued after the M1/IPM1, and the M1A1 and newer don't even have an ammunition selector position on the gunner's control panel for that round.

As for lookup tables, one kind of data that definitely exists as a lookup table in that fire control system is ammunition ballistic data by lot number.  When rearming, the gunner had to use the ammunition selector in conjunction with the computer control panel (the image shown in the first post) to select the ammunition type and input its lot number.  I had always wondered how the ballistics data was programmed into the computer in the first place, and when/how it was updated with new lots, but I never learned (primarily because I was an end user and not a maintainer of the weapon system).

Thank you both for this excellent teardown and a trip down memory lane!

Edit: clarification on the beehive ammunition availability after the M1/IPM1.
« Last Edit: February 20, 2024, 04:59:58 pm by Cavhat »
 
The following users thanked this post: D Straney
