Bit slice was always, in essence, a research tool. While you could build systems with it, IC technology was advancing so fast that bit slice would always be overrun quickly. I suppose there were a few designs that never made it to high-volume production where bit slice was the right choice.
I didn't realise these graphics terminals, arcade games (and computers) were research tools:
PDP-11/23, PDP-11/34, and PDP-11/44 floating-point option, DEC VAX 11/730
Tektronix 4052, Pixar Image Computer, Ferranti Argus 700,
Atari's vector graphics arcade machines
and many others
https://en.m.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips
Dude, read what I write: "bit slice would always be overrun quickly". Every one of the examples you cite was either very low volume or quickly overrun by fast CPUs. Why do you continue to argue about this???
I, and other people, did read what you wrote, and have pointed out the inaccuracies.
The DEC machines were the highest volume (mini)computers of the time. Not research tools.
How long were the bit-slice models sold? Like I said, DEC had the LSI-11 in 1975, using custom LSI chips. It was not designed to be especially fast, but it was inexpensive (relatively speaking). I did a bit of reading and found that the VAX-11/730 and VAX-11/725 (same CPU) were the only bit-slice renditions of the VAX line. DEC was using custom LSI for the VAX line, and used bit slice to build less expensive and much slower versions of it. So, in reality, this was not a "high performance" machine using bit slice, but the low-end economy models! LOL
Arcade machines were widely distributed production machines. Not research tools.
Sorry, arcade machines were not widely distributed in any real sense. They sold a tiny fraction compared to high-volume devices like PCs. If you consider arcade machines to be "high volume", then you are right: bit slice was wildly successful. But again, how long were the bit-slice models sold before being replaced with much cheaper CPU-based designs?
Everything was "quickly overrun" in the 70s and 80s, from mainframes downwards. In the ARM/x86 era, it may be difficult for youngsters to imagine, but that was a "pre-Cambrian" evolutionary era.
And I'm not a "dude".
Ok, dudette...
Yes, every individual design is quickly obsoleted, but that's not the same as obsoleting a technology, or design approach. Once LSI started producing CPUs with significant processing power, they outpaced what could be done in bit-slice and that technology was effectively buried. Do you actually read what I write? Read the next paragraph carefully.
Bit slice was made up of TTL logic slice chips. That was faster than CMOS in 1975, when the devices were introduced. But any time a signal goes off chip, it is much slower than a signal that stays on the chip. The carry chain of a bit-slice design had to propagate through multiple chips, putting a fundamental limit on speed. By 1980 or so, single-chip CMOS was faster, because the entire CPU could be put on one die. Why didn't they use CMOS for bit slice? Because bit slice was effectively dead at that point. They tried ECL, which gave more speed, but at a huge cost in power. So it ended up in specialized products with big price tags and big power budgets.
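The off-chip carry penalty described above can be sketched with a toy delay model. The nanosecond figures below are made-up round numbers for illustration, not Am2901 datasheet values; the point is only that cascading slices adds chip-boundary delays that a single-die ALU never pays:

```python
# Toy model: why a ripple carry through cascaded bit-slice chips is slower
# than the same carry chain kept on one die. Delay figures are hypothetical.

ON_CHIP_CARRY_NS = 10   # assumed carry delay inside one 4-bit slice
OFF_CHIP_HOP_NS = 15    # assumed extra delay each time the carry crosses chips

def ripple_carry_delay(slices: int) -> int:
    """Worst-case carry delay (ns) for `slices` cascaded 4-bit ALU slices."""
    # Each slice contributes its internal carry delay; each boundary
    # between adjacent slices adds an off-chip hop.
    return slices * ON_CHIP_CARRY_NS + (slices - 1) * OFF_CHIP_HOP_NS

# A 16-bit ALU built from four 4-bit slices pays for three off-chip hops:
multi_chip = ripple_carry_delay(4)       # 4*10 + 3*15 = 85 ns
# A single-chip 16-bit ALU keeps the whole carry chain on-die:
single_chip = 4 * ON_CHIP_CARRY_NS      # 40 ns

print(multi_chip, single_chip)
```

Real designs mitigated this with carry-lookahead parts, but the boundary cost never disappears entirely, which is the fundamental limit the paragraph describes.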
This is essentially the same thing that happened with the array processing business. They were cabinet sized machines that cost $200,000 and up, performing 100 MFLOPS (in the case of the machines I worked on). Within 10 years, this technology was available in a chip from Intel. Maybe not 100 MFLOPS, but a significant number. Slave a few together and you now have a $10,000 machine with more performance. The company is now out of business.