So, is it a 1/13/19/39 bit machine?
A bit like trying to pin down the model year of Johnny Cash’s car that he brought home “one piece at a time”.
Note that the Elliott had 39 bit data words and accumulator primarily because it was designed for floating point, with a 30 bit two's-complement mantissa and a 9 bit exponent, although it could also do integer arithmetic at that size.
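For illustration only, here's a sketch of unpacking a 39-bit word into those two fields. The field order (mantissa in the high bits, exponent in the low bits) and the absence of an exponent bias are assumptions for the sketch, not the documented Elliott layout:

```python
def unpack39(word):
    """Split a 39-bit word into a 30-bit two's-complement mantissa
    and a 9-bit exponent (field order and bias are assumed here)."""
    assert 0 <= word < (1 << 39)
    mantissa = word >> 9           # top 30 bits (assumed position)
    exponent = word & 0x1FF        # low 9 bits (assumed position)
    if mantissa >= (1 << 29):      # sign-extend the two's-complement field
        mantissa -= (1 << 30)
    return mantissa, exponent

# A word whose mantissa field is 1 and exponent field is 3:
m, e = unpack39((1 << 9) | 3)
print(m, e)   # 1 3
```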
Back when the Mac II (68020) was new, I often made use of the 80 bit FPU's ability to do 64 bit integer arithmetic, but that didn't make it an 80 bit (or 64 bit) computer.
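The same trick works at smaller scale with ordinary doubles, which is easy to demonstrate: a 53-bit significand holds integers exactly up to 2**53, just as an 80-bit extended format's 64-bit significand can cover 64-bit integers:

```python
# Python floats are IEEE-754 doubles with a 53-bit significand, so they
# represent integers exactly up to 2**53 -- the scaled-down analogue of
# using an 80-bit extended format for 64-bit integer arithmetic.
limit = 2**53
assert float(limit - 1) == limit - 1      # exactly representable
assert float(limit) + 1 == float(limit)   # one past the limit: precision lost
```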
It wasn't primarily a floating point machine. That would have been remarkable in an era when hardware limitations led languages to offer fixed point arithmetic types and operations. I can't quickly spot the instruction timings for floating point operations; if I see Peter Onion on 30th June at TNMoC, I'll ask him.
It could only execute a few hundred floating point instructions per second, but even its integer performance was like molasses.
It was a 2kIPS machine; the cycle time was 576µs.
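The two figures agree, as a quick back-of-the-envelope check shows:

```python
# One instruction per 576 microsecond cycle works out to roughly 2kIPS.
cycle_time_s = 576e-6
ips = 1 / cycle_time_s
print(round(ips))   # 1736 -- i.e. about 2k instructions per second
```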
Even so, they managed to play music on it. I may even have a cassette tape of that somewhere. The "Fetch ALGOL" operation sounded like a broody hen.
After reading the "Bootleg MCS-251, 32-Bit 8051" topic, I have to ask this age-old question again, because my opinion has changed.
....
So my current definition is:
If it can do a 32 bit computation in a single instruction, regardless of the clock cycle count, then it's a 32 bit CPU. But it's all a bit messy, and I'm sure there are exceptions to my definition.
So how do you folks define a 32 bit CPU?
One thing about "bitness" is that it does not really matter in general.
Where I get more controversial: I think the size of memory addresses, the registers that hold them, and address arithmetic (without any bank switching, segments, or page tables) matters more than the width of arithmetic on data. The 80386 and 68030 are not 80 bit just because they can do arithmetic on 80 bit quantities. They are 32 bit because that is the size of their memory addresses.
It is the 8 bit ones that are annoying to run modern languages on, as they can't properly address any reasonably useful amount of memory. They were meant to run hand-coded assembly; yes, you can run C on them, but the C code has to be written with the CPU's limitations in mind. Pointers don't work properly any more, math with large numbers is slow and loves to truncate bits, strings are a pain with memory, etc.
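The "slow and loves to truncate bits" point can be sketched in a few lines: an 8-bit CPU has to build a 16-bit add out of two 8-bit adds, chaining the carry by hand (the function below is an illustration, not any particular instruction set):

```python
def add16_on_8bit(a, b):
    """Add two 16-bit values using only 8-bit operations, the way an
    8-bit CPU chains an ADD with an ADD-with-carry."""
    lo = (a & 0xFF) + (b & 0xFF)             # 8-bit add of the low bytes
    carry = lo >> 8                          # carry flag out of the low add
    hi = (a >> 8) + (b >> 8) + carry         # add-with-carry of the high bytes
    return ((hi & 0xFF) << 8) | (lo & 0xFF)  # result truncated to 16 bits

print(hex(add16_on_8bit(0x12FF, 0x0001)))   # 0x1300
print(hex(add16_on_8bit(0xFFFF, 0x0001)))   # 0x0 -- wraps; the carry out is lost
```

Every extra byte of operand width costs another add-with-carry, which is why wide math is slow and anything past the top byte silently falls off.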