Author Topic: MCU with FPGA vs. SoC FPGA  (Read 30642 times)


Offline coppice

  • Super Contributor
  • ***
  • Posts: 9566
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #125 on: July 15, 2023, 04:05:19 pm »
Yep, bit-slice was used in many systems, for a very short time, virtually all of them low volume, very high priced.  Calling that "big" is a misuse of the word "big". 
So you think DEC + Data General + most other mini-computer makers added up to a small business? Just how big is your threshold for big?

How long did they use any of the bit-slice designs? 

I don't want to argue with you.  Believe what you wish.
Early 70s to early 80s. If you look at the number of generations things like TI's bit slice chips went through, they were obviously selling, with customers demanding better follow ons.

None of that is in contradiction of what I said.  The volumes of bit-slice products were never large, and any given product with potential of high volume was redesigned with custom chips, or even general purpose CPUs as they ramped up in speed. 

BTW, the AM2900 family was not released until 1975, so "early 70s" is a bit of a stretch.

Bit-slice was always a niche, able to obtain high performance for very high power consumption, and high cost.  It had inherent speed limitations that doomed it from continued improvements.  The entire history of electronics has been as much about economics as it has been technology.  Now, with the huge market for mobile products, it's as much about power.
You seem very confused. The subject was bit-slice chips and now you talk about the Am2900 family. The Am2900 series appeared quite late in the day. By its launch numerous minicomputers were already using things like the TI TTL or Motorola ECL bit-slice families. The Intel 3000 family was from 1973, but I'm not sure that ever got into any high volume minicomputers. All the places I saw it were niche things, like defence projects. Various companies, especially ones with good internal silicon processes (e.g. DEC) or close ties with a silicon vendor who regularly produced custom parts for them, had in-house bit-slice chip sets that you won't find much information about now.

DEC's first move in single chip processors was the Western Digital/DEC LSI-11 in 1976, but it struggled for several years. You saw a lot of them around in 81 or 82, but very few before that. Of course bit slice didn't last for decades. It was big for just one decade. However, one decade is a long time in electronics. In the early 70s you'd find a bit-slice-based, vector-scanning arcade machine in every pub, but you'd probably never see a microcomputer of any kind. By the end of the 70s things like the Apple ][ were all over the place. By the end of the 80s the Apple ][ was also dead.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9940
  • Country: us
Re: MCU with FPGA vs. SoC FPGA
« Reply #126 on: July 15, 2023, 04:54:20 pm »
I like the concept of microcoding processors - perhaps more than actually writing the code.  The AMD 2900 was terrific for this and at least one early adopter used them to create disk drive controllers in the early days of hard drives for PCs.  The SCSI interface comes to mind.

What might be fun is to design FPGA components to match the original 2900 series devices and then create a project using the components and a chunk of BlockRAM to hold the microcode.

The LC3 project is intended to use microcoding although I built it using a conventional FPGA style FSM.  A microcode worksheet is provided.   At least one version is missing a signal.  The newer LC3b with byte addressing may be a more popular project.

https://people.cs.georgetown.edu/~squier/Teaching/HardwareFundamentals/LC3-trunk/docs/LC3-uArch-PPappendC.pdf
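As a rough illustration of the idea (in Python rather than VHDL, with an invented four-word program; the field names and operations are hypothetical, not taken from the real Am2900 parts), a microcoded machine is just a wide control store plus a next-address field:

```python
# Toy model of a microcoded sequencer: a "BlockRAM" holds wide microcode
# words, and each word both controls the datapath and selects the next
# micro-address. Everything here is a made-up, minimal example.

# Each microword: (alu_op, branch_target or None).
# alu_op acts on an accumulator; branch_target overrides upc + 1.
MICROCODE = [
    ("load_5", None),   # 0: acc <- 5
    ("add_3",  None),   # 1: acc <- acc + 3
    ("dec",    3),      # 2: acc <- acc - 1, then jump to word 3
    ("halt",   None),   # 3: stop
]

def run(microcode):
    """Step through the microcode until a 'halt' word is reached."""
    upc, acc = 0, 0
    while True:
        op, target = microcode[upc]
        if op == "load_5":
            acc = 5
        elif op == "add_3":
            acc += 3
        elif op == "dec":
            acc -= 1
        elif op == "halt":
            return acc
        # Next micro-address: explicit branch field if present, else upc + 1.
        upc = target if target is not None else upc + 1

print(run(MICROCODE))  # 5 + 3 - 1 = 7
```

In an FPGA rendition, MICROCODE would live in BlockRAM and the `if/elif` chain would be the control decode feeding the 2901-style datapath.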
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #127 on: July 15, 2023, 05:32:58 pm »
The LC3 project is intended to use microcoding

When you design a CPU whose micro-instructions are stored in permanent memory inside the CPU itself, you are not really talking about RISC.

That's why LC3 is deprecated in modern CPU textbooks: it is supposed to show how to implement a pure RISC CPU (like MIPS or RISC-V), and trying to implement a RISC-ish CPU with a microcode approach not only wastes BRAM in the FPGA, it makes a non-educational mess.

LC3 made (note the past tense) sense in university courses. As did MIC1.

(deprecated) LC3 ----> registers-load/store CPU, RISC approach
(still used) MIC1 ----> stack-CPU, micro/macro code approach
(deprecated) MIPS R2K ----> registers-load/store CPU, pure-RISC approach
(currently in use) RISC-V ----> registers-load/store CPU, pure-RISC approach

both MIPS R2K and RISC-V also offer the possibility to study the difference between "multi-cycle" and "pipelined", while still being able to run "serious" programs (like a scheduler or a minimal kernel) at the end of the HDL implementation!

LC3 ... is a toy; the purpose is only to implement something simple that takes students just 3 weeks of university laboratory to complete, so they can finally run "hEllo world" at the end.

No one would use either LC3 or MIC1 for anything other than a university exercise, as the ISA of both LC3 and MIC1 is too limited for serious, useful tasks, so why do you continuously bother people here with your "LC3"?

all you do is repeat the same things and the same links like a jammed turntable, and you do nothing, absolutely nothing, to try to understand the ISA or to make a critical analysis of it, even when people here have already pointed this out to you.

why insist on promoting it in every single discussion, and why not mention RISC-V instead?
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 20770
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: MCU with FPGA vs. SoC FPGA
« Reply #128 on: July 15, 2023, 05:44:23 pm »
Bit slice was always, in essence, a research tool.  While you could build systems with it, IC technology was advancing so fast, the bit slice would always be overrun quickly.  I guess there were a few designs that never made it to high volume production, where the bit slice was the right choice.

I didn't realise these graphics terminals, arcade games (and computers) were research tools:
PDP-11/23, PDP-11/34, and PDP-11/44 floating-point option, DEC VAX 11/730
Tektronix 4052, Pixar Image Computer, Ferranti Argus 700,
Atari's vector graphics arcade machines
and many others
https://en.m.wikipedia.org/wiki/AMD_Am2900#Computers_made_with_Am2900-family_chips
The 11/730 was the biggest selling mini-computer of all time, and it was awful. I used to use them. I was shocked when I found how well they sold.

The alternatives weren't wonderful.
A modestly configured 11/730 was 100k pounds. There were a lot of things you could buy for that much which performed so much better. We used them because of software and hardware issues that locked us into "needing a VAX". VMS was the problem. It was a dog on most VAX machines because of its weird file system, which required a super complex disc controller to recover some of the performance the design cost you.

The ecosystem has always been more important than the processor.

I recognised that while an undergrad, and mentioned it to an interviewer on the milkround. They were impressed that I looked beyond whether an 8080/6800/1802 was best :)

Beyond that, I'll mention the Intel blue box, the IBM 360 etc, the x86....

One of the reasons the XMOS devices have impressed me is their hardware+software+toolset. The processor is the least important aspect, and I believe they will replace their processor ISA with Risc-V.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9566
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #129 on: July 15, 2023, 05:54:34 pm »
A modestly configured 11/730 was 100k pounds. There were a lot of things you could buy for that much which performed so much better. We used them because of software and hardware issues that locked us into "needing a VAX". VMS was the problem. It was a dog on most VAX machines because of its weird file system, which required a super complex disc controller to recover some of the performance the design cost you.
The ecosystem has always been more important than the processor.
In our case it wasn't the general ecosystem. There were plenty of options which offered a decent ecosystem. The VAXes were there to control systems which only had interfaces to VAXes. Our only option was how big a VAX to choose, and the system vendor just warned us not to choose the very smallest VAXes as they were too slow to keep up with even controlling something.

I recognised that while an undergrad, and mentioned it to an interviewer on the milkround. They were impressed that I looked beyond whether an 8080/6800/1802 was best :)
They would have been even more impressed if you'd answered "I'd use the one with the highest chance of actually being available when we need them". We must have had 30 or more EVMs for the various MPUs in the 70s, and most of them didn't survive long enough to be available when a product was finally developed. :)
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9566
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #130 on: July 15, 2023, 06:21:13 pm »
I did a bit of reading and found the VAX-11/730 and VAX-11/725 (same CPU) were the only bit-slice renditions of the VAX line.
As far as I know all the earlier VAXes were bit-slice based designs. The 11/725 and 11/730 were probably the only ones using the Am2900 family. The original 11/780 must have been started before the Am2900 family was launched. The 11/780 used TI 74 family parts, as did many of the PDP-11s, starting with the TI 74181 in 1970 and using more of the 74 family as it was fleshed out and improved with S and AS parts. Eventually DEC expanded the low end with the MicroVAX single chip processor, and the high end with the 8000 series based on ECL cell arrays.



« Last Edit: July 15, 2023, 06:26:51 pm by coppice »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9940
  • Country: us
Re: MCU with FPGA vs. SoC FPGA
« Reply #131 on: July 15, 2023, 06:40:29 pm »
The LC3 project is intended to use microcoding

When you design a CPU whose micro-instructions are stored in permanent memory inside the CPU itself, you are not really talking about RISC.

If a CPU is created with specific features for the application, RISC itself may not be a goal.
Quote

That's why LC3 is deprecated in modern CPU textbooks: it is supposed to show how to implement a pure RISC CPU (like MIPS or RISC-V), and trying to implement a RISC-ish CPU with a microcode approach not only wastes BRAM in the FPGA, it makes a non-educational mess.

None of which is important for domain specific CPUs.  There is no point in implementing an ISA that swamps the needs of the project.
Quote

LC3 ... is a toy; the purpose is only to implement something simple that takes students just 3 weeks of university laboratory to complete, so they can finally run "hEllo world" at the end.

No one would use either LC3 or MIC1 for anything other than a university exercise, as the ISA of both LC3 and MIC1 is too limited for serious, useful tasks, so why do you continuously bother people here with your "LC3"?
Because it is a simple fully documented CPU capable of handling many tasks.  The key being simple.  An undergrad student can easily implement it in a couple of days of diligent effort.  A week at the absolute outside including the time to learn VHDL.

For a first project it is kind of nice.  Multicore ARM processors will need a separate course although I have a book that goes into it.  MIPS is the subject of another book by the same authors.  Pipelining is fully discussed although code isn't provided for that case.  Code is provided for the non-pipelined case.

I can't get my head around trying to implement a pipelined processor (RISC or ARM) as a first project.
Quote

why insist on promoting it in every single discussion, and why not mention RISC-V instead?
It's a fully documented CPU with adequate features for many purposes and odd-ball peripherals can be easily added.

I don't mention RISC-V because I don't know anything about it.  I still wouldn't recommend it to the newcomer still struggling to grasp FSMs.
 

Offline gnuarm

  • Super Contributor
  • ***
  • Posts: 2247
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #132 on: July 15, 2023, 07:53:36 pm »
Yep, bit-slice was used in many systems, for a very short time, virtually all of them low volume, very high priced.  Calling that "big" is a misuse of the word "big". 
So you think DEC + Data General + most other mini-computer makers added up to a small business? Just how big is your threshold for big?

How long did they use any of the bit-slice designs? 

I don't want to argue with you.  Believe what you wish.
Early 70s to early 80s. If you look at the number of generations things like TI's bit slice chips went through, they were obviously selling, with customers demanding better follow ons.

None of that is in contradiction of what I said.  The volumes of bit-slice products were never large, and any given product with potential of high volume was redesigned with custom chips, or even general purpose CPUs as they ramped up in speed. 

BTW, the AM2900 family was not released until 1975, so "early 70s" is a bit of a stretch.

Bit-slice was always a niche, able to obtain high performance for very high power consumption, and high cost.  It had inherent speed limitations that doomed it from continued improvements.  The entire history of electronics has been as much about economics as it has been technology.  Now, with the huge market for mobile products, it's as much about power.
You seem very confused. The subject was bit-slice chips and now you talk about the Am2900 family. The Am2900 series appeared quite late in the day. By its launch numerous minicomputers were already using things like the TI TTL or Motorola ECL bit-slice families. The Intel 3000 family was from 1973, but I'm not sure that ever got into any high volume minicomputers. All the places I saw it were niche things, like defence projects. Various companies, especially ones with good internal silicon processes (e.g. DEC) or close ties with a silicon vendor who regularly produced custom parts for them, had in-house bit-slice chip sets that you won't find much information about now.

DEC's first move in single chip processors was the Western Digital/DEC LSI-11 in 1976, but it struggled for several years. You saw a lot of them around in 81 or 82, but very few before that. Of course bit slice didn't last for decades. It was big for just one decade. However, one decade is a long time in electronics. In the early 70s you'd find a bit-slice-based, vector-scanning arcade machine in every pub, but you'd probably never see a microcomputer of any kind. By the end of the 70s things like the Apple ][ were all over the place. By the end of the 80s the Apple ][ was also dead.

You can't seem to follow my train of thought.  I said the utility of bit-slice was never for mainstream products, other than for short times or for low volume, high priced devices.  All the examples you've posted support this.  Then you talk about one specific product, the Apple II, being obsolete, as if to indicate this is a universal principle. 

It is universal for any one product.  Of course products become obsolete.  But bit-slice is a design technology, a paradigm.  It has specific limitations, which caused it to become obsolete in just a few years.  In the time it was used, it appeared in a few products which quickly became obsolete as newer technology made bit-slice too expensive, too large, too power hungry and too complex. 

I don't know what you are trying to say.  I've never said bit-slice was not used.  What I am saying is very clear, if you pay attention to what I actually say, rather than what you seem to want to hear. 
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Offline gnuarm

  • Super Contributor
  • ***
  • Posts: 2247
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #133 on: July 15, 2023, 08:03:30 pm »
I like the concept of microcoding processors - perhaps more than actually writing the code.  The AMD 2900 was terrific for this and at least one early adopter used them to create disk drive controllers in the early days of hard drives for PCs.  The SCSI interface comes to mind.

What might be fun is to design FPGA components to match the original 2900 series devices and then create a project using the components and a chunk of BlockRAM to hold the microcode.

The LC3 project is intended to use microcoding although I built it using a conventional FPGA style FSM.  A microcode worksheet is provided.   At least one version is missing a signal.  The newer LC3b with byte addressing may be a more popular project.

https://people.cs.georgetown.edu/~squier/Teaching/HardwareFundamentals/LC3-trunk/docs/LC3-uArch-PPappendC.pdf

While we are free to do anything we want in a hobby project, microcoding is mostly obsolete other than in custom silicon CPU designs.  The large word width consumes memory rapidly making it a questionable choice for use in FPGAs.  But, it can provide speed advantages if combined with a clean architecture. 

I believe Motorola originally wanted to use microcoding in the 68000, but the layout of the program store was longer than the chip!  So they broke it down a bit to create nanocoding, where the microcode would invoke routines in the nanocode, exploiting the repetitive nature of most code.  This is why Forth code tends to be small: it makes subroutines easier and simpler to use, facilitating many small routines that are used in many places in the code.  The resulting hierarchy has less repetition, so it is smaller, often significantly so.
 

Offline gnuarm

  • Super Contributor
  • ***
  • Posts: 2247
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #134 on: July 15, 2023, 08:17:47 pm »
The LC3 project is intended to use microcoding

When you design a CPU whose micro-instructions are stored in permanent memory inside the CPU itself, you are not really talking about RISC.

That's why LC3 is deprecated in modern CPU textbooks: it is supposed to show how to implement a pure RISC CPU (like MIPS or RISC-V), and trying to implement a RISC-ish CPU with a microcode approach not only wastes BRAM in the FPGA, it makes a non-educational mess.

LC3 made (note the past tense) sense in university courses. As did MIC1.

(deprecated) LC3 ----> registers-load/store CPU, RISC approach
(still used) MIC1 ----> stack-CPU, micro/macro code approach
(deprecated) MIPS R2K ----> registers-load/store CPU, pure-RISC approach
(currently in use) RISC-V ----> registers-load/store CPU, pure-RISC approach

both MIPS R2K and RISC-V also offer the possibility to study the difference between "multi-cycle" and "pipelined", while still being able to run "serious" programs (like a scheduler or a minimal kernel) at the end of the HDL implementation!

LC3 ... is a toy; the purpose is only to implement something simple that takes students just 3 weeks of university laboratory to complete, so they can finally run "hEllo world" at the end.

No one would use either LC3 or MIC1 for anything other than a university exercise, as the ISA of both LC3 and MIC1 is too limited for serious, useful tasks, so why do you continuously bother people here with your "LC3"?

all you do is repeat the same things and the same links like a jammed turntable, and you do nothing, absolutely nothing, to try to understand the ISA or to make a critical analysis of it, even when people here have already pointed this out to you.

why insist on promoting it in every single discussion, and why not mention RISC-V instead?

You missed the fact that by having only the memory delay, a microcoded architecture can be faster than a logic based design.  I believe it was Federico Faggin who, after designing the Z8000, said he would never do another CPU design without microcode.  The logic design was just too labor intensive.  I guess CAD was not so big back then.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15444
  • Country: fr
Re: MCU with FPGA vs. SoC FPGA
« Reply #135 on: July 15, 2023, 08:22:47 pm »
Of course the tools were not what they are now.
No one in their right mind would use anything other than an HDL for designing a CPU core these days, except as a hobby project or a silly challenge.

But I've witnessed the logic part of ASICs designed with logic gates rather than Verilog or VHDL as late as the early 2000s at some companies, probably because the engineers didn't master any HDL. And this was simpler stuff than CPUs.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9940
  • Country: us
Re: MCU with FPGA vs. SoC FPGA
« Reply #136 on: July 15, 2023, 08:43:58 pm »

While we are free to do anything we want in a hobby project, microcoding is mostly obsolete other than in custom silicon CPU designs.  The large word width consumes memory rapidly making it a questionable choice for use in FPGAs.  But, it can provide speed advantages if combined with a clean architecture. 


One-hot encoding of the state variables of an FSM can be very wide, but we still use one-hot to avoid decoding the state.  There may be just two variables, though, so it isn't really that much of a burden unless certain flops need to be duplicated by the router.
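A minimal sketch of why one-hot needs no state decode, modelled in Python with a hypothetical 3-state ring counter (in real HDL each tuple element would be its own flip-flop; the example is an illustration, not anyone's actual design):

```python
# With one-hot encoding, "are we in state S1?" is just reading bit s1 —
# no comparator on an encoded state value is needed. Each next-state bit
# is a simple function of individual current-state bits.

def onehot_step(s0, s1, s2):
    """One clock tick of a one-hot ring: S0 -> S1 -> S2 -> S0."""
    return s2, s0, s1  # each next-state bit comes from a single current bit

state = (1, 0, 0)          # reset state: only the S0 flop is set
for _ in range(3):
    state = onehot_step(*state)
print(state)  # back to (1, 0, 0) after three ticks
```

An encoded (binary) FSM would instead need logic to decode the 2-bit state value before each transition term, which is exactly the cost one-hot avoids.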
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #137 on: July 16, 2023, 08:08:55 am »
I believe Motorola originally wanted to use microcoding in the 68000, but the layout of the program store was longer than the chip!  So they broke it down a bit to create nanocoding where the microcode would invoke routines in the nanocode, exploiting the repetitive nature of most code.

Ironically they had to do this to shorten the time to market of 68020.
I was there when it happened, I still have their brochure  :o :o :o
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #138 on: July 16, 2023, 08:30:01 am »
Because it is a simple fully documented CPU capable of handling many tasks
...
I don't mention RISC-V because I don't know anything about it.  I still wouldn't recommend it to the newcomer still struggling to grasp FSMs.

so, you know nothing about it, but wouldn't recommend RISC-V to a newcomer still struggling to grasp FSMs.
And instead you suggest the wrong approach.
Makes sense.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #139 on: July 16, 2023, 09:11:13 am »
There is no point in implementing an ISA that swamps the needs of the project.

Projects? LC3 comes with only a barely adequate C compiler, so even on the toolchain side the thing is next to useless.

LC3 has been around for a while (10 years?) and had time to become deprecated in every university course before anyone tried to extend its ISA. You, who keep blabbering about it: have you ever done that? Obviously not; people just played some simple demos to fill their mouths, and do nothing but repeat their usual PDF that no one cares about ...

RISC-V, instead, was born with the precise intention of giving anyone the possibility not only to study the ISA and its possible implementations { multicycle, pipelined }, but to extend it without any legal repercussions, which was not possible with MIPS


And here we are to better understand what really bothers me:

In 2005, I received a legal letter from SONY for posting on my website an attempt to decode the BIOS of their PlayStation 1 (MIPS R3000/LE), and a few months later a second legal letter from MIPS Inc. for modifying the MIPS R2K ISA as "MIPS++": on my website I suggested implementing the whole PS-ONE in an FPGA, but with more RAM and a couple of extensions to the ISA to make it like what is today known as "MIPS32".

Briefly, the first letter said something like "blablabla, the BIOS is the intellectual property of SONY, you are only authorized to use the console to run genuine gaming software, you are not authorized to post any reverse engineering as it may be used to hack third party software affiliated with SONY, blablabla" (I think they meant affiliated software-houses) ; while the second letter said that you can do whatever you want, but it shouldn't be called "MIPS" or "MIPS-compliant" without contacting their legal department and paying royalties.

With RISC-V there is no "cease and desist or pay a hefty fine", and nobody, ___ nobody ___, will ever get a legal letter for publishing an extension of the ISA, maybe even adding x86-style SIMD.

Perhaps it's a small step forward for open-source hardware; how many people will add something really useful? But it's potentially an insanely great step forward for freedom, so don't throw it in the corner because you assume "LC3" is easier!
« Last Edit: July 16, 2023, 09:13:15 am by DiTBho »
 

Offline gnuarm

  • Super Contributor
  • ***
  • Posts: 2247
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #140 on: July 16, 2023, 10:53:10 am »
I believe Motorola originally wanted to use microcoding in the 68000, but the layout of the program store was longer than the chip!  So they broke it down a bit to create nanocoding where the microcode would invoke routines in the nanocode, exploiting the repetitive nature of most code.

Ironically they had to do this to shorten the time to market of 68020.
I was there when it happened, I still have their brochure  :o :o :o

I don't know why you say "ironically".  Once they developed microcode combined with nanocode, I would expect them to continue to use that to minimize the die area.  Die area is a very important part of chip cost and so profit.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #141 on: July 16, 2023, 03:47:07 pm »
I don't know why you say "ironically".  Once they developed microcode combined with nanocode, I would expect them to continue to use that to minimize the die area.  Die area is a very important part of chip cost and so profit.

68020: 1984, ~190,000 transistors
68030: 1986, ~273,000 transistors

The 68030 is essentially a 68020 with a memory management unit and instruction and data caches of 256 bytes each. The main idea presented in my MC68020 brochure shows a traditional microcode memory divided into two parts:
  • microcode part, controls the microaddress sequencer
  • nanocode part, controls the execution unit
The tricky bit with the '020 was that the nanocode part is stored in a nanoROM whose address decoder doesn't fully decode the address. That is, different microcode addresses will select the same row in the nanocode ROM, which is quite useful when different microcode instructions want to send identical control signals to the execution unit: it saves precious silicon area whenever there is a lot of redundancy in the nanocode ROM to be optimized away.

But they didn't care too much about optimizing away redundancy in the nanocode ROM; this was accepted to save development time and reduce the chip's time to market against the competition!
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #142 on: July 16, 2023, 04:29:56 pm »
e.g. the 68000 has 544 x 17-bit microcode words which dispatch to 366 x 68-bit nanocode words. It doesn't support the 2 bits for scale, so they are fixed at bx00; this addressing possibility was left open during the MACSS/68k design and was actually supported from the 68020 up. But on the 68020 the redundancy of this extra EA mode was not optimized in the nanocode ROM; it was finally fixed in the 68030.

The main disadvantage of a microcoded circuit lies mainly in its generality. When you don't find and optimize the repeating patterns, you waste silicon. This matters especially because the 68k ISA is orthogonal: that EA extension is used by many CPU instructions, and if it's not optimized, the microcode is repeated at the expense of more transistors and a bigger nanoROM, hence a bigger circuit.

They didn't have enough time to optimize the nanoROM in the 68020; they did it some years later in the 68030, and it was vastly better.
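As a toy model of that saving (the control vectors and counts below are invented for illustration, not the real 68020 ROM contents), factoring duplicate microword control vectors into a shared nanoROM looks like this:

```python
# Many microwords want to send identical control vectors to the execution
# unit, so the wide vectors are stored once in a "nanoROM" and each
# microword keeps only a short index into it. Toy data, toy widths.

micro_control = [
    (1, 0, 1, 1),  # several instructions all drive the same datapath signals
    (1, 0, 1, 1),
    (0, 1, 0, 0),
    (1, 0, 1, 1),
    (0, 1, 0, 0),
    (0, 0, 0, 1),
]

def factor(rows):
    """Replace duplicate control vectors with indices into a shared nanoROM."""
    nanorom, index = [], []
    for row in rows:
        if row not in nanorom:
            nanorom.append(row)
        index.append(nanorom.index(row))
    return nanorom, index

nanorom, microcode = factor(micro_control)
print(len(micro_control), len(nanorom))  # 6 wide microwords, only 3 nanowords
```

The area win is exactly this deduplication; skipping it (as on the '020) means every repeated vector costs its full width in ROM again.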
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9940
  • Country: us
Re: MCU with FPGA vs. SoC FPGA
« Reply #143 on: July 16, 2023, 05:20:35 pm »
In terms of RISC-V, is there a hardware diagram similar to that of LC3 where each major block is shown along with the inputs, outputs and control signals?

Is there a state diagram for a prototypical design variant?  Something that can be reduced to HDL with little effort?

Is there sufficient documentation that a first semester student can get the device to work?

I haven't researched RISC-V enough to know the answers to any of those questions but I also haven't stumbled over documentation at the level discussed above.

Copy and paste from somebody else's design doesn't count.  Otherwise why not use the CPU designs provided by the FPGA vendors?  They're pretty well understood.  How about the cores at OpenCores.org?  The T80 core works well for CP/M and various arcade games based on the original Z80 - like PacMan.  It's pretty flexible in terms of adding peripherals.

There are a bunch of RISC-V boards at Amazon.  Should be easy to get started.  Some will run Linux...

« Last Edit: July 16, 2023, 05:22:24 pm by rstofer »
 

Offline gnuarm

  • Super Contributor
  • ***
  • Posts: 2247
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #144 on: July 16, 2023, 05:39:14 pm »
I don't know why you say "ironically".  Once they developed microcode combined with nanocode, I would expect them to continue to use that to minimize the die area.  Die area is a very important part of chip cost and so profit.

68020: 1984, ~190,000 transistors
68030: 1986, ~273,000 transistors

The 68030 is essentially a 68020 with a memory management unit and instruction and data caches of 256 bytes each. The main idea presented in my MC68020 brochure shows a traditional microcode memory divided into two parts:
  • microcode part, controls the microaddress sequencer
  • nanocode part, controls the execution unit
The tricky bit with 020 was that the nanocode part is stored in a nanoROM which has an address decoder which doesn’t fully decode the address. That is, different microcode addresses will result in the same row being addressed in the nanocode ROM which is quite useful when different microcode instructions want to send identical control signals to the execution unit, as this allows precious silicon area to be saved if there is a lot of redundancy in the nanocode ROM before the optimization is performed.
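As a toy illustration (nothing like Motorola's actual design tooling), the row-sharing idea amounts to deduplicating identical control words, so that several microcode addresses map onto one nanoROM row:

```python
# Toy sketch: many microinstructions emit identical control words for the
# execution unit, so the nanoROM can store each distinct control word once
# and let several microcode addresses decode to the same row.

def compress_nanorom(control_words):
    """Return (unique_rows, row_index_per_microaddress)."""
    rows = []      # deduplicated nanoROM contents
    row_of = {}    # control word -> row number
    mapping = []   # microcode address -> nanoROM row
    for word in control_words:
        if word not in row_of:
            row_of[word] = len(rows)
            rows.append(word)
        mapping.append(row_of[word])
    return rows, mapping

# Hypothetical 8-bit control words; real 68020 nanowords were much wider.
microcode = [0b10110001, 0b00001111, 0b10110001, 0b10110001, 0b00001111]
rows, mapping = compress_nanorom(microcode)
print(len(rows), mapping)   # 2 [0, 1, 0, 0, 1]
```

Here five microcode addresses need only two nanoROM rows; skipping this optimization (as described above for the 020) means storing all five words verbatim.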

But they didn't care too much about redundancy in the nanocode ROM, and this was done to save development time, to reduce the time-to-market of the chip against competitors!

I'm not following your logic.  Which of the 68000, 68010, 68020 and 68030 did not use nanocoding?  Are you trying to say they continued to use nanocoding in the '20 and '30 to save development time? 
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9566
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #145 on: July 16, 2023, 07:01:23 pm »
68020: 1984, ~190,000 transistors
68030: 1986, ~273,000 transistors
68040: 1990, ~1,200,000 transistors

Quite a jump, with a very different implementation. The first pass of the 68040 was brilliant, but mismanagement meant they struggled to crank it to more than 40MHz, while the 80486 went from 25MHz to 33MHz to 66MHz to 100MHz, and greatly outperformed it.
 
The following users thanked this post: DiTBho

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #146 on: July 16, 2023, 07:35:28 pm »
I'm not following your logic

I only told you that, when not optimized, microcoding can contribute to wasted area, and I gave you a real example, as I have the full documentation for it.

Which of the 68000, 68010, 68020 and 68030 did not use nanocoding?  Are you trying to say they continued to use nanocoding in the '20 and '30 to save development time?

I think I clearly pointed you to the 68020 as an example, where non-optimization of the nanoROM was accepted, at the expense of wasted silicon area, to reduce time-to-market because the Motorola guys had competitors breathing down their necks.

That's all! And it means only one thing: although it happened only with the 68020 and with no other CPU of the 68k family, it's not theoretical; it can happen in a company, and the reason is "strategic business".
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #147 on: July 16, 2023, 07:46:20 pm »
RISC-V

In terms of feasibility, several members of this forum managed to implement their version of RISC-V in HDL.
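The feasibility point is easy to see from how little machinery the first step of a RISC-V core needs. As a sketch (in Python rather than HDL, purely to show the field layout from the RISC-V unprivileged spec), an RV32I I-type decode is just a handful of bit-field extractions:

```python
# Minimal RV32I I-type instruction decode. Field positions come from the
# RISC-V unprivileged ISA spec; the rest is a toy, not a real core.

def decode_itype(insn):
    """Split a 32-bit RV32I I-type instruction into its fields."""
    opcode = insn & 0x7F
    rd     = (insn >> 7)  & 0x1F
    funct3 = (insn >> 12) & 0x07
    rs1    = (insn >> 15) & 0x1F
    imm    = (insn >> 20) & 0xFFF
    if imm & 0x800:            # sign-extend the 12-bit immediate
        imm -= 0x1000
    return {"opcode": opcode, "rd": rd, "funct3": funct3,
            "rs1": rs1, "imm": imm}

# "addi x1, x0, 5" assembles to 0x00500093
print(decode_itype(0x00500093))
# {'opcode': 19, 'rd': 1, 'funct3': 0, 'rs1': 0, 'imm': 5}
```

The same shift-and-mask structure translates almost line-for-line into HDL, which is why small RV32I cores are a tractable student project.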

The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 4247
  • Country: gb
Re: MCU with FPGA vs. SoC FPGA
« Reply #148 on: July 16, 2023, 08:06:52 pm »
68040: 1990 ~1,200,000 transistors.

Quite a jump with a very different implementation. The first pass of the 68040 was brilliant, but mismanagement meant they struggled to crank it to more than 40MHz, while the 80486 went 25MHz to 33MHz to 66MHz to 100MHz, and greatly outperformed it.

I never worked with the 68040; the closest I came to touching an 040 was when I replaced the 68LC040 CPU in my Apple LC475 with a 68060 via a Smartsocket plus complete ROM-hacking, as the 060 lacks some instructions that need to be emulated in software. And even though it never throws an unimplemented-opcode exception, I found that the floating point unit of a FULL 68060 (with MMU and FPU) is several orders of magnitude slower than the FPU of a Pentium 1!

By contrast, the integer unit of a 68060 @ 100MHz is 3x faster than a Pentium 1 @ 100MHz.

That's why the CyberStorm060 (a CPU accelerator for the Amiga 2000, 3000 and 4000) was mostly used for massive fixed-point rather than floating-point processing, and why my customers' VME industrial embroidery machine controllers, based on a pair of 060 CPUs @ 100MHz in SMP configuration, use a pair of SHARC DSP units, attached to a crossbar matrix, for floating point calculations.
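For context, "fixed-point" here just means scaled-integer arithmetic that a fast integer unit can handle without touching the FPU. A minimal Q16.16 sketch (my own toy example, not anything from the CyberStorm or the embroidery controllers):

```python
# Fixed-point is plain integer arithmetic on scaled values: in Q16.16 a
# real number x is stored as round(x * 2**16). Multiplication produces a
# double-width product, then drops the extra fractional bits.

FRAC = 16  # Q16.16 format: 16 integer bits, 16 fractional bits

def to_fix(x):
    return int(round(x * (1 << FRAC)))

def from_fix(f):
    return f / (1 << FRAC)

def fix_mul(a, b):
    # 32x32 -> 64-bit product, then shift back to Q16.16
    return (a * b) >> FRAC

a, b = to_fix(1.5), to_fix(2.25)
print(from_fix(fix_mul(a, b)))   # 3.375
```

On a chip like the 060, where the integer pipeline is strong and the FPU is not, keeping the hot loops in this representation is the obvious win.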

I vaguely recall that the problem was with the implementation - Intel and AMD were heavily using pipelined FPUs in their 5th-generation 32-bit x86 CPUs, while Motorola must have reused a non-pipelined floating point unit for the 060 ...

... but I have no idea why  :-//
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline gnuarm

  • Super Contributor
  • ***
  • Posts: 2247
  • Country: pr
Re: MCU with FPGA vs. SoC FPGA
« Reply #149 on: July 16, 2023, 08:23:08 pm »
I'm not following your logic

I only told you that, when not optimized, microcoding can contribute to wasted area, and I gave you a real example, as I have the full documentation for it.

Which of the 68000, 68010, 68020 and 68030 did not use nanocoding?  Are you trying to say they continued to use nanocoding in the '20 and '30 to save development time?

I think I clearly pointed you to the 68020 as an example, where non-optimization of the nanoROM was accepted, at the expense of wasted silicon area, to reduce time-to-market because the Motorola guys had competitors breathing down their necks.

That's all! And it means only one thing: although it happened only with the 68020 and with no other CPU of the 68k family, it's not theoretical; it can happen in a company, and the reason is "strategic business".

OK, thanks for the update.
Rick C.  --  Puerto Rico is not a country... It's part of the USA
  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209
 

