Author Topic: Why faster MCUs always get large pin packages!  (Read 10836 times)


Offline rsjsouza

  • Super Contributor
  • ***
  • Posts: 6072
  • Country: us
  • Eternally curious
    • Vbe - vídeo blog eletrônico
Re: Why faster MCUs always get large pin packages!
« Reply #50 on: July 10, 2017, 05:08:51 pm »
Dave explored this subject a long time ago, but with FPGAs. He wanted to get a very large FPGA in a smaller package.

Short answer: there is no large-scale market for such a thing.

Long answer:
In my experience, the vast majority of applications for fast cores focus on the ability to perform multiple operations in "parallel" (truly parallel if multi-core, fast-sequenced if single-core) while interacting with the real world in multiple ways - either via a plethora of serial interfaces, via very fast dedicated buses (USB2/3, multi-lane SRIO, etc.), or via external devices such as RAM. That, tied to the additional power and GND pins required and the need to get thermal energy out of the device fast enough that it doesn't melt, increases the pin count considerably.
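To put rough numbers on that, here is a purely illustrative pin-budget tally - every count below is an assumption made for the sake of the argument, not any real device's pinout:

Code: [Select]
/* Purely illustrative pin-budget tally for a hypothetical fast MCU.
 * Every number here is an assumption for the sake of the argument,
 * not any specific device's pinout. */
#include <stdio.h>

int main(void)
{
    int ext_ram   = 60;  /* 16-bit external RAM bus: data, address, strobes, clocks */
    int fast_bus  = 12;  /* e.g. USB2/3 or a couple of SERDES lanes plus reference clocks */
    int serial_io = 40;  /* SPI/I2C/UART/I2S/GPIO for talking to the real world */
    int misc      = 10;  /* reset, boot straps, debug, crystal */
    int signal    = ext_ram + fast_bus + serial_io + misc;

    /* Fast, wide buses typically want roughly one power/ground pin for
     * every few switching I/Os, plus dedicated core-supply pins. */
    int pwr_gnd   = signal / 4 + 16;

    printf("signal pins   : %d\n", signal);
    printf("power/GND pins: %d\n", pwr_gnd);
    printf("total         : %d\n", signal + pwr_gnd);  /* well past 150 pins already */
    return 0;
}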

You could argue that you can use a PowerPAD-style exposed pad for thermal dissipation, reduce the number of peripherals and maybe even remove the external RAM controller. Thermally speaking this would offset the absence of pins on the package; however, how do you decide which peripherals to remove? Would your application require only I2S (for audio) or fast SPI? What about the customer that also needs some I2C, or a small 16-bit memory interface for parallel ADCs? Spin another device variant?

If you want to categorize the device as an "MCU", then it would be almost mandatory to add non-volatile memory inside the device, which increases the cost considerably (not to mention that the higher thermal profile inside the package tends to reduce flash endurance considerably). Running at 1GHz straight from flash is impossible, so you would need a great amount of internal high-speed RAM. But if you remove the external high-speed RAM interface, how much internal RAM would be enough, given that it would have to accommodate data + code?
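To put a number on the "running 1GHz from flash is impossible" point, a back-of-the-envelope sketch (the ~40ns flash access time is just a typical embedded-NOR figure assumed for illustration):

Code: [Select]
/* Back-of-the-envelope: why a 1GHz core cannot execute straight from flash.
 * The ~40ns access time is a typical embedded-NOR figure assumed here for
 * illustration only. */
#include <stdio.h>

int main(void)
{
    double core_clock_hz  = 1e9;     /* 1GHz core */
    double flash_access_s = 40e-9;   /* ~40ns random access (assumed) */

    double cycle_s     = 1.0 / core_clock_hz;             /* 1ns per cycle */
    double wait_states = flash_access_s / cycle_s - 1.0;  /* stalled cycles per fetch */

    printf("core cycle time: %.1f ns\n", cycle_s * 1e9);
    printf("wait states per uncached flash fetch: ~%.0f\n", wait_states);
    /* ~39 wait states per fetch: without big caches or internal RAM to run
     * from, the core spends almost all of its time stalled. */
    return 0;
}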

Therefore, to answer your question as to why there are no 1GHz MCUs: there is considerable risk and cost associated with releasing a family of devices, and nobody will target a niche market (unless there are heaps of money to be made) or a hot market of yore (audio processing, as in your example) where dedicated solutions already exist at a fraction of the cost of a general programmable device.

So yes, you are right: there are no "real" choices, only expensive "forced-upon-you" solutions - the forces are beyond you, me and the manufacturers, and are imposed by "the market". Obviously this can change anytime, provided the money follows - one example is the Allwinner device mentioned above. If it is successful, other manufacturers will follow.
Vbe - vídeo blog eletrônico http://videos.vbeletronico.com

Oh, the "whys" of the datasheets... The information is there not to be an axiomatic truth, but instead each speck of data must be slowly inhaled while carefully performing a deep search inside oneself to find the true metaphysical sense...
 

Offline bson

  • Supporter
  • ****
  • Posts: 2474
  • Country: us
Re: Why faster MCUs always get large pin packages!
« Reply #51 on: July 11, 2017, 09:49:48 am »
I could see lots of interesting applications for a small 1GHz uC... It wouldn't need much memory, maybe 512 bytes, certainly less than a typical L1 cache.  It could probably execute very close to 1G instructions per second out of a very wide NOR flash, where a line is wide enough to encompass the typical branch range.  This would also permit a very simple core without branch prediction or speculative execution, read/write queues, etc.  Just a basic Cortex-M core.  Such a uC could take over a lot of interesting tasks normally done in hardware, for example a reasonably accurate phase detector or a software PLL, or many other handy little glue tasks - like timing generation with very precisely programmable skews.  It could also have very low interrupt latency.  If it has a fast ADC built in it can do even more.  I see no reason why not; of course fast RAM for it will take up quite a bit of die space, but it wouldn't need much.
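For the software PLL case, something along these lines is what I have in mind - just a minimal sketch, where GPIO_IN, GPIO_OUT and cycle_count() are hypothetical placeholders rather than any real device's registers:

Code: [Select]
/* Minimal sketch of a bang-bang "software PLL": steer our output edges so
 * they line up with a reference input.  GPIO_IN, GPIO_OUT and cycle_count()
 * are hypothetical placeholders, not a real device's API. */
#include <stdint.h>

extern volatile uint32_t GPIO_IN;   /* reference signal on bit 0 (hypothetical) */
extern volatile uint32_t GPIO_OUT;  /* generated output on bit 0 (hypothetical) */
extern uint32_t cycle_count(void);  /* free-running CPU cycle counter (hypothetical) */

void soft_pll(uint32_t nominal_period_cycles)
{
    uint32_t period = nominal_period_cycles;
    uint32_t next   = cycle_count() + period;

    for (;;) {
        /* Busy-wait for the next edge time: at 1GHz this gives roughly
         * nanosecond placement granularity for the output edge. */
        while ((int32_t)(cycle_count() - next) < 0)
            ;
        GPIO_OUT ^= 1u;   /* toggle the output */
        next += period;

        /* Bang-bang phase detector: sample the reference right at our edge
         * and nudge the period one cycle at a time to pull into lock. */
        if (GPIO_IN & 1u)
            period--;     /* reference already high -> we are late, shorten */
        else
            period++;     /* reference still low -> we are early, lengthen */
    }
}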
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4559
  • Country: nz
Re: Why faster MCUs always get large pin packages!
« Reply #52 on: July 11, 2017, 12:55:30 pm »
Quote from: bson on July 11, 2017, 09:49:48 am
I could see lots of interesting applications for a small 1GHz uC... It wouldn't need much memory, maybe 512 bytes, certainly less than a typical L1 cache.  It could probably execute very close to 1G instructions per second out of a very wide NOR flash, where a line is wide enough to encompass the typical branch range.  This would also permit a very simple core without branch prediction or speculative execution, read/write queues, etc.  Just a basic Cortex-M core.  Such a uC could take over a lot of interesting tasks normally done in hardware, for example a reasonably accurate phase detector or a software PLL, or many other handy little glue tasks - like timing generation with very precisely programmable skews.  It could also have very low interrupt latency.  If it has a fast ADC built in it can do even more.  I see no reason why not; of course fast RAM for it will take up quite a bit of die space, but it wouldn't need much.

For such, Altera has 3mm*3mm MAX10 FPGAs with 2k LEs and plenty of RAM. You can implement a NiosII and accelerate the key logic in FPGA fabric. It also has a few hard multiplier blocks, and some of its RAM can be used as soft multipliers as well.
Sounds like a cheap choice to me, considering the price tag (2x~10x a common Cortex-M0 MCU) and the small size. As long as you have a way to deal with the 0.4mm pitch BGA, it pretty much crushes MCUs in terms of digital performance. BTW, it comes with flash as well, so no EPCS needed.

Five or six euros for the 2000 LE version. Not bad.
 

