Author Topic: When CPU's are made is each one slightly different?  (Read 8402 times)


Offline BeaminTopic starter

  • Super Contributor
  • ***
  • Posts: 1567
  • Country: us
  • If you think my Boobs are big you should see my ba
When CPU's are made is each one slightly different?
« on: September 05, 2018, 09:00:56 am »
Since they contain up to billions of transistors, each made from a small number of atoms, there must be some that don't come out perfectly due to impurities and other defects. It seems as if, to get all 1 billion right, they would have to reject a huge number of dies. So do they build error correction into each chip, where if a gate in memory is bad the chip can use a different one? Say they know that 0.000001% will be bad but need a minimum of 1,000,000 gates: could they just build each memory cell with extra gates plus error correction, and then reject only the really defective chips?

Or maybe they can get all 1 billion transistors correct almost every time, and that's just amazing. But how much cheaper/faster could they make chips if they could use error correction and afford to have less than perfect chips? Also, at what point did error correction become necessary? The 8502 was laid out by hand, so I imagine that needed 100% to work, but that chip had orders-of-magnitude bigger features on it.
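The spare-gates-plus-error-correction idea in the question can be sketched with a toy single-error-correcting Hamming(7,4) code. This is illustrative only: real chips use much wider codes and spare rows/columns, and the heavy 3-check-bits-per-4-data-bits overhead here shrinks as the protected word gets wider (e.g. 8 check bits per 64 data bits).

```python
# Toy Hamming(7,4): 4 data bits protected by 3 parity bits, so any
# single defective bit cell can be corrected instead of scrapping the die.
def hamming74_encode(d):
    # Codeword layout (1-based positions): p1 p2 d1 p3 d2 d3 d4
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the bad bit, 0 if none
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1         # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[5] ^= 1                          # emulate one stuck/defective bit cell
assert hamming74_decode(code) == data # the chip still reads correct data
```

With a scheme like this, a die with a scattering of single-bit defects still ships, which is exactly the "only reject the really defective chips" idea.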

Isn't this how hard drives are made? They just expect so many bad sectors, and this is sorted out by oversizing the drive and trimming it down to spec during formatting, with each hard drive learning its own platter. Would the same apply to SSDs as the gates become unusable?
 

Offline Halcyon

  • Global Moderator
  • *****
  • Posts: 5880
  • Country: au
Re: When CPU's are made is each one slightly different?
« Reply #1 on: September 05, 2018, 09:26:49 am »
Isn't this how hard drives are made? They just expect so many bad sectors, and this is sorted out by oversizing the drive and trimming it down to spec during formatting, with each hard drive learning its own platter. Would the same apply to SSDs as the gates become unusable?

Kind of. Back in the day of MFM hard disk drives, manufacturers used to actually test drives and record the bad sectors on a list stuck to the top of the drive itself. These were also mapped out during low-level formatting.

Low-level formatting doesn't really exist any more at a consumer software level. It still happens during hard disk manufacturing, but if you do manage to completely wipe a modern spinning-rust drive (for example, by using a degausser), you'll render that drive completely unusable.

SSDs are also usually made with extra capacity, not seen by the host system. Internal wear levelling and error correction in the drive firmware itself will dynamically remap blocks which have become bad into this spare area.

It's all basically designed to increase the serviceable life of the drive. These damaged areas (sometimes entire tracks) can be interesting from a digital forensics point of view: once an area is marked bad and remapped by the controller, it's no longer accessible without special hardware. That means even if you try to overwrite that sector, you are actually overwriting the good (remapped) sector in the spare area, leaving whatever data was there behind in the bad area.
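A minimal sketch of the remapping described above (the class and method names are hypothetical, not any real drive-firmware API):

```python
# The controller transparently redirects a bad logical block to a spare
# area the host never sees. Note the forensic side effect: the old data
# stays behind at the original physical location.
class Drive:
    def __init__(self, visible_blocks, spare_blocks):
        self.data = {}                  # physical block -> payload
        self.remap = {}                 # bad logical block -> spare block
        self.spare = list(range(visible_blocks,
                                visible_blocks + spare_blocks))

    def _phys(self, lba):
        return self.remap.get(lba, lba)

    def mark_bad(self, lba):
        # Firmware relocates the block; physical contents are untouched.
        self.remap[lba] = self.spare.pop(0)

    def write(self, lba, payload):
        self.data[self._phys(lba)] = payload

    def read(self, lba):
        return self.data.get(self._phys(lba))

d = Drive(visible_blocks=100, spare_blocks=4)
d.write(7, b"secret")
d.mark_bad(7)                    # block 7 now maps to spare block 100
d.write(7, b"overwrite")         # lands in the spare area...
assert d.data[7] == b"secret"    # ...old data survives at physical 7
assert d.read(7) == b"overwrite"
```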
 

Offline filssavi

  • Frequent Contributor
  • **
  • Posts: 433
Re: When CPU's are made is each one slightly different?
« Reply #2 on: September 05, 2018, 09:52:56 am »
What you describe is already done extensively by Intel, AMD and the others.

Of course, a full test such as hard drives get is not feasible, since the structure is far too complex and a complete test would take far too much time. Testing and characterisation is, however, done at a higher level.

For example, let's take Intel's lineup for familiarity. It's well known that the low- to mid-end Xeon, i9 and i7 use the same silicon die. They all start out as Xeons: the ECC memory support and the buses for dual-CPU operation are tested, and if either doesn't work the die is downgraded to an i9. Then the cores are tested, and if some of them aren't working correctly the die is downgraded to an i7, and so on.

Another thing to note is that, while we think of them as such, MOSFETs are not digital switches; they are analog transistors. So it is quite unlikely that one isn't switching at all; a far more likely defect is a transistor that can't switch at the maximum designed frequency.

That is why there are dozens of SKUs in Intel's lineup with marginally different frequencies: defective Xeons aren't thrown away, just sold as low-end i7s for less profit.

EDIT: Error correction in general also adds a lot of overhead, so adding ECC logic to the whole design might require 20 to 50% more transistors across the whole CPU, which could otherwise be put to much better use.
« Last Edit: September 05, 2018, 09:56:47 am by filssavi »
 

Offline BeaminTopic starter

  • Super Contributor
  • ***
  • Posts: 1567
  • Country: us
  • If you think my Boobs are big you should see my ba
Re: When CPU's are made is each one slightly different?
« Reply #3 on: September 05, 2018, 10:05:51 am »
Isn't this how hard drives are made? They just expect so many bad sectors, and this is sorted out by oversizing the drive and trimming it down to spec during formatting, with each hard drive learning its own platter. Would the same apply to SSDs as the gates become unusable?

Kind of. Back in the day of MFM hard disk drives, manufacturers used to actually test drives and record the bad sectors on a list stuck to the top of the drive itself. These were also mapped out during low-level formatting.

Low-level formatting doesn't really exist any more at a consumer software level. It still happens during hard disk manufacturing, but if you do manage to completely wipe a modern spinning-rust drive (for example, by using a degausser), you'll render that drive completely unusable.

SSDs are also usually made with extra capacity, not seen by the host system. Internal wear levelling and error correction in the drive firmware itself will dynamically remap blocks which have become bad into this spare area.

It's all basically designed to increase the serviceable life of the drive. These damaged areas (sometimes entire tracks) can be interesting from a digital forensics point of view: once an area is marked bad and remapped by the controller, it's no longer accessible without special hardware. That means even if you try to overwrite that sector, you are actually overwriting the good (remapped) sector in the spare area, leaving whatever data was there behind in the bad area.


In the days of DOS I remember stuff was never really deleted; rather, it put a ? in front of the filename and overwrote the data whenever it felt like it. What does it do today? Is there an option in Windows to hard-delete things? I remember seeing an option to "shred" a file, but I think that may have been a feature of some antivirus software. Or is "hard deleting" only possible with special software and effort on the part of the user?
 

Offline Halcyon

  • Global Moderator
  • *****
  • Posts: 5880
  • Country: au
Re: When CPU's are made is each one slightly different?
« Reply #4 on: September 05, 2018, 10:23:47 am »
In the days of DOS I remember stuff was never really deleted; rather, it put a ? in front of the filename and overwrote the data whenever it felt like it. What does it do today? Is there an option in Windows to hard-delete things? I remember seeing an option to "shred" a file, but I think that may have been a feature of some antivirus software. Or is "hard deleting" only possible with special software and effort on the part of the user?

That was specific to the FAT file system. To mark a file as deleted, it would replace the first byte of the directory entry with the hexadecimal value 0xE5. This meant that the first letter of the file name was lost (which is why running UNDELETE.EXE required you to input the starting letter when recovering a file). All undelete did was look for entries starting with 0xE5 and, provided the data hadn't been overwritten, recover them (i.e. replace the 0xE5 with something else).

"Shredding" files just means overwriting all the clusters taken up by a file with other data (usually 1s, 0s or random garbage). That leaves the file completely obliterated and unrecoverable.

Today's file systems aren't much different: most just remove the entry from the allocation table, which marks those clusters as free to be overwritten. Most operating systems these days come with an option to totally erase a file (as above), but a normal "delete" command usually doesn't invoke this mechanism; erasing a large file takes time, while deleting a file entry doesn't. With whole-disk encryption, it's largely not much of an issue for the average user any more.
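A toy sketch of the two mechanisms. The 0xE5 marker is real FAT behaviour; the function names are made up, and the shredder is a naive illustration that, as noted earlier in the thread, cannot reach sectors a drive has already remapped away:

```python
import os
import secrets

DELETED_MARK = 0xE5

def fat_delete(direntry: bytearray):
    # "Delete" only stamps the directory entry; data clusters are untouched.
    direntry[0] = DELETED_MARK

def fat_undelete(direntry: bytearray, first_letter: bytes):
    # UNDELETE had to ask for the first letter because it was destroyed.
    direntry[0] = first_letter[0]

def shred(path, passes=1):
    # "Shredding" overwrites the file contents themselves, then deletes it.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random garbage
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

entry = bytearray(b"README  TXT")
fat_delete(entry)
assert entry[0] == 0xE5
fat_undelete(entry, b"R")
assert entry == bytearray(b"README  TXT")
```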

Gees, you're taking me back to the good old days :-)
« Last Edit: September 05, 2018, 10:34:13 am by Halcyon »
 
The following users thanked this post: tooki

Offline Berni

  • Super Contributor
  • ***
  • Posts: 5022
  • Country: si
Re: When CPU's are made is each one slightly different?
« Reply #5 on: September 05, 2018, 10:29:27 am »
Yep they do that.

AMD used to make 3-core CPUs back when multicore became mainstream. Seems like an odd choice to make a 3-core part, right? Well, they are actually quad-core CPUs where one of the cores failed quality control, so it was disabled and the chip sold as a 3-core part. Intel probably still does this with high core-count chips (the 18-core-and-up stuff), because the more cores you have, the higher the chance that one of them is bad.

But it's not only cores. The variances in the manufacturing process result in some chips running slightly better than others. All the chips that come off the production line are tested beyond their spec to reveal how good each chip is. Depending on the results, a bad one might be stamped with a 3.2 GHz model number while a good one gets stamped with a 4.4 GHz model number. Additionally, the particularly good ones get a K on the end of the part number to designate them for overclocking (Intel-specific).
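The binning described above amounts to something like this (frequencies and SKU names are invented for illustration, not real Intel bins):

```python
# Each die is tested for its maximum stable clock and stamped with the
# highest SKU it passes; only dies below the lowest bin are scrapped.
SKUS = [
    (4.4, "4.4 GHz K"),   # best dies, overclocking-friendly
    (4.0, "4.0 GHz"),
    (3.6, "3.6 GHz"),
    (3.2, "3.2 GHz"),     # slowest sellable bin
]

def bin_die(max_stable_ghz):
    for freq, sku in SKUS:
        if max_stable_ghz >= freq:
            return sku
    return "reject"

assert bin_die(4.6) == "4.4 GHz K"
assert bin_die(3.3) == "3.2 GHz"
assert bin_die(2.0) == "reject"
```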

GPU vendors do the same thing. Graphics cards have a very high number of processing cores (>1000), so the chance that all of them work is slim. There are also other processing blocks present in multiples, like the ROPs (render output processors), texture units and memory controllers. Chips with a lot of these other blocks dead are sometimes repurposed as the model one step down; for example, the Nvidia GTX 1070 is made from rejected GTX 1080 chips (that actually caused issues at some point, I think).
 

Online PA0PBZ

  • Super Contributor
  • ***
  • Posts: 5188
  • Country: nl
Re: When CPU's are made is each one slightly different?
« Reply #6 on: September 05, 2018, 10:49:42 am »
Back in the day of MFM hard disk drives, manufacturers used to actually test and list bad sectors on a list stuck to the top of the drive itself. These were also mapped during low level formatting.

G=C800:5

Some things you never forget it seems  ;)
Keyboard error: Press F1 to continue.
 

Offline Halcyon

  • Global Moderator
  • *****
  • Posts: 5880
  • Country: au
Re: When CPU's are made is each one slightly different?
« Reply #7 on: September 05, 2018, 10:57:00 am »
Back in the day of MFM hard disk drives, manufacturers used to actually test and list bad sectors on a list stuck to the top of the drive itself. These were also mapped during low level formatting.

G=C800:5

Some things you never forget it seems  ;)

I think that's the only time I actually used the DOS debug command.
 

Online PA0PBZ

  • Super Contributor
  • ***
  • Posts: 5188
  • Country: nl
Re: When CPU's are made is each one slightly different?
« Reply #8 on: September 05, 2018, 11:05:03 am »
D 0:400

To see how many serial and parallel ports are available.

« Last Edit: September 05, 2018, 11:19:35 am by PA0PBZ »
Keyboard error: Press F1 to continue.
 

Offline wraper

  • Supporter
  • ****
  • Posts: 17575
  • Country: lv
Re: When CPU's are made is each one slightly different?
« Reply #9 on: September 05, 2018, 11:17:52 am »
For example let’s take intel lineup for familiarity, it’s well known that low to mid end Xeon, i9 and I7 use the same silicon die. They all start out as Xeon, the ECC memory and busses for dual cpu support are tested, if any of the two don’t work the die is downgraded to I9, now the cores are tested, if some of them aren’t working correctly the die is downgraded to i7 etc
Disabling ECC and some other server-grade features has nothing to do with die quality. The only things disabled due to faults are cache and whole cores (EDIT: I guess hyper-threading as well, due to partially faulty cores). Also, those are often disabled purely for marketing reasons, to supply the necessary amounts of CPUs in certain price ranges. It's like with oscilloscopes: same hardware, but different software-locked features and bandwidth.
Speed grades can be binned by die quality, or fast dies can simply be downgraded to meet market demand.
On the other hand, faults in NAND are a normal thing and are managed by the controller, similar to HDDs.
« Last Edit: September 05, 2018, 11:43:50 am by wraper »
 

Offline amyk

  • Super Contributor
  • ***
  • Posts: 8387
Re: When CPU's are made is each one slightly different?
« Reply #10 on: September 05, 2018, 11:40:57 am »
Some of the Xeons have ECC cache: https://www.realworldtech.com/forum/?threadid=161949&curpostid=161951

I believe all of them have parity, however.
 

Offline BeaminTopic starter

  • Super Contributor
  • ***
  • Posts: 1567
  • Country: us
  • If you think my Boobs are big you should see my ba
Re: When CPU's are made is each one slightly different?
« Reply #11 on: September 05, 2018, 12:18:32 pm »
For example, let's take Intel's lineup for familiarity. It's well known that the low- to mid-end Xeon, i9 and i7 use the same silicon die. They all start out as Xeons: the ECC memory support and the buses for dual-CPU operation are tested, and if either doesn't work the die is downgraded to an i9. Then the cores are tested, and if some of them aren't working correctly the die is downgraded to an i7, and so on.
Disabling ECC and some other server-grade features has nothing to do with die quality. The only things disabled due to faults are cache and whole cores (EDIT: I guess hyper-threading as well, due to partially faulty cores). Also, those are often disabled purely for marketing reasons, to supply the necessary amounts of CPUs in certain price ranges. It's like with oscilloscopes: same hardware, but different software-locked features and bandwidth.
Speed grades can be binned by die quality, or fast dies can simply be downgraded to meet market demand.
On the other hand, faults in NAND are a normal thing and are managed by the controller, similar to HDDs.


That makes sense: make all the dies the same and purposely downgrade some, since they cost the same to make. But how do they tell the chip "don't run as fast as you can" without physically altering it? Does it have some sort of secret memory in it, like a ROM?


I always thought they did the same thing with memory cards: make them all 128 GB and sell them at four different price points/capacities when they first come out (128, 64, 32, 16) by somehow disabling some of the memory, and then, once the company has made enough money or competitors start lowering prices, make a 512 GB part and do the same thing. Seems like the SEC/FTC or some regulatory body wouldn't allow this, but who knows; they don't work for us, they work for the shareholders. It's like how food is priced in this country: since there are only four chains to choose from in each town, they all match each other's prices, but upward instead of downward. One store raises the price of eggs by 10 cents, then a week later they all raise theirs by putting them on "sale": "Was 1.39, now 1.29", when in reality they were 1.19 last week. I've caught Target doing this along with other grocery stores many times. I try NOT to buy things on "sale".
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17086
  • Country: us
  • DavidH
Re: When CPU's are made is each one slightly different?
« Reply #12 on: September 05, 2018, 12:27:43 pm »
Yes. ECC is not a big part of the die. Intel does NOT do whole-chip ECC. If the SRAM cache is F*ed, you get F*ed. Intel ECC only applies to the DRAM controller. Once a signal goes into the L3/L4 interconnect, there's nothing protecting data integrity.

Cache structures are usually protected by ECC, or by just parity if the data can be fetched again, as with the instruction cache.  The reason for this is that the highest-performance SRAM has a soft error rate orders of magnitude higher than DRAM.

Large regular structures like SRAM may have redundant rows to increase yield.  As I recall, one of the reasons the original AMD K6-3 had poor yield was a lack of redundancy in the level 3 SRAM cache.

Quote
A good example is some i3 and Pentium chips with ECC: they are nowhere close to supposed i7 die quality, but they have ECC enabled for a very specific market: NAS.

This has nothing to do with anything except market segmentation.  Intel disables ECC on processors intended for the consumer market, which would otherwise compete with more expensive workstation and server processors.  AMD does not, so I have been using AMD since I retired my Intel Pentium 4 workstation.

That makes sense making all the dies the same and purposely down grading them since its the same price to make. But how do they tell the chip "don't run as fast as you can?" without physically altering it? Does it have some sort of secret memory in it like ROMs?

They do not generally include any programmable-memory-type structures, because that would require a more complex process, but in the recent past jumpers were included as part of the packaging.  I assume they are still using jumpers in one form or another.  The clock multiplier configuration is controlled this way.

Some features may be tied to the support chips.  With perhaps some recent minor exceptions, Intel for instance only allows ECC with workstation south bridges so you must have a workstation board and a workstation processor.

Quote
I always thought they did the same thing with memory cards: Make them all 128G sell them at four different price points/capacity when they first come out 128, 64, 32, 16, but somehow disable some of the memory and when the company made enough money/ competitors start lowering price make a 512g and do the same thing.

That would be silly to do, because the cost of a chip is roughly proportional to its area.  A 128G chip configured as a 16G chip still costs the same as a 128G chip.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 5022
  • Country: si
Re: When CPU's are made is each one slightly different?
« Reply #13 on: September 05, 2018, 12:34:27 pm »
They do that sort of binning for memory chips to some extent.

The process for making flash memory really pushes density to the limits to get the most capacity for a given area of silicon. So you could see reject 128GB chips sold as 64GB ones to make some money back on the lost yield, especially if a particular batch had bad luck. But they certainly wouldn't be disabling 128GB chips to make them look like 16GB. A flash chip with 1/8th the capacity also takes 1/8th the silicon area to create, so they can get about 8 times more chips from the same wafer. Though in practice it's not quite 8 times, since the memory controller takes up a bit of area and the die-cutting process destroys some silicon around each die.
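A back-of-envelope check on the area argument, using a common dies-per-wafer approximation (the wafer and die sizes here are arbitrary examples, not real product figures):

```python
import math

def dies_per_wafer(wafer_d_mm, die_area_mm2):
    # Standard approximation: gross dies by area, minus an edge-loss term
    # for partial dies around the wafer rim.
    r = wafer_d_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

big = dies_per_wafer(300, 80)    # hypothetical large flash die
small = dies_per_wafer(300, 10)  # die with 1/8th the area
assert small > 7 * big           # roughly 8x as many small dies per wafer
```

This is why selling a physically large die as a small-capacity part wastes most of the wafer cost, while disabling a modest fraction of a die (128 GB sold as 64 GB) can still make economic sense.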
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17086
  • Country: us
  • DavidH
Re: When CPU's are made is each one slightly different?
« Reply #14 on: September 05, 2018, 12:43:00 pm »
Some of the Xeons have ECC cache: https://www.realworldtech.com/forum/?threadid=161949&curpostid=161951

I believe all of them have parity, however.

I see. So even though most CPUs don't have cache ECC, they have parity, so if bad things happen they just flush the cache and ask for a new copy from RAM? That sounds very reasonable.

Parity can only be used with a write-through cache, where a copy of the data is always available somewhere else, as with the instruction cache.

A write-back cache must use ECC, unless it has a low enough soft error rate, which is generally not the case.
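The distinction can be sketched like this: parity detects a flipped bit but cannot say which one, so a parity-protected line is only recoverable if a clean copy can be refetched from elsewhere (helper names here are hypothetical):

```python
def parity(word):
    # Even parity over the bits of an integer word.
    return bin(word).count("1") & 1

def read_cache_line(stored_word, stored_parity, refetch):
    if parity(stored_word) != stored_parity:
        # Detected but uncorrectable: discard the line and refetch it
        # from memory -- only possible if memory still holds a good copy
        # (write-through / instruction cache), not for dirty write-back data.
        return refetch()
    return stored_word

word = 0b1011_0010
p = parity(word)
corrupted = word ^ 0b0000_1000    # one soft-error bit flip
assert read_cache_line(corrupted, p, refetch=lambda: word) == word
```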
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9931
  • Country: us
Re: When CPU's are made is each one slightly different?
« Reply #15 on: September 05, 2018, 04:17:26 pm »
There was a time when Control Data computers (6400, 6600) got us to the Moon (July 20, 1969).  The machines didn't even have parity checks much less ECC.

 

Offline SparkyFX

  • Frequent Contributor
  • **
  • Posts: 676
  • Country: de
Re: When CPU's are made is each one slightly different?
« Reply #16 on: September 05, 2018, 04:35:46 pm »
There was a time when Control Data computers (6400, 6600) got us to the Moon (July 20, 1969).  The machines didn't even have parity checks much less ECC.
But redundancy is quite usual in aeronautics; it allows taking 2 out of 3 signals as true. No idea if the Moon lander had several backups in place, though.
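The 2-out-of-3 scheme mentioned above is just a majority vote, which can be done per bit across three redundant channels:

```python
def vote(a, b, c):
    # Bitwise majority: each output bit is whatever at least two of the
    # three inputs agree on, so one failed channel is always outvoted.
    return (a & b) | (a & c) | (b & c)

assert vote(0b1010, 0b1010, 0b0000) == 0b1010  # one dead channel outvoted
assert vote(1, 1, 0) == 1
assert vote(7, 7, 7) == 7
```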
Support your local planet.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 17086
  • Country: us
  • DavidH
Re: When CPU's are made is each one slightly different?
« Reply #17 on: September 05, 2018, 06:05:53 pm »
There was a time when Control Data computers (6400, 6600) got us to the Moon (July 20, 1969).  The machines didn't even have parity checks much less ECC.

It is certainly possible to make reliable systems without ECC; just lower the soft error rate enough.  But this comes at the expense of speed and power.

One of the big advantages of ECC is in addition to correcting errors, it also allows notification that there was an error and where.  Otherwise how would you know short of data corruption detected later?
 

Offline coppercone2

  • Super Contributor
  • ***
  • Posts: 10487
  • Country: us
  • $
Re: When CPU's are made is each one slightly different?
« Reply #18 on: September 05, 2018, 07:25:12 pm »
This makes it a security nightmare. I destroy my old hard drives with a pickaxe.

I'm not going to pay for the electrical power to melt one or spend a lot of time on it; a wipe and an axe will keep most things away. Otherwise, use a kiln.
« Last Edit: September 05, 2018, 07:42:24 pm by coppercone2 »
 

Offline jmelson

  • Super Contributor
  • ***
  • Posts: 2821
  • Country: us
Re: When CPU's are made is each one slightly different?
« Reply #19 on: September 05, 2018, 10:09:14 pm »
There was a time when Control Data computers (6400, 6600) got us to the Moon (July 20, 1969).  The machines didn't even have parity checks much less ECC.
But redundancy is quite usual in aeronautics, it allows to take 2 out of 3 signals as true. No idea if the moon lander had several backups in place, though.
Absolutely none!  It had one of the first computers ever built with ICs. It was the size of a big shoe box and had to run continuously for THREE DAYS off batteries while sitting on the Moon! There's no way they could have had a triple-redundant lock-step voting system with those constraints.

They DID have some pretty rigorous contingency plans for what to do if one of the computers broke, though.  If they got settled on the Moon and the computer died, they could use the written-down coordinates they took upon landing, and the computer on the Apollo command module could tell them when to launch; that would be radioed down as the command module orbited.  I think if the command module computer died, the LEM could still link up to it by itself, and then they could use the LEM computer to handle the orientation and timing to fire the retro engine.  I think they actually did this on the Apollo 13 mission, where the oxygen tank blew and they lost electrical power on the command and service modules.

Jon
 

Offline Cerebus

  • Super Contributor
  • ***
  • Posts: 10576
  • Country: gb
Re: When CPU's are made is each one slightly different?
« Reply #20 on: September 05, 2018, 10:48:28 pm »
There was a time when Control Data computers (6400, 6600) got us to the Moon (July 20, 1969).  The machines didn't even have parity checks much less ECC.

It is certainly possible to make reliable systems without ECC; just lower the soft error rate enough.  But this comes at the expense of speed and power.

One of the big advantages of ECC is in addition to correcting errors, it also allows notification that there was an error and where.  Otherwise how would you know short of data corruption detected later?

The CDC 6600 was a 2 MIPS machine that consumed 30 kW. So, yeah, a tad slower than modern toothbrushes* and greedier on the juice too.  :) It had a 100 ns minimum memory access time, which was pretty damn fast for its time.

As to the claim that it didn't use parity or other error checking, I find that unlikely, as error checking was pretty ubiquitous in mainframe designs of the time; but I'm not going to trawl through the documentation (available online, and massively detailed compared to modern non-documentation) just to prove a strong suspicion.

*A Braun Sonicare Platinum toothbrush has a PIC16F1516 microcontroller in it rated at 5 MIPS and runs from a battery that fits inside the hand-held toothbrush. It is not recommended that you use your toothbrush to plot earth-lunar orbit parameters.
Anybody got a syringe I can use to squeeze the magic smoke back into this?
 
The following users thanked this post: hamster_nz

Offline duak

  • Super Contributor
  • ***
  • Posts: 1047
  • Country: ca
Re: When CPU's are made is each one slightly different?
« Reply #21 on: September 06, 2018, 02:14:37 am »
From Wikiquote on Seymour Cray and the CDC 6600 & 7600:

"Parity is for farmers."

    On why he left memory error-detecting hardware out of the CDC 6600.

"I learned that a lot of farmers buy computers."

    After he did include error-detecting hardware in the CDC 7600.

Back when mainframe computers had the CPU in one cabinet, the memory in another and the I/O channels in various others, it made a lot of sense to carry parity all the way from data in to data out.  I read somewhere that this was a sign the designers were serious about reliable computing.  I've never used an old mainframe, so I don't know personally whether it makes a difference.  I did use 800/1600 tapes, though, and the drives would start to throw soft parity errors when it was time to clean the heads.

Cheers,
 

Offline Tom45

  • Frequent Contributor
  • **
  • Posts: 556
  • Country: us
Re: When CPU's are made is each one slightly different?
« Reply #22 on: September 06, 2018, 03:15:01 am »

...

As to the claim that it didn't use parity or other error checking, I find that unlikely as it was pretty ubiquitous in mainframe designs of the time, but I'm not going to trawl through the documentation (available on-line and massively detailed compared to modern non-documentation) just to prove a strong suspicion.

The 6600 definitely didn't have memory parity checking.

I worked with the 6600 in the late 60's and early 70's. When the Boeing 747 entered service in 1970, a coworker said he wasn't going to fly on it because Boeing had done the aeronautical design using a 6600 without parity. So he didn't trust the design results, or the plane. Back then memory reliability wasn't up to current standards.

History shows that Boeing got it right anyway.
 

Offline BeaminTopic starter

  • Super Contributor
  • ***
  • Posts: 1567
  • Country: us
  • If you think my Boobs are big you should see my ba
Re: When CPU's are made is each one slightly different?
« Reply #23 on: September 06, 2018, 08:47:20 am »
The process for making flash memory is really pushing density to the limits to get the most capacity for a given area of silicon.

Flash chips are a particularly dirty market; nothing goes to waste.
The best chips, with few disabled blocks, go to enterprise SSDs; lower-grade chips go to consumer SSDs and UFS chips.
Usually OEMs don't get to buy those chips unless they pay a lot; those go either to industrial OEMs or to the house brands of the flash manufacturers (Crucial, etc.).

Average chips that meet the minimum industry standard go to high-quality thumb drives and memory cards.
The above are called original chips, or A chips, in China.

Rejected chips with reasonable performance get rebranded by liquidators (SpecTek) or OEMs (Kingston) for cheap flash drives or rubbish SSDs. These are called white chips in China.
Absolute garbage chips go to the lowest-tier Chinese SSDs (Galax, Colorful, Asgard, etc.). These are called black chips in China.

The real absolute worst chips are black chips with fake markings. They are not used by any reputable brand, not even Galax (which, ironically, makes some of the best GPU cards and the worst SSDs).
Those fake chips are used in non-branded DIY SSDs or in online-distribution-only computers (where everything is of the lowest quality).
No wonder they can roll out 1080 Ti + 8700K + 16G/256G water-cooled computers for less than $1500.
 

Offline BeaminTopic starter

  • Super Contributor
  • ***
  • Posts: 1567
  • Country: us
  • If you think my Boobs are big you should see my ba
Re: When CPU's are made is each one slightly different?
« Reply #24 on: September 06, 2018, 09:20:18 am »
The process for making flash memory is really pushing density to the limits to get the most capacity for a given area of silicon.

Flash chips are a particularly dirty market; nothing goes to waste.
The best chips, with few disabled blocks, go to enterprise SSDs; lower-grade chips go to consumer SSDs and UFS chips.
Usually OEMs don't get to buy those chips unless they pay a lot; those go either to industrial OEMs or to the house brands of the flash manufacturers (Crucial, etc.).

Average chips that meet the minimum industry standard go to high-quality thumb drives and memory cards.
The above are called original chips, or A chips, in China.

Rejected chips with reasonable performance get rebranded by liquidators (SpecTek) or OEMs (Kingston) for cheap flash drives or rubbish SSDs. These are called white chips in China.
Absolute garbage chips go to the lowest-tier Chinese SSDs (Galax, Colorful, Asgard, etc.). These are called black chips in China.

The real absolute worst chips are black chips with fake markings. They are not used by any reputable brand, not even Galax (which, ironically, makes some of the best GPU cards and the worst SSDs).
Those fake chips are used in non-branded DIY SSDs or in online-distribution-only computers (where everything is of the lowest quality).
No wonder they can roll out 1080 Ti + 8700K + 16G/256G water-cooled computers for less than $1500.

It's a shame that companies have to compete with this. It was fine when you went to a store and had the option of buying the cheap stuff made in China or spending a few dollars more on stuff that was made elsewhere and would last. But now you don't have that choice; unscrupulous companies found they could just rebrand the shoddy products from China and put the higher price tags on them. Since our market only goes off a company's stock price, the companies that didn't want to do this had to, or face going under. This, along with consumer ignorance/laziness, started the race to the bottom. I used to try very hard to avoid buying things made in China, but the last time I did that was for a pair of running shoes made in America. It literally took all day, and when I did find the pair I wanted, they only made one color in America, the one I didn't want, but I bought it anyway. Now I still look at the label, but mainly to see whether, if it's made in China, the product is going to harm me because it contains toxic metals or chemicals, which even the name brands can, because there are so many steps in the supply chain. The only thing I will not compromise on now is hormones and food. Anyways...


Quote
They do that sort of binning for memory chips to some extent.

The process for making flash memory is really pushing density to the limits to get the most capacity for a given area of silicon. So you could get the reject 128GB chips sold as 64GB ones to make some money off the lost yield, especially if the particular batch had bad luck. But they certainly wouldn't be disabling 128GB chips to make them look like 16GB. A flash chip with 1/8th the memory size also takes 1/8th the silicon area to create so they can get 8 times more chips from the same wafer. Tho in practice its not quite 8 times since the memory controller takes up a little bit of area and the die cutting process destroys some silicon area around the die.


I thought when they made chips like memory chips it was like photocopying: sure, it costs more money to lay out a higher capacity, say double, but if you have four sizes of chip you want to market, it makes much more sense to make the larger size and have one mask/layout than to have four. If I owned a memory chip company, I would make the largest I could and start selling it at the highest price I could for as long as I could, until competition forced me to start lowering the price. Memory isn't a normal market; it's artificial, and if you apply common sense to these kinds of things you are making an error. So I'm not doubting you; I'm asking whether you are applying common sense or whether you know this for a fact. Yes, it's going to cost way more to make a TB card than a GB card, but the devil is in the details when you go one or two sizes bigger, not orders of magnitude. So if you own a memory company and you design the new TB card, you don't want to start selling it right away at your normal margins, or even selling it at all; you want to hold it back as long as possible, until competition forces you down.
 I.e., if I had just made a new TB card and the largest size currently were 128 GB, and if I could "detune" it, I might release 512 GB (detuned to 50%) as the max and tell consumers "we are working on a TB prototype for next year", then only release its full potential when I had to. Business is tricky, and when companies say "innovation" it's not a new invention, it's a new scheme for more profit. In one of my businesses you could sell more if you offered a "buy one, get one free" sale and just charged 80% more for the first one, rather than selling both at 20% off the normal price. You are still giving the customer a deal, and still cutting 20% off your profits, but your volume hopefully increased by more than 20%. When I used to sell Freon/R-134a, I would stock up at $80.00 a bottle in winter, then sell it for $120 ($130 regular price) in the summer, instead of trying to sell it for $90.00 in the winter.
 

