Author Topic: Wasn't expecting this... C floating point arithmetic  (Read 16892 times)


Offline iMo

  • Super Contributor
  • ***
  • Posts: 5240
  • Country: bj
Re: Wasn't expecting this... C floating point arithmetic
« Reply #100 on: April 06, 2018, 10:49:56 pm »
The banking apps use something like 34-decimal-digit math (supported by the math coprocessors, e.g. in P6 and later).
Btw my WP-34S calculator uses decNumber too :)
Many of the older HP and TI calculators worked with decimal representation (and decimal CPUs), unfortunately with pretty low precision (compared to, say, the 50 digits used by the WP-34S).
Readers discretion is advised..
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23096
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #101 on: April 06, 2018, 10:51:25 pm »
WP-34S, that's the one. Not sure where I got WP81 from! Need more coffee :)
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20727
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Wasn't expecting this... C floating point arithmetic
« Reply #102 on: April 06, 2018, 10:53:12 pm »
For banking applications, the best way is using integers representing the number of cents.

32-bit integers are not long enough to hold accounting numbers any more, but 64-bit integers still provide enough room, unless we get run-away inflation that is.

From which we can infer that you have not been involved in specifying arithmetic for banking systems.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline IanB

  • Super Contributor
  • ***
  • Posts: 12395
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #103 on: April 06, 2018, 10:57:12 pm »
Re: Banking. Can anyone explain why decimal fractions and decimal rounding are better than binary fractions and binary rounding? (Since interest, fees and tax calculations must certainly incur fractions that need rounding.)
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23096
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #104 on: April 06, 2018, 11:09:36 pm »
Re: Banking. Can anyone explain why decimal fractions and decimal rounding are better than binary fractions and binary rounding? (Since interest, fees and tax calculations must certainly incur fractions that need rounding.)

It's not so much about rounding as about representation. All financial values are rational with a denominator of 10^N, where N is a fixed precision. Place-value systems or encodings (decimal) that support that denominator class reproduce exactly the results we expect and observe on paper. This comes from us having 10 fingers and being dumbasses for several thousand years. If we had 8 fingers, like in The Simpsons, perhaps binary fractions would be better.

Incidentally, there are no standard rounding rules. They are arbitrary, so they have to be programmed for the use case in question. This is why we wrote our own decimal numeric system, which has precise, predictable behaviour and allows rounding algorithms to be applied to operations on a case-by-case basis.

A good book on the history of this, and the why of it, is Jan Gullberg's "Mathematics: From the Birth of Numbers", which covers the history of calculation, number systems, etc., as well as, well, pretty bloody much everything. Wonderful book, written by a surgeon, not a mathematician, so it actually makes sense.

Edit: Might be the half bottle of wine bending my brain, but it made me snigger thinking the above through. 99p shops would be called something like 111111 shops (because they were trying to undercut 1000000 shops) if we used binary! Perhaps 10 fingers was right after all.
« Last Edit: April 06, 2018, 11:20:36 pm by bd139 »
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4313
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #105 on: April 06, 2018, 11:48:09 pm »
Quote
So how come there is not a standard library for [symbolic rational numbers ala Macsyma] ? and we are all still effing around with ieee7-whatever ?
Because it's not generally required or even useful, and rather expensive, computationally?
Banking, which everyone is using as an example, seems to be some bastard union of integer "values" and various "rates" that aren't ("3.4% interest, compounded continuously"?)  Their rules look more aimed at preventing "cheating" than preserving accuracy in an absolute sense.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3248
  • Country: ca
Re: Wasn't expecting this... C floating point arithmetic
« Reply #106 on: April 06, 2018, 11:52:50 pm »
A whole number of cents isn't always enough precision. Some unit values are far less than one cent, so you need variable precision. The method I described here allows variable precision with persistence:

https://www.eevblog.com/forum/microcontrollers/wasn_t-expecting-this-c-floating-point-arithmetic/msg1469544/#msg1469544

You can just use the smallest unit, whatever it is, as one. And make sure that the integer is big enough to represent the biggest amount.

No matter what you do, the rounding problem remains. Say you have 1,000,000 clients and you want to calculate interest for them. You cannot tell a client that their interest is 0.534566 cents, so you need to round it to whole cents somehow. If you use correct mathematical rounding, there will be an error in the total interest, so you either have to live with the error or go back and correct the rounding for some of the clients. There's no other way. Of course, you can round the numbers down, which brings you a little extra profit every time (averaging 0.5 cents per client, $5,000 if you have 1,000,000 clients), but this doesn't give you an exact match either. Either way, the rounding problem is fundamental and cannot be solved by using higher precision internally.

 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 5240
  • Country: bj
Re: Wasn't expecting this... C floating point arithmetic
« Reply #107 on: April 07, 2018, 12:17:31 am »
Customer XY
Your interest       0.00534566 USD
Rounded interest    0.01 USD
Transaction fees    1.00 USD
Total              -0.99 USD
 ;)
Readers discretion is advised..
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 9527
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #108 on: April 07, 2018, 12:23:15 am »
Re: Banking. Can anyone explain why decimal fractions and decimal rounding are better than binary fractions and binary rounding? (Since interest, fees and tax calculations must certainly incur fractions that need rounding.)
Historically, there has been a feeling that if financial results from a computer do not exactly match what a human would get with pencil and paper, there would be lots of complaints from humans who have checked figures on computer printouts. I don't know if that ever turned out to be the case in practice.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28059
  • Country: nl
    • NCT Developments
Re: Wasn't expecting this... C floating point arithmetic
« Reply #109 on: April 07, 2018, 01:02:53 am »
A whole number of cents isn't always enough precision. Some unit values are far less than one cent, so you need variable precision. The method I described here allows variable precision with persistence:

https://www.eevblog.com/forum/microcontrollers/wasn_t-expecting-this-c-floating-point-arithmetic/msg1469544/#msg1469544
You can just use the smallest unit, whatever it is, as one. And make sure that the integer is big enough to represent the biggest amount.
That doesn't work. Check component prices: I've seen resistor and capacitor prices with 4 or 5 digits after the decimal point. The thing is that the rounding should happen at the end, not in between at every stage.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline bson

  • Supporter
  • ****
  • Posts: 2462
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #110 on: April 07, 2018, 01:42:52 am »
(1/3)*3, answer 1.  This is correct mathematically, but not what floating point arithmetic gives you.

A computer should NOT be able to answer (1/3)*3 correctly using pure floating point arithmetic.  But it does give the correct answer.
You're converting a constant the compiler can format at compile time, so this is likely a bug in the compile-time printf implementation.

Try:

Code: [Select]
#include <stdio.h>

int main() {
  const float a = 1.0/3;
  const float b = 1.0 - 3.0*a;

  printf("b=%g\n", b);

  return 0;
}

Code: [Select]
$ gcc  -O0 -o foo3 foo3.c
$ ./foo3
b=-2.98023e-08
$ gcc  -O3 -o foo3 foo3.c
$ ./foo3
b=-2.98023e-08
$ gcc  -Os -o foo3 foo3.c
$ ./foo3
b=-2.98023e-08

On the other hand, the following produces "b=0":

Code: [Select]
#include <stdio.h>

int main() {
  const float a = 1.0/3;
  const float b = 3.0*a;

  printf("b=%g\n", 1.0 - b);

  return 0;
}

Actually, on second thought I wonder if it's not related to the second form performing 1.0 - b as a double and passing that to printf, while the former calculates b as a float, then promotes that to double for printf...
« Last Edit: April 07, 2018, 01:44:50 am by bson »
 

Offline IanB

  • Super Contributor
  • ***
  • Posts: 12395
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #111 on: April 07, 2018, 03:04:44 am »
Actually, on second thought I wonder if it's not related to the second form performing 1.0 - b as a double and passing that to printf, while the former calculates b as a float, then promotes that to double for printf...

No, it's simply a peculiarity of binary arithmetic. For example, see below. There is no funny rounding or type conversion going on here:


 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3248
  • Country: ca
Re: Wasn't expecting this... C floating point arithmetic
« Reply #112 on: April 07, 2018, 03:11:39 am »
You can just use the smallest unit, whatever it is, as one. And make sure that the integer is big enough to represent the biggest amount.
That doesn't work. Check component prices. I've seen resistors and capacitor prices with 4 or 5 digits after the decimal point. The thing is that the rounding should happen at the end and not in between at every stage.

It certainly does. Scale it so that an integer 1,000,000 represents one dollar, and you've got 6 digits after the decimal point. Also note that this eliminates the binary vs. decimal controversy.

« Last Edit: April 07, 2018, 04:37:53 am by NorthGuy »
 

Offline bson

  • Supporter
  • ****
  • Posts: 2462
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #113 on: April 07, 2018, 04:17:39 am »
A quick test.

Code: [Select]
#include <stdlib.h>

int main() {
  const float c = 1.0;

  abort();
}

Then:

Code: [Select]
: //trumpet ~ ; gcc -g -O0 -o foo foo.c
: //trumpet ~ ; gdb ./foo
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7_4.1
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /home/bson/foo...done.
(gdb) r
Starting program: /home/bson/./foo

Program received signal SIGABRT, Aborted.
0x00007ffff7a4d1f7 in raise () from /usr/lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7_4.2.x86_64
(gdb) up
#1  0x00007ffff7a4e8e8 in abort () from /usr/lib64/libc.so.6
(gdb) up
#2  0x0000000000400543 in main () at foo.c:6
6   abort();
(gdb) p c
$1 = 1
(gdb) whatis c
type = const float
(gdb) p &c
$2 = (const float *) 0x7fffffffe12c
(gdb) p $sp
$3 = (void *) 0x7fffffffe120
(gdb) p/cx *(char*)&c @ 4
$4 = {0x0, 0x0, 0x80, 0x3f}
(gdb) set var c = 1./3
(gdb) p/cx *(char*)&c @ 4
$5 = {0xaa, 0xaa, 0xaa, 0x3e}
(gdb) p c
$6 = 0.333333313
(gdb) set var *(int*)&c = 0x3eaaaaab
(gdb) p c
$7 = 0.333333343
(gdb) p c * 3.0
$8 = 1.0000000298023224
(gdb) set var *(int*)&c = 0x3eaaaaaa
(gdb) p c
$9 = 0.333333313
(gdb) p c * 3.0
$10 = 0.99999994039535522
(gdb)

From this you can see that 1./3 does not round evenly in binary, and hence there is a rounding error.
Multiplying it by 3 scales the rounding error up, but the result still happens to be less than 1/2 LSB away from 1.0 in a binary32 IEEE 754 float, so it rounds back to exactly 1.0f.
When the value is passed to printf, it gets promoted to binary64 (the C double), because arguments to a variadic function undergo default argument promotion. And in double arithmetic the float-sized rounding error becomes visible.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4276
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #114 on: April 07, 2018, 08:49:08 am »
Quote
So how come there is not a standard library for [symbolic rational numbers ala Macsyma] ? and we are all still effing around with ieee7-whatever ?
Because it's not generally required or even useful, and rather expensive, computationally?
Banking, which everyone is using as an example, seems to be some bastard union of integer "values" and various "rates" that aren't ("3.4% interest, compounded continuously"?)  Their rules look more aimed at preventing "cheating" than preserving accuracy in an absolute sense.

Most of my banking software experience was moving data around rather than calculating it, but on the occasions I did calculate things there were specs, so it was done consistently.

Surprisingly, the specs were not that demanding on "how" you got the results, except that the actual calculations should use "double precision"; the specs were usually highly focused on precision and rounding at the "point of record".

At the point you record a value into a so-called record of truth, it decouples from any previous calculations done to it. You can't, for example, calculate the interest on an account as $0.12345, put $0.12 on the statement, and then go ahead and increase the balance by $0.12345. Similarly, you can't do it the other way either: show interest of $0.12345 on the statement but only actually increase the balance by $0.12.

So what you round and record on a ledger/statement is the legal value.

Below is mostly assumption...

I haven't done compound interest calculations in a bank, but I would assume that when they say interest is calculated daily and added monthly, the daily calculation "could" use higher precision than a cent/pence, but the monthly aggregate amount added to your account will be rounded to cents/pence, and that cent/pence balance is what's used in the next month.

If you open a savings account, put $1 into it at a 1% interest EPR rate, and it says interest is calculated daily but added yearly (which is common), then the daily interest is a very small number compared to a cent/pence ($0.00002739726027... per day). If they rounded those daily figures to cents/pence, they would get 0.00 daily and your yearly interest would be 0.00 when they add it up. Yet I have had savings accounts with virtually nothing in them and still accrued interest. Of course, nothing says they actually calculate interest "daily" in real time. They can loop through the account at the end of the year, take the closing (or peak, if you are lucky) balance for each day, keep a double-precision number in memory, and deposit the aggregate interest at the end of the year... as a rounded cent/pence amount. So potentially you could see floating-point approximation errors in your interest.

It also raises questions about compounding resolution. This is totally an assumption, but when they say they calculate the interest daily yet add it monthly, the compounding resolution would surely be monthly: non-compounded per day based on the account balance, aggregated and added to the account at the end of the month, where it is then considered in the next day's calculation.

As a challenge you could of course download your bank statement and see if you can calculate the interest yourself, see how close you get to the banks figure with different techniques.
« Last Edit: April 07, 2018, 08:54:23 am by paulca »
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8550
  • Country: us
    • SiliconValleyGarage
Re: Wasn't expecting this... C floating point arithmetic
« Reply #115 on: April 09, 2018, 05:27:32 am »
so if we have these imprecise math libraries : how the hell can we calculate the 27 millionth decimal of PI ? what kind of computational floating point package allows for that ?
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline agehall

  • Frequent Contributor
  • **
  • Posts: 390
  • Country: se
Re: Wasn't expecting this... C floating point arithmetic
« Reply #116 on: April 09, 2018, 05:46:07 am »
so if we have these imprecise math libraries : how the hell can we calculate the 27 millionth decimal of PI ? what kind of computational floating point package allows for that ?

Algorithms. It's not like it's done in one single computation or anything. You can compute anything on a computer as long as you understand how to work around its deficiencies. One such way is simply to construct algorithms adapted to computers, using techniques like fixed-point math and the others that have been mentioned in this thread.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6947
  • Country: fi
    • My home page and email address
Re: Wasn't expecting this... C floating point arithmetic
« Reply #117 on: April 09, 2018, 05:22:38 pm »
If you use an architecture where float and double are IEEE-754 binary32 and binary64, respectively, and integer and floating-point byte order is the same, you might find the attached float-bits command-line utility useful. It should compile with any C99 or later C compiler. In particular, it works fine on 32-bit and 64-bit Intel/AMD architectures.

Simply put, it shows the binary representation of any floating-point number, or the sum, difference, product, or division of a pair of numbers. Run it without arguments, and it shows the usage.

If you use Linux, you can compile and install it using e.g. gcc -Wall -O2 float-bits.c -o float-bits && sudo install -o root -g root -m 0755 float-bits /usr/local/bin/.

If we run float-bits -f 1/3 the output is
Code: [Select]
1/3 = 0.3333333432674408
  0 011111110 (1)0000000000000000000000
/ 0 100000001 (1)0000000000000000000000
= 0 011111010 (1)1010101010101010101011

With regards to the issue OP is having, the key is to look at the result: it is rounded up. (The mathematical evaluation rules in C are such that when the result is stored in a variable, or the expression is cast to a specific numeric type, the compiler must evaluate the value at the specified precision and range. This means that it is not allowed to optimize away entire operations.)

Note that if we run float-bits -f 0.99999997/3 the output is
Code: [Select]
0.99999997/3 = 0.3333333134651184
  0 011111101 (1)1111111111111111111111
/ 0 100000001 (1)0000000000000000000000
= 0 011111010 (1)1010101010101010101010

So, the three closest numbers to one third that a single-precision floating-point number can represent are shown by float-bits -f 0.33333332 0.33333334 0.33333336:
Code: [Select]
0.33333332: 0 011111010 (1)1010101010101010101010
0.33333334: 0 011111010 (1)1010101010101010101011
0.33333336: 0 011111010 (1)1010101010101010101100

Multiplying them by three (float-bits -f 3x0.33333332 3x0.33333334 3x0.33333336) yields
Code: [Select]
3x0.33333332 = 0.9999999403953552
  0 100000001 (1)0000000000000000000000
x 0 011111010 (1)1010101010101010101010
= 0 011111101 (1)1111111111111111111111
3x0.33333334 = 1.0000000000000000
  0 100000001 (1)0000000000000000000000
x 0 011111010 (1)1010101010101010101011
= 0 011111110 (1)0000000000000000000000
3x0.33333336 = 1.0000001192092896
  0 100000001 (1)0000000000000000000000
x 0 011111010 (1)1010101010101010101100
= 0 011111110 (1)0000000000000000000001

Essentially, when one writes 3.0f * (float)(1.0f / 3.0f) or something equivalent in C (using C99 or later rules), two implicit rounding operations occur. The first rounds one third up, to the nearest value representable by a binary32 float, and the second rounds the slightly-over-one product down, to exactly one. (Remember that these rounding operations operate on the floating-point number, and can at most add or subtract one unit in the least significant place.)

The answer to OP's question is then that this happens, because when implemented in floating-point math, there are two rounding operations done, and using the default rounding rules the two happen to cancel each other out, giving the unexpected, mathematically correct value.

Floating-point math is still exact math, it's just that after each operation, there is an implicit rounding to the nearest value representable by the used type. (However, there are "unsafe math optimizations" some compilers can do, which fuse multiple operations to one; and the FMA intrinsics are designed to do a fused multiply-add where only one rounding happens, at the end.)
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28059
  • Country: nl
    • NCT Developments
Re: Wasn't expecting this... C floating point arithmetic
« Reply #118 on: April 09, 2018, 05:28:44 pm »
The answer to OP's question is then that this happens, because when implemented in floating-point math, there are two rounding operations done, and using the default rounding rules the two happen to cancel each other out, giving the unexpected, mathematically correct value.
IMHO there is nothing unexpected here. When you use any kind of math on a computer, you know the precision is limited, so you have to figure out how many meaningful digits you need and round the result accordingly. That way you will always get the result you expect.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8550
  • Country: us
    • SiliconValleyGarage
Re: Wasn't expecting this... C floating point arithmetic
« Reply #119 on: April 09, 2018, 05:54:26 pm »
The answer to OP's question is then that this happens, because when implemented in floating-point math, there are two rounding operations done, and using the default rounding rules the two happen to cancel each other out, giving the unexpected, mathematically correct value.
IMHO there is nothing unexpected here. When you use any kind of math on a computer, you know the precision is limited, so you have to figure out how many meaningful digits you need and round the result accordingly. That way you will always get the result you expect.
it would be fun to have logic gates where
1 and 1 is 99.999999999987485 % of the times 1
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6947
  • Country: fi
    • My home page and email address
Re: Wasn't expecting this... C floating point arithmetic
« Reply #120 on: April 09, 2018, 06:32:19 pm »
IMHO there is nothing unexpected here.
I was trying to refer to how OP felt it was unexpected.  To those who use and apply C and IEEE-754/854 rules, there is nothing surprising or unexpected here.

My main point was that the floating-point math is well defined and exact (in the sense that the standard does not allow more than half an ULP of error for most operations; which means that the operations yield the exact same bit patterns on all standards-compliant architectures). It's just that the implicit rounding operations done (and required by standards) after each operation throw people off.

If you look at the Kahan summation algorithm at Wikipedia, check the Possible invalidation by compiler optimization section. With current compilers, even with the most aggressive optimizations used, one only needs a couple of casts to implement the algorithm correctly. This is because casts (expressions of form (double)(expression)) limit the precision and accuracy to the specified type (double), just like the implicit rounding I've mentioned. There is no need to try and use extra temporary variables or such.

There are other rules/functions that are extremely useful, too. For example, if you need to calculate an expression where a denominator may become zero, rather than test it explicitly beforehand, you can simply do the division, and check the result using isfinite() that the operation did not fail due to the divisor being too close to zero. (Unfortunately, this runs afoul of the "unsafe-math-optimizations" options for many compilers.) All you need to do is ensure math exceptions are disabled (using fesetenv()), so that your process won't keel over due to a floating-point exception.

All of this applies to microcontrollers, too, except that some settings might be hardcoded (and no fesetenv() available), depending on the base libraries.

Without hardware floating-point support, fixed-point math tends to be much faster than floating-point. (The logical value v is represented by an N-bit signed integer, round(v×2^Q), where Q < N is the number of fractional bits.) Operations on fixed-point numbers still involve implicit rounding after every operation, but the lack of an exponent (as used in floating-point types) makes them much easier to implement.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Wasn't expecting this... C floating point arithmetic
« Reply #121 on: April 09, 2018, 08:35:38 pm »
it would be fun to have logic gates where
1 and 1 is 99.999999999987485 % of the times 1

I'd love to have a BER that low!

Indeed, as long as BER < 0.5, one can stack an arbitrary number of gates, error correction blocks, etc., to achieve arbitrarily high certainty.  It's the same as losing versus winning infinite money from gambling when the odds are only slightly in (or against) your favor.

Sooner or later we will have to understand stochastic computing: whether through the continued miniaturization of conventional logic with ever-shrinking thresholds, or the development of quantum computing, where errors are introduced by environmental (thermal) perturbations to the system state.  (That is, to implement a so-and-so-qubit calculation on a crappy computer, throw in however many times more qubits as error correcting functions, and pump the whole system.  Effectively, you'll be sinking the excess heat out of the error correcting blocks as information entropy, pushing the intended calculation towards its desired state.)

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

