Author Topic: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware  (Read 4670 times)


Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6514
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #50 on: June 20, 2024, 10:29:54 am »
Why do you state that with exponents greater than ±1 the other approach is wrong?
Because of the implicit MSB mantissa bit.  If you examine the code in bench.c above, you'll see the lengths I go to in order to handle it correctly.

However, your prodding made me think harder.  :-+

For positive floats, from +0 to +inf, inclusive, the storage representations of any two float values compare the same way as the floats themselves. For negative values, we only need a subtraction to convert them, so that the storage representations, as two's complement integers, compare the same way as the original floats –– for all numeric floats, that is, all except NaNs, which we don't care about anyway.

That is proof that incrementing/decrementing in ULPs covers the entire numeric range: from zero through the subnormals and normals up to and including +inf, and the same for negative values.

Thus, you are obviously right, and I was wrong. :palm:

To be specific, we can calculate the difference in ULPs simply with
Code: [Select]
#include <stdint.h>

typedef union {
    uint32_t  u;
    int32_t   i;
    float     f;
} word32;

int32_t difference(const float value, const float compared_to) {
    word32  v = { .f = value };
    word32  c = { .f = compared_to };
    /* Map negative floats (sign bit set) so that the storage representations,
       as two's complement integers, order the same way as the floats do. */
    if (v.u & 0x80000000) v.u = 0x80000000 - v.u;
    if (c.u & 0x80000000) c.u = 0x80000000 - c.u;
    return v.i - c.i;
}
with the following caveats:
  • Bogus results if either is a NaN (when (.u & 0x7F800000) == 0x7F800000 && (.u & 0x007FFFFF) != 0x00000000)
  • If one is zero and the other is not, the difference is in the ULPs of the non-zero one
We even know that if one is negative and the other positive, the difference is still sensible: the number of unique representable float values separating the two.

Also, "number of unique float values between the calculated value and the correct value" is a much more understandable definition (than "difference in ULPs"), is fully valid, and requires no caveats other than NaNs not being valid values.
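For example, a quick check of a candidate routine against a double-precision reference, in those terms, might look like this (a sketch using the definitions above; sinf() merely stands in for the approximation under test):
Code: [Select]
#include <math.h>
#include <stdio.h>

/* Uses the word32/difference() definitions above. */
int main(void) {
    float x      = 0.1234567f;
    float approx = sinf(6.28318530717958647692f * x);               /* candidate sin(2*pi*x)   */
    float exact  = (float)sin(6.28318530717958647692 * (double)x);  /* reference, rounded once */
    printf("error = %d representable floats\n", (int)difference(approx, exact));
    return 0;
}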
« Last Edit: June 20, 2024, 02:06:50 pm by Nominal Animal »
 

Offline AVI-crak

  • Regular Contributor
  • *
  • Posts: 132
  • Country: ru
    • Rtos
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #51 on: June 20, 2024, 01:24:02 pm »
https://www.desmos.com/calculator/8zaitzwevg
Higher speed -> lower accuracy.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3187
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #52 on: June 20, 2024, 10:07:40 pm »
Interesting fact:

For approximately 94% of the numbers being tested, the following relationship holds:

best_sin(2*pi*x) = 2*pi*x

where 2*pi*x is calculated using single-precision floats and best_sin is the precise sine (calculated in doubles and then rounded to single).

It is also interesting that, if you calculate the entire "best_sin(2*pi*x)" expression in doubles, the percentage of matches falls to 63%. This is because of the imprecise representation of pi and the rounding errors after the multiplication.
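A sketch of how one might reproduce those percentages (my code, not NorthGuy's actual test; it sweeps the normal floats in (0, 1] and makes roughly 2×10⁹ double-precision sin() calls, so it takes a while):
Code: [Select]
#include <math.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t match_arg = 0, match_full = 0, total = 0;
    for (uint32_t u = 0x00800000u; u <= 0x3F800000u; u++) {        /* normal floats in (0, 1] */
        union { uint32_t u; float f; } w = { .u = u };
        float x   = w.f;
        float arg = 6.28318530717958647692f * x;                   /* 2*pi*x in single precision */
        if ((float)sin((double)arg) == arg)                        /* best_sin of the float product */
            match_arg++;
        if ((float)sin(6.28318530717958647692 * (double)x) == arg) /* whole expression in doubles */
            match_full++;
        total++;
    }
    printf("best_sin(float 2*pi*x) == 2*pi*x: %.1f%%\n", 100.0 * match_arg  / total);
    printf("all-double sine        == 2*pi*x: %.1f%%\n", 100.0 * match_full / total);
    return 0;
}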


 

Offline dmsc

  • Newbie
  • Posts: 5
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #53 on: June 21, 2024, 03:27:18 pm »
Hi!

Better than presenting a given approximation is to learn how to generate new ones.

Here, I use Sollya ( https://www.sollya.org/ ) , an open source package to compute polynomial approximations to functions.

First, we start with a simple approximation: we want to approximate sin(2π x) on the interval [0, 0.25]. This is the first quadrant; we can simply use symmetries for the other quadrants.

Code: [Select]
> prec = 500;
The precision has been set to 500 bits.
> P = fpminimax(sin(2*pi*x), [| x, x^3, x^5, x^7 |], [|SG...|], [2^-30; 0.25], absolute);
> P;
x * (6.28316402435302734375 + x^2 * (-41.337139129638671875 + x^2 * (81.34066009521484375 + x^2 * (-70.99242401123046875))))
>

The parameters are: the function to approximate, the powers we want to use, the precision of the coefficients ("SINGLE"), the interval, and the error to minimize. We start the interval at a small positive value instead of 0, as sin(x) is 0 at 0, and we only specify odd powers as our sin() function is odd. Also, we set the working precision to a large value, to get good numerical stability in the calculations.

To verify that the given approximation uses single floating point values, we can use printexpansion, and to get the error we use dirtyinfnorm:

Code: [Select]
> printexpansion(P);
x * (0x401921f5c0000000 + x^2 * (0xc044ab2760000000 + x^2 * (0x405455cd60000000 + x^2 * 0xc051bf83e0000000)))
> dirtyinfnorm(P-sin(2*pi*x),[0,0.25]);
5.901410675865754274125938111579946370898526416744156813463401263314036742159040177459193842802082182498861473608250266942315012450527420167572139088982e-7
>

The error is given at the full working precision, but it is about 5.9E-7. This is with 4 coefficients. One of the problems of the approximation is that it gives the exact value at 0, but not at 0.25:
Code: [Select]
> P(0);
0
> P(0.25);
0.9999994118697941303253173828125
>

This is expected in this type of approximation, as the solver is minimizing the maximum absolute error over the whole interval. We can do better by constraining the polynomial so that the value at 0.25 is exactly 1.
This is done by solving P(x) = A*x + B*x^3 + C*x^5 + D*x^7 at 0.25: we get 1 = A/4 + B/64 + C/1024 + D/16384, which means D = 16384 - 4096 A - 256 B - 16 C.
Substituting, our new polynomial is: P(x) = A*(x - 4096 x^7) + B*(x^3 - 256 x^7) + C*(x^5 - 16 x^7) + 16384 x^7.

In Sollya:
Code: [Select]
> P = fpminimax(sin(2*pi*x), [| x-4096*x^7, x^3-256*x^7, x^5-16*x^7 |], [|SG...|], [2^-30; 0.25], absolute, 16384*x^7);
> P;
x * (6.283161163330078125 + x^2 * (-41.336696624755859375 + x^2 * (81.32404327392578125 + x^2 * (-70.8184814453125))))
> P(0);
0
> P(0.25);
1
> dirtyinfnorm(P-sin(2*pi*x),[0,0.25]);
6.8045399284315485022936000384196978305609872188241776832077688014331844645120006948491241788562445606690675631779229909577714291447121361540476950202e-7
> printexpansion(P);
x * (0x401921f500000000 + x^2 * (0xc044ab18e0000000 + x^2 * (0x405454bd20000000 + x^2 * 0xc051b46200000000)))
>

So, we now have exact values at 0 and 0.25, but the error is a little higher, at 6.8E-7.

Another problem with this approximation is that, to determine the quadrant, we probably need to multiply by 4 at the start - so it would be better to use the interval from 0 to 1 instead of 0 to 0.25, approximating sin(π/2 x). Now, our equations are simpler:

Code: [Select]
> P = fpminimax(sin(pi/2*x), [| x-x^7, x^3-x^7, x^5-x^7 |], [|SG...|], [2^-30; 1], absolute, x^7);
> P;
x * (1.57079029083251953125 + x^2 * (-0.645885884761810302734375 + x^2 * (7.9418011009693145751953125e-2 + x^2 * (-4.322417080402374267578125e-3))))
> P(0);
0
> P(1);
1
> dirtyinfnorm(P-sin(pi/2*x),[0,1]);
6.8045399284315485022936000384196978305609872188241776832077688014331844645120006948491241788562445606690675631779229909577714291447121361540476950202e-7
>

As expected, the error is the same as above; we have simply scaled the coefficients. We can now check how the error decreases by adding one more coefficient:

Code: [Select]
> P = fpminimax(sin(pi/2*x), [| x-x^9, x^3-x^9, x^5-x^9, x^7-x^9 |], [|SG...|], [2^-30; 1], absolute, x^9);
> P;
x * (1.5707962512969970703125 + x^2 * (-0.6459629535675048828125 + x^2 * (7.9687185585498809814453125e-2 + x^2 * (-4.67061810195446014404296875e-3 + x^2 * 1.5013478696346282958984375e-4))))
> printexpansion(P);
x * (0x3ff921fb40000000 + x^2 * (0xbfe4abba80000000 + x^2 * (0x3fb4666120000000 + x^2 * (0xbf73217f80000000 + x^2 * 0x3f23adb000000000))))
> dirtyinfnorm(P-sin(pi/2*x),[0,1]);
7.948025357666819715730261204406953573391265381670061492705068182157447496636645892899473552919933216095619523713857258147870996919642403290705587558827e-9
>

So, the error is about 85 times smaller with one more coefficient - and this is as good as possible using single precision, as 1/2^24 is 6E-8.
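To make the last result concrete, here is one way the final polynomial could be turned into a full sin(2·π·x) in C (a sketch with my own quadrant folding and function names, not code from this thread):
Code: [Select]
/* sin(pi/2 * r) on [0, 1], using the 5-term coefficients above. */
static float sin_quarter(float r) {
    const float c1 =  1.5707962512969970703125f;
    const float c3 = -0.6459629535675048828125f;
    const float c5 =  7.9687185585498809814453125e-2f;
    const float c7 = -4.67061810195446014404296875e-3f;
    const float c9 =  1.5013478696346282958984375e-4f;
    float r2 = r * r;
    return r * (c1 + r2 * (c3 + r2 * (c5 + r2 * (c7 + r2 * c9))));
}

/* sin(2*pi*x) for x in [0, 1), folding the four quadrants onto [0, 1]. */
float sin2pi_poly(float x) {
    float t = 4.0f * x;                  /* 2*pi*x == (pi/2) * t          */
    int   q = (int)t;                    /* quadrant 0..3                 */
    float r = t - (float)q;              /* position inside the quadrant  */
    float s = (q & 1) ? sin_quarter(1.0f - r) : sin_quarter(r);
    return (q & 2) ? -s : s;
}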

Have Fun!
 

Offline Tation

  • Regular Contributor
  • *
  • Posts: 66
  • Country: pt
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #54 on: June 23, 2024, 09:49:46 am »
Here, I use Sollya ( https://www.sollya.org/ ) , an open source package to compute polynomial approximations to functions.

First, we start with a simple approximation: we want to approximate sin(2π x) on the interval [0, 0.25]. This is the first quadrant; we can simply use symmetries for the other quadrants.

Code: [Select]
> prec = 500;
The precision has been set to 500 bits.
> P = fpminimax(sin(2*pi*x), [| x, x^3, x^5, x^7 |], [|SG...|], [2^-30; 0.25], absolute);
> P;
x * (6.28316402435302734375 + x^2 * (-41.337139129638671875 + x^2 * (81.34066009521484375 + x^2 * (-70.99242401123046875))))
>

The parameters are: the function to approximate, the powers we want to use, the precision of the coefficients ("SINGLE"), the interval, and the error to minimize. We start the interval at a small positive value instead of 0, as sin(x) is 0 at 0, and we only specify odd powers as our sin() function is odd. Also, we set the working precision to a large value, to get good numerical stability in the calculations.

To verify that the given approximation uses single floating point values, we can use printexpansion, and to get the error we use dirtyinfnorm:

Code: [Select]
> printexpansion(P);
x * (0x401921f5c0000000 + x^2 * (0xc044ab2760000000 + x^2 * (0x405455cd60000000 + x^2 * 0xc051bf83e0000000)))
> dirtyinfnorm(P-sin(2*pi*x),[0,0.25]);
5.901410675865754274125938111579946370898526416744156813463401263314036742159040177459193842802082182498861473608250266942315012450527420167572139088982e-7
>

The error is given in the full precision, but it is about 5.9E-7. This is with 4 coefficients.

Why `absolute`? An absolute error of ±5.9e-7 may be acceptable if the sine we're looking for is much greater than that, but if the argument is small, say 1e-5, computing its sine with an error of ±5.9e-7 is not that good. I (almost) always look for minimax relative error.

I tested Sollya some time ago. It was interesting since it supposedly uses a different theoretical approach that takes into account the finite precision used to evaluate the resulting polynomial, thus producing the absolutely optimal polynomial using, say, binary32. Unfortunately I was not able (maybe my fault) to get better results with it than with bare Python versions of the Remez algorithm, and (in my case) the polynomials I got with Sollya were definitely not optimal.

« Last Edit: June 23, 2024, 12:22:03 pm by Tation »
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3187
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #55 on: June 23, 2024, 01:53:41 pm »
Why `absolute`? An absolute error of ±5.9e-7 may be acceptable if the sine we're looking for is much greater than that, but if the argument is small, say 1e-5, computing its sine with an error of ±5.9e-7 is not that good. I (almost) always look for minimax relative error.

There's no reason to worry about such small values. You can add "if (x < min_value) return x" in front of any sine algorithm and it will eliminate the ULP errors for values below min_value. For single-precision floats, the min_value is greater than 1e-5 (approximately 0.00044 for sin(x) or 0.00007 for sin(2*pi*x)). For all the small numbers (which are over 90% of all numbers in the test), the only sources of error are the imprecise representation of pi (its float representation is 28 ppb bigger than the real value) and multiplication rounding errors. If we used sin(x) instead of sin(2*pi*x), there would be zero ULP error in this area. The multiplication by 2*pi is the sole source of ULP errors for small values.
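As a sketch of that guard, for plain sin(x) and with the 0.00044 threshold quoted above (the function name is mine, and sinf() merely stands in for whatever approximation is in use):
Code: [Select]
#include <math.h>

/* Below the quoted threshold the correctly rounded result is x itself. */
float my_sinf(float x) {
    if (fabsf(x) < 0.00044f)
        return x;
    return sinf(x);   /* placeholder for the actual approximation */
}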
 

Offline dmsc

  • Newbie
  • Posts: 5
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #56 on: June 23, 2024, 06:32:23 pm »
Hi!
Here, I use Sollya ( https://www.sollya.org/ ) , an open source package to compute polynomial approximations to functions.

First, we start with a simple approximation: we want to approximate sin(2π x) on the interval [0, 0.25]. This is the first quadrant; we can simply use symmetries for the other quadrants.

Code: [Select]
> prec = 500;
The precision has been set to 500 bits.
> P = fpminimax(sin(2*pi*x), [| x, x^3, x^5, x^7 |], [|SG...|], [2^-30; 0.25], absolute);
> P;
x * (6.28316402435302734375 + x^2 * (-41.337139129638671875 + x^2 * (81.34066009521484375 + x^2 * (-70.99242401123046875))))
>

The parameters are: the function to approximate, the powers we want to use, the precision of the coefficients ("SINGLE"), the interval, and the error to minimize. We start the interval at a small positive value instead of 0, as sin(x) is 0 at 0, and we only specify odd powers as our sin() function is odd. Also, we set the working precision to a large value, to get good numerical stability in the calculations.

To verify that the given approximation uses single floating point values, we can use printexpansion, and to get the error we use dirtyinfnorm:

Code: [Select]
> printexpansion(P);
x * (0x401921f5c0000000 + x^2 * (0xc044ab2760000000 + x^2 * (0x405455cd60000000 + x^2 * 0xc051bf83e0000000)))
> dirtyinfnorm(P-sin(2*pi*x),[0,0.25]);
5.901410675865754274125938111579946370898526416744156813463401263314036742159040177459193842802082182498861473608250266942315012450527420167572139088982e-7
>

The error is given in the full precision, but it is about 5.9E-7. This is with 4 coefficients.

Why `absolute`? An absolute error of ±5.9e-7 may be acceptable if the sine we're looking for is much greater than that, but if the argument is small, say 1e-5, computing its sine with an error of ±5.9e-7 is not that good. I (almost) always look for minimax relative error.

This is why the polynomial is constrained so that P(0) = 0: it ensures that the error there indeed goes to 0. And this zero of sin(x) inside the interval being approximated is the reason for minimizing the absolute error, as the division by 0 in the relative error causes instabilities.

Quote
I tested Sollya some time ago. It was interesting since it supposedly uses a different theoretical approach that takes into account the finite precision used to evaluate the resulting polynomial, thus producing the absolutely optimal polynomial using, say, binary32. Unfortunately I was not able (maybe my fault) to get better results with it than with bare Python versions of the Remez algorithm, and (in my case) the polynomials I got with Sollya were definitely not optimal.

What I find great in Sollya is that you can give your own polynomial bases and a constrained part; this gives much more flexibility in searching for your solution.

Have Fun!
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6514
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #57 on: June 23, 2024, 06:38:48 pm »
For float x, sin(x) == x for x = -0.00044363294728100299835205078125 .. +0.00044363294728100299835205078125, inclusive.  Because
$$\frac{d \, \sin(C x)}{d \, x} = C \cos(C x) \quad \text{ and } \quad \frac{d \, \cos(C x)}{d \, x} = -C \sin(C x)$$the corresponding small-argument identity for other periods is sin(C·x) == C·x.  Because of rounding, this is not generally true for x close to zero, unless C is a power of two.
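A quick sanity check of the quoted endpoint (a sketch; assumes IEEE 754 binary32 and a correctly rounded double sin()):
Code: [Select]
#include <math.h>
#include <stdio.h>

int main(void) {
    float x  = 0.00044363294728100299835205078125f;  /* claimed last x with sin(x) == x */
    float x1 = nextafterf(x, 1.0f);                   /* one representable value above   */
    printf("sin(x)  == x  : %d\n", (float)sin((double)x)  == x);   /* should print 1 */
    printf("sin(x1) == x1 : %d\n", (float)sin((double)x1) == x1);  /* should print 0 */
    return 0;
}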

If we accept either adjacent float value as correct (1 ULP error), then
  • C = 2*Pi: x = -0.0000733965352992527186870574951171875 .. +0.0000733965352992527186870574951171875.
  • C = Pi: x = -0.000146793070598505437374114990234375 .. +0.000146793070598505437374114990234375.
  • C = Pi/2: x = -0.00029358614119701087474822998046875 .. +0.00029358614119701087474822998046875.
where the magnitudes of the range endpoints are exact: 10087543·2⁻³⁷, 10087543·2⁻³⁶, and 10087543·2⁻³⁵, respectively.
« Last Edit: June 23, 2024, 06:40:21 pm by Nominal Animal »
 

Offline bson

  • Supporter
  • ****
  • Posts: 2309
  • Country: us
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #58 on: June 24, 2024, 07:32:53 pm »
How do you manage to test all floating values between 0 and 1?

In C/C++, you can use nextafter() or nextafterf() to iterate over representable floating point numbers.
https://en.cppreference.com/w/c/numeric/math/nextafter
Or, for numbers with an unbiased exponent of 0 (i.e. in [1, 2)), the spacing between consecutive values is directly FLT_EPSILON, DBL_EPSILON, or LDBL_EPSILON, found in float.h.
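For example (my sketch, not bson's code), walking every representable float in [0, 1], subnormals included:
Code: [Select]
#include <math.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t count = 0;
    float x = 0.0f;
    for (;;) {
        count++;                    /* test the value of interest here */
        if (x == 1.0f) break;
        x = nextafterf(x, 2.0f);    /* step to the next representable float */
    }
    printf("%llu values visited\n", (unsigned long long)count);
    return 0;
}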
« Last Edit: June 24, 2024, 07:34:55 pm by bson »
 

Offline nimish

  • Regular Contributor
  • *
  • Posts: 174
  • Country: us
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #59 on: June 25, 2024, 01:22:35 am »
How do you manage to test all floating values between 0 and 1?

In C/C++, you can use nextafter() or nextafterf() to iterate over representable floating point numbers.
https://en.cppreference.com/w/c/numeric/math/nextafter
Or, for numbers with an unbiased exponent of 0 (i.e. in [1, 2)), the spacing between consecutive values is directly FLT_EPSILON, DBL_EPSILON, or LDBL_EPSILON, found in float.h.

Why bother? Just iterate through all possible 32-bit integers and interpret them as floats.
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6514
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #60 on: June 25, 2024, 05:37:45 am »
0x00000000 = 0.0f, and
0x00800000 .. 0x3F800000 = all normal (non-subnormal) positive floats greater than zero, up to and including 1.0f.

As mentioned several times in this thread, in the 0.0f .. 1.0f range (excluding subnormals), there are only 1,056,964,610 values you need to test.  This is exactly what the test bench I posted does.
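In outline, that sweep looks like this (a sketch only, using the word32 union from reply #50; test_one() is a placeholder, not the actual bench.c code):
Code: [Select]
#include <stdint.h>

typedef union { uint32_t u; int32_t i; float f; } word32;

static void test_one(float x) { (void)x; }   /* placeholder: evaluate and compare here */

void sweep(void) {
    test_one(0.0f);                                          /* 0x00000000                 */
    for (uint32_t u = 0x00800000u; u <= 0x3F800000u; u++) {  /* normals up to and incl. 1.0f */
        word32 w = { .u = u };
        test_one(w.f);                                       /* 1,056,964,610 values in total */
    }
}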
 
The following users thanked this post: SiliconWizard

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3187
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #61 on: June 25, 2024, 03:40:52 pm »
As mentioned several times in this thread, in the 0.0f .. 1.0f range (excluding subnormals), there are only 1,056,964,610 values you need to test.  This is exactly what the test bench I posted does.

Yeap, only one billion floats, 90% of which are uselessly small. For comparison, if the same 32 bits of memory are used as unsigned integers, you get 4 billion numbers evenly spaced across the interval, 100+ times better absolute precision, and lower computational cost.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14806
  • Country: fr
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #62 on: June 25, 2024, 10:31:14 pm »
Use the best tool for the job - conventional FP is not always the answer.
In some applications, fixed-point will give you better overall precision.
For a "better" use of bits, while keeping the benefits of FP, one could consider Posits, although I personally don't know of any *commercial* CPU that implements them in hardware as of yet. You can always use software implementations (very slow).
 

Offline radiolistener

  • Super Contributor
  • ***
  • Posts: 3553
  • Country: ua
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #63 on: June 26, 2024, 01:28:47 am »
yes, fixed point with the same bit size has better precision and better resolution for samples

float has unused and duplicated codes
« Last Edit: June 26, 2024, 01:33:33 am by radiolistener »
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6514
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #64 on: June 26, 2024, 12:22:24 pm »
IEEE 754 Binary32 float storage representations:

0x00000000 = +0.0f
0x00000001 .. 0x007FFFFF = positive subnormals, 2⁻¹⁴⁹ .. 8388607×2⁻¹⁴⁹, inclusive; < 2⁻¹²⁶
0x00800000 = 2⁻¹²⁶, smallest positive normal
:
0x3F800000 = +1.0f
0x4B800000 = +16777216.0f, largest integer in the consecutive range
0x4B800001 = +16777218.0f
0x7F7FFFFF = 16777215×2¹⁰⁴ < 2¹²⁸, largest positive value representable
0x7F800000 = +Infinity
0x7F800001 .. 0x7FFFFFFF = +NaN

0x80000000 = -0.0f
0x80000001 .. 0x807FFFFF = negative subnormals, -2⁻¹⁴⁹ .. -8388607×2⁻¹⁴⁹, inclusive; > -2⁻¹²⁶
0x80800000 = -2⁻¹²⁶, negative normal with the smallest magnitude
:
0xBF800000 = -1.0f
0xCB800000 = -16777216.0f, most negative integer in the consecutive range
0xFF7FFFFF = -16777215×2¹⁰⁴ > -2¹²⁸, most negative finite value representable
0xFF800000 = -Infinity
0xFF800001 .. 0xFFFFFFFF = -NaN

Zero is the only repeated numeric value, and only because +0.0f and -0.0f have their own separate representations.

There is exactly one positive infinity and one negative infinity, but 8388607 positive NaNs and 8388607 negative NaNs, and thus 16777214 NaN representations.  Because any arithmetic operation with a NaN yields NaN (but retains the sign of the operation), you have millions of non-numeric "marker" values you can use.  For example, a stack-based single-precision floating-point calculator might use the NaNs to describe the operators (say positive NaNs), and functions (say negative NaNs), and thus only need a single stack of float values.
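As an illustration of that idea (a sketch only, not code from any real calculator), an operator token could be packed into and recovered from a positive quiet-NaN payload like this:
Code: [Select]
#include <stdbool.h>
#include <stdint.h>

typedef union { uint32_t u; int32_t i; float f; } word32;   /* as in reply #50 */

static float make_op(uint32_t opcode) {          /* opcode in 1 .. 0x3FFFFF           */
    word32 w = { .u = 0x7FC00000u | (opcode & 0x003FFFFFu) };   /* positive quiet NaN */
    return w.f;
}

static bool is_op(float v, uint32_t *opcode) {   /* true if v is a positive NaN       */
    word32 w = { .f = v };
    if ((w.u & 0xFF800000u) == 0x7F800000u && (w.u & 0x007FFFFFu) != 0x00000000u) {
        *opcode = w.u & 0x003FFFFFu;
        return true;
    }
    return false;
}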

On architectures like x86, x86-64, any ARM with hardware floating-point support, any Linux architecture with hardware or software floating-point support, you can use the float representation examination tool I posted in reply #37 to explore and check these; I use it constantly when working with floats.

If you want to add a check for architecture support to it, I'd add
    if (sizeof (float) != sizeof (uint32_t) ||
        ((word32){ .u = 0x00000000 }).f != ((word32){ .u = 0x80000000 }).f ||
        ((word32){ .u = 0xC0490FDB }).f != -3.1415927410125732421875f ||
        ((word32){ .u = 0x4B3C614E }).f != 12345678.0f) {
        fprintf(stderr, "Unsupported 'float' type: not IEEE 754 Binary32.\n");
        exit(EXIT_FAILURE);
    }
to the beginning of main().  I do not normally bother, because I haven't had access to a system where that would trigger, in decades.  That includes all my SBCs and microcontrollers.  (Notably, I do not have any DSPs, which are the exception, and actually could still use a non-IEEE 754 Binary32 float type.)

The function in reply #50 yields the difference between any two non-NaN float values, in number of unique non-NaN float representations between the two.  If they are the same value, it returns 0; if they are consecutive representable numeric float values, it returns 1; if there is one representable numeric float between the two values, it returns 2, and so on.  For example, for the difference between the smallest positive subnormal and the largest negative subnormal (storage representations 0x00000001 and 0x80000001) it returns 3.

The key idea in that difference is that the storage representation of a negative float value is subtracted from 0x80000000, which turns it into the integer negative of the storage representation of the corresponding positive float value.  Note that this also maps -0.0f to the same integer representation as +0.0f, namely 0x00000000.  The difference of such modified storage representations, taken as signed integers, is the difference in the number of representable float values described in the previous paragraph.

For radix sorting, the storage representation is kept as a 32-bit unsigned integer.  For representations that have the sign bit clear, you set the most significant bit.  For all other representations, you invert all bits (~).  This keeps the representations of +0.0f and -0.0f separate (0x80000000 and 0x7FFFFFFF, respectively), puts the positive float representations above the negative ones, and reverses the order of the negative representations, ensuring that all non-NaN float values order the same way as their modified unsigned 32-bit storage representations.
After radix sorting, the inverse operation inverts all bits (~) if the most significant bit is clear, and only clears the most significant bit if it is set.  This undoes the modification, restoring the exact original float representations.
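In code, the mapping just described is simply (a sketch operating directly on the 32-bit storage representations):
Code: [Select]
#include <stdint.h>

static uint32_t sort_key(uint32_t u) {           /* storage representation -> sortable key */
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}

static uint32_t sort_key_inverse(uint32_t k) {   /* sortable key -> storage representation */
    return (k & 0x80000000u) ? (k & 0x7FFFFFFFu) : ~k;
}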

I, too, use fixed point types quite often.  However, this thread is about the specific case when you have hardware float support, and want to leverage that instead.  So, commenting that one should use fixed-point here instead of hardfloat is an apples-to-oranges replacement suggestion.
 

Offline nimish

  • Regular Contributor
  • *
  • Posts: 174
  • Country: us
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #65 on: June 26, 2024, 08:10:57 pm »
0x00000000 = 0.0f, and
0x00800000 .. 0x3F800000 = all normal (non-subnormal) positive floats greater than zero, up to and including 1.0f.

As mentioned several times in this thread, in the 0.0f .. 1.0f range (excluding subnormals), there are only 1,056,964,610 you need to test.  This is exactly what the test bench I posted does.

The difference between 1B and 4B is meaningless; plus, it's useful to see what happens with the various NaNs and special bit patterns.

Anyway it's a moot point.

Also "uselessly small" is use-case dependent.
 

Offline bson

  • Supporter
  • ****
  • Posts: 2309
  • Country: us
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #66 on: June 27, 2024, 10:05:23 pm »
Fixed-point obviously has better precision, but it has terrible range and doesn't do proper arithmetic rounding.
Fixed-point is also always available, and I don't particularly object to using it in an interrupt handler - whereas saving and restoring FP state on interrupt makes me cringe (even if it's irrational).
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6514
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #67 on: June 28, 2024, 05:08:20 pm »
proper arithmetic rounding
Because fixed-point addition and subtraction are their integer equivalents, rounding is only applied for multiplication and division (and conversion to and from other formats like strings).

Rounding halfway cases away from zero –– which I personally prefer –– only requires the most significant dropped bit (of the absolute value), which is added to the absolute value of the product.  This always breaks ties away from zero.
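For example, a Q16.16 multiplication with ties rounded away from zero (a sketch, not from any particular library):
Code: [Select]
#include <stdint.h>

/* The most significant dropped bit (0x8000) is added to the absolute value of
   the exact product before dropping the low 16 bits. */
static int32_t q16_mul(int32_t a, int32_t b) {
    int64_t p   = (int64_t)a * (int64_t)b;      /* exact Q32.32 product */
    int64_t mag = (p < 0) ? -p : p;
    mag = (mag + 0x8000) >> 16;
    return (int32_t)((p < 0) ? -mag : mag);
}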

Always-truncating operations, i.e. rounding all results towards zero, are sometimes preferred, however.  For addition, subtraction, multiplication, and square root, you then know your result is either the representable value closest to the exact result, or one less than that.  If you have a way to check, you can always produce the exact result to within the precision available!  This is particularly useful for exact square and cube roots.  When the operations use any other rounding method, you have at minimum two other possibilities for the exact correct result, so checking becomes at least twice as expensive.

On hardware that supports it, using C99 fesetround(FE_TOWARDZERO) via <fenv.h> can be very, very useful because of that.  Do not ignore the value of always rounding results towards zero!
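A minimal sketch of using it (compiler support for the FENV_ACCESS pragma varies, and the function name is mine):
Code: [Select]
#include <fenv.h>
#pragma STDC FENV_ACCESS ON

void compute_with_truncation(void) {
    const int saved = fegetround();
    fesetround(FE_TOWARDZERO);      /* results now round towards zero */
    /* ... the operations whose results should be truncated ... */
    fesetround(saved);              /* restore the previous rounding mode */
}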
 
The following users thanked this post: SiliconWizard

Offline Tation

  • Regular Contributor
  • *
  • Posts: 66
  • Country: pt
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #68 on: June 29, 2024, 05:20:32 pm »
May be of interest to somebody: https://galileo.phys.virginia.edu/classes/551.jvn.fall01/goldberg.pdf

On page 8 it justifies ULP as a measure of error and provides a formula to compute it. The formula can give error values as non-integer ULPs.
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6514
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #69 on: June 29, 2024, 06:21:30 pm »
It also explains why the preferred method of breaking ties when rounding is to round exact halfway cases to even: it stops the slow drift in repeated operations that is possible when halfway cases are always rounded up.
 

