Which, FWIW, are equivalent to fractions of:
8068423/1073741824
2779529/16777216
8386063/8388608
Put another way, they are 8068423*2**-30, 2779529*2**-24, and 8386063*2**-23.
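(A quick C sanity check, just to illustrate; the names c0..c2 are mine. Each numerator fits in a float's 24-bit significand and each denominator is a power of two, so all three fractions are exactly representable in single precision:)

```c
#include <stdio.h>

int main(void) {
    /* Numerators fit in float's 24-bit significand and the denominators
       are powers of two, so these divisions are exact in single precision. */
    float c0 = 8068423.0f / 1073741824.0f; /* 8068423 * 2**-30 */
    float c1 = 2779529.0f / 16777216.0f;   /* 2779529 * 2**-24 */
    float c2 = 8386063.0f / 8388608.0f;    /* 8386063 * 2**-23 */
    printf("%.9g\n%.9g\n%.9g\n", c0, c1, c2);
    return 0;
}
```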
If one uses truncating (and not rounding) math, I fully believe that the exact best solution is very near the Remez coefficients, just a few ULPs away, due to the quantization to single-precision floating point and rounding differences.
If anyone is wondering:
When approximating logarithms, we are relying on the key observation: log_b(a*b) = log_b(a) + log_b(b).
Although the base b=2 makes the most sense when dealing with binary-radix floating-point numbers, it does extend to any b via log_a(x) = log_b(x)/log_b(a). And since log_2(e) ≃ 1.4426950408889634073599246810 and log_2(10) ≃ 3.32192809488736234787031942948939, using any other base is just one multiplication by a constant (the reciprocal of one of these) away, so we don't even need to worry about natural or base-10 logarithms; we get them for basically free if we handle base 2.
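(For example, a minimal C sketch; log2f here just stands in for whatever base-2 approximation one has:)

```c
#include <math.h>
#include <stdio.h>

/* One constant multiplication converts between bases:
   ln(x)    = log2(x) / log2(e)  = log2(x) * ln(2)
   log10(x) = log2(x) / log2(10) = log2(x) * log10(2) */
int main(void) {
    const float LN_2    = 0.693147180559945f; /* 1/log2(e)  */
    const float LOG10_2 = 0.301029995663981f; /* 1/log2(10) */
    float x = 7.5f;
    float l2 = log2f(x);
    printf("ln(%g)    = %.7f (libm: %.7f)\n", x, l2 * LN_2,    logf(x));
    printf("log10(%g) = %.7f (libm: %.7f)\n", x, l2 * LOG10_2, log10f(x));
    return 0;
}
```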
Because log_2(2) = 1 and log_2(1/2) = -1, we can multiply the argument x by any integer power of two 2**y, and use log_2(x) = log_2(x*2**y) - y. This is why we only really need to handle a small range of arguments.
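(A minimal C illustration of this, using libm's frexpf() for the exponent split; that choice is my assumption, real implementations usually extract the exponent with bit tricks instead:)

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    float x = 1234.5f;
    int y;
    /* frexpf() splits x into m * 2**y with 0.5 <= m < 1, so
       log2(x) = log2(m) + y and only m needs approximating. */
    float m = frexpf(x, &y);
    printf("x = %.6f * 2**%d\n", m, y);
    printf("log2(x) = %.6f + %d = %.6f\n", log2f(m), y, log2f(m) + (float)y);
    return 0;
}
```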
The reason 0.7071 ≃ sqrt(1/2) ≤ x ≤ sqrt(2) ≃ 1.4142 is chosen as the range is that 2*sqrt(1/2) = sqrt(2) (and 1/sqrt(2) = sqrt(1/2), and 1/sqrt(1/2) = sqrt(2)). That is, if you have a positive real number r, you can always multiply it by an integer power of two, 2**y where y is an integer, to get sqrt(1/2) ≤ r*2**y ≤ sqrt(2). Of course you can pick an even larger range to approximate (just like you don't need to restrict to |x| ≤ PI/2 for approximating sine), but the approximation will be more complex than it has to be.
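(In code, that reduction might look like this; a sketch of my own, starting from frexpf's [0.5, 1) mantissa and doubling once when it falls below sqrt(1/2):)

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    const float SQRT_HALF = 0.70710678f; /* sqrt(1/2) */
    float x = 0.0375f;             /* any positive float works here */
    int y;
    float r = frexpf(x, &y);       /* x = r * 2**y, 0.5 <= r < 1 */
    if (r < SQRT_HALF) {           /* shift r into [sqrt(1/2), sqrt(2)) */
        r *= 2.0f;
        y -= 1;
    }
    printf("x = %.6f * 2**%d\n", r, y);
    printf("log2(x) = log2(%.6f) + %d = %.6f\n", r, y, log2f(r) + (float)y);
    return 0;
}
```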
What I am wondering myself is whether absolute or relative error is the "better" error metric here. log_2(sqrt(1/2)) = -1/2 and log_2(sqrt(2)) = +1/2, with log_2(1) = 0. Also, log_2(x) = -log_2(1/x), so we definitely have symmetry here.