Here is a simple algorithm that will generate a sine wave with 12.433075... samples per cycle. The amplitude of the sine wave will vary a little over time, but will never differ by more than 1 ppm from its initial value, no matter how long it runs. That is more than good enough to test a 14 bit DAC, and it exercises almost every code.
x = 2079038281;
a = 0;
for (;;) {
    x += (a >> 1);
    a -= (x >> 1);
}
'x' and 'a' are 32 bit signed integers. The DAC samples are the upper 14 bits of either 'x' or 'a' and have an amplitude of 8191. To avoid a 1/2 LSB DC offset in the output, add 0x20000 to the value in 'x' or 'a' before extracting the upper 14 bits.
How does it work?
This is an algorithm that I discovered while playing around with numbers back in the 1980s. I was writing DSP code for voice-band modems at the time, on quite limited hardware, so tricks like this looked interesting.
There is nothing special about the initial value for x. It just sets the amplitude, although some choices can result in a little more variation in the amplitude with time than others. There is also nothing special about the right shift. It is just a 32 bit multiply by 0.5, which by the maths below sets \$2 sin(\theta / 2) = 0.5\$, so the frequency works out to \$\pi / arcsin(0.25) \approx 12.433075\$ samples per cycle.
An example of the more general algorithm that can generate any arbitrary frequency with no significant long term amplitude drift is:
p = 2 * sin(pi / n);
x = 1;
a = -p / 2;
for (;;) {
    x += a * p;
    a -= x * p;
}
Here x has an amplitude of 1, and n is the number of samples per cycle of the generated sine-wave. The accumulated amplitude error over time is typically less than 1:\$10^{12}\$ after \$10^{12}\$ samples, with all variables as IEEE 80-bit extended precision floating point.
The algorithm actually works with any fixed point or floating point number system.
Note that x and a are not orthogonal if regarded as simultaneous samples. However, they are orthogonal as time interleaved samples, where the in-phase and quadrature DACs are alternately updated.
The key to the amplitude stability of this algorithm is that the frequency is set by a single constant, so the fact that this constant is of finite precision has no impact. The only errors are those due to finite arithmetic, and those errors have no intrinsic bias to either increase or decrease the amplitude. The accumulated amplitude error has the appearance of a random walk, so the area of the annulus containing the orthogonalised equivalent to x and a (let's call these x and y) only increases with the square root of the number of steps. Therefore over time, the probability of the algorithm landing on an x, y pair that it has visited before increases. Once this happens there will obviously be no further increase in the peak to peak variation of the amplitude.
Here is the maths (if I haven't fouled up the MathJax).
Assuming an amplitude of 1, a starting phase of zero and an angular increment of \$\theta\$, at iteration n we have:
$$
x_n=sin(n \theta) \\
y_n=cos(n \theta - \theta / 2)
$$
Using the standard expansion of \$cos(A-B)\$, \$y_n\$ can be written as
$$
y_n=cos(n \theta)cos(\theta/2 ) + sin(n \theta)sin(\theta/2)
$$
Similarly, \$y_{n+1}\$ can be written as:
$$
y_{n+1}=cos(n \theta + \theta / 2) \\
y_{n+1}=cos(n \theta)cos(\theta/2 ) - sin(n \theta)sin(\theta/2)
$$
So \$y_{n+1}\$ can be calculated from \$y_n\$, \$x_n\$ and the constant \$2sin(\theta/2) \$:
$$
y_{n+1}=y_n - 2 sin(\theta/2)x_n
$$
Using the standard expansion of \$sin(A-B) \$, \$x_n \$ can be written as:
$$
x_n=sin(n \theta)=sin(n \theta + \theta / 2 - \theta / 2) \\
x_n=sin(n \theta + \theta / 2)cos(\theta / 2) - cos(n \theta + \theta / 2)sin(\theta / 2)
$$
Similarly, \$x_{n+1} \$ can be written as:
$$
x_{n+1}=sin(n \theta + \theta / 2 + \theta / 2) \\
x_{n+1}=sin(n \theta + \theta / 2)cos(\theta / 2) + cos(n \theta + \theta / 2)sin(\theta / 2)
$$
So \$x_{n+1} \$ can be calculated from \$x_n \$, \$y_{n+1} \$ and the constant \$ 2sin(\theta/2) \$:
$$
x_{n+1}=x_n + 2sin(\theta/2)y_{n+1}
$$