If one gets good time-domain data, one can do essentially the same analysis numerically: start with a Hilbert transformation of some kind to obtain phase data. This can include a mixing step (I/Q-like) to also move down to a lower frequency band; that is the easy way to do the Hilbert transformation. One then gets phase and amplitude data on a somewhat slower time scale, which is usually sufficient and gives a nice reduction in data rate without losing significant information.
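As a sketch of that numerical route (all signal parameters here are illustrative assumptions, and scipy.signal.hilbert stands in for "a Hilbert transformation of some kind"):

```python
# Sketch: amplitude and phase from sampled data via the analytic signal.
# fs, f0, duration, and the toy phase modulation are assumptions.
import numpy as np
from scipy.signal import hilbert

fs = 100_000                     # sample rate, Hz (assumed)
f0 = 1_000                       # nominal carrier, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
# toy v(t) with a small 10 Hz phase wobble standing in for phi(t)
v = 1.0 * np.cos(2 * np.pi * f0 * t + 0.01 * np.sin(2 * np.pi * 10 * t))

z = hilbert(v)                   # analytic signal v(t) + j*H{v}(t)
amplitude = np.abs(z)            # envelope estimate, V0 + e(t)
phase = np.unwrap(np.angle(z))   # total phase, w0*t + phi(t)
phi = phase - 2 * np.pi * f0 * t # residual phase after removing the carrier

# decimate to a slower time scale, reducing the data rate
decim = 100
phi_slow = phi[::decim]
amp_slow = amplitude[::decim]
```

The decimation step is where the "somewhat slower time scale" comes from: the residual phase is narrowband, so little information is lost.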
A simple way to obtain fractional frequency data is to count the number of zero crossings in an averaging interval measured by a reference clock. This was one of the earliest techniques used to measure oscillator stability, and it is the first thing I plan to do. However, thinking about this last night has raised some issues that I think may prevent the use of this data to determine phase noise. Since some reading this thread are not familiar with stochastic processes, I am going to go through this slowly with some tutorial information. Those familiar with stochastic processes can skip over the next 5 paragraphs.
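A minimal sketch of the zero-crossing counting idea, assuming a sampled waveform whose sample clock plays the role of the reference (the helper name and all parameters are my own, for illustration):

```python
# Sketch: average frequency over an interval from positive-going zero
# crossings, timed by the sample clock. Parameters are assumptions.
import numpy as np

def avg_frequency_from_zero_crossings(v, fs):
    """Estimate average frequency (Hz) from negative-to-positive crossings."""
    neg = np.signbit(v)
    # sample indices where the signal goes from negative to non-negative
    crossings = np.flatnonzero(neg[:-1] & ~neg[1:])
    if len(crossings) < 2:
        return float("nan")
    n_cycles = len(crossings) - 1                 # whole cycles elapsed
    elapsed = (crossings[-1] - crossings[0]) / fs # time between crossings
    return n_cycles / elapsed

fs = 1_000_000        # reference sample rate, Hz (assumed)
f0 = 10_000.0         # oscillator frequency, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
v = np.cos(2 * np.pi * f0 * t)

f_est = avg_frequency_from_zero_crossings(v, fs)
y = (f_est - f0) / f0   # fractional frequency offset over the interval
```

Note that the result is one number per averaging interval, which is exactly the limitation the questions below turn on.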
Restating the basic equation for a real oscillator:

v(t) = [V_0 + e(t)] * cos[w_0*t + phi(t)]
Both e(t) and phi(t) are stochastic processes (see this post). Assume for the moment that these are stationary processes (if not, things get worse) and concentrate on phi(t). The mathematical framework for stochastic processes presumes that for any constant value of t, say t=t_i, phi() is a random variable. Its pdf has moments, such as its mean and variance. In order to estimate these (we are speaking about phi(t=t_i) here), you would have to sample the random variable several times and compute the estimate according to its definition. For example, to compute the mean of phi(t=t_i), you would sample it N times, summing the values and dividing by N.
But that isn't possible in reality. You can only sample this random variable once, since it is only accessible at t=t_i. To sample it more than once, you would have to travel to N parallel universes in which the random variable exists and obtain N samples that way (something that is obviously impossible according to currently validated physical theories). The standard terminology for the moments of phi(t=t_i) is its ensemble moments (e.g., ensemble mean, ensemble variance).
Instead of sampling the same random variable over and over again, you can sample phi(t) at different times, e.g., sample phi(t=t_i), then phi(t=t_(i+1)), .... If you obtain N such samples you can treat them as if they came from the same random variable and compute moments. So, you can compute the mean by adding the values returned and dividing by N. These moments are called time moments (e.g., time mean, time variance).
But, what can you do with these moments? It turns out if the stochastic process is ergodic, then the time moments equal the ensemble moments. All ergodic processes are stationary, but not all stationary processes are ergodic. So, in order to use this technique to estimate ensemble moments, you have to show the stochastic process is ergodic.
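The distinction can be illustrated numerically. A sketch contrasting an ergodic process (white noise) with the textbook stationary-but-non-ergodic process X(t) = A, a random constant drawn once per realization (all parameters are assumptions):

```python
# Sketch: time moments vs. ensemble moments.
# Rows = realizations ("parallel universes"), columns = time samples.
import numpy as np

rng = np.random.default_rng(0)
N, T = 2000, 2000   # number of realizations and time samples (assumed)

# Ergodic example: zero-mean white noise.
noise = rng.standard_normal((N, T))
time_mean_one_path = noise[0].mean()    # average one realization over time
ensemble_mean_t0 = noise[:, 0].mean()   # average all realizations at t=t_0
# Both converge to the true mean, 0: time moments = ensemble moments.

# Stationary but NON-ergodic: a random constant level per realization.
A = rng.standard_normal(N)
const = np.repeat(A[:, None], T, axis=1)
time_mean_const = const[0].mean()        # equals A[0] exactly, not E[A]
ensemble_mean_const = const[:, 0].mean() # converges to E[A] = 0
```

For the non-ergodic process, no amount of time averaging of one realization recovers the ensemble mean, which is why ergodicity has to be established before the zero-crossing data can be trusted to estimate ensemble moments.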
OK, enough tutorial material. The first question is: do practical oscillators represent ergodic processes? I have seen it stated in several places that their associated processes are stationary, but I have not seen anywhere that they are ergodic.
The second question centers on the relationship between instantaneous frequency and instantaneous phase, f(t) = d/dt[phi(t)]. What does this mean when phi(t=t_i) is a random variable? Not clear. In order to compute the derivative, you have to take the limit as h->0 of [phi(t+h)-phi(t)]/h. But this usually presumes phi(t) is a continuous function in the vicinity of t. Random variables are not functions in this sense. They return different values each time they are "accessed", so I don't know how to compute this derivative.
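For what it's worth, numerical practice sidesteps (rather than answers) the ensemble question: the derivative is taken on a single sampled realization of phi(t), treated as an ordinary sequence, via a finite difference. A sketch under assumed parameters:

```python
# Sketch: instantaneous frequency of ONE realization of phi(t) by
# finite difference. The sample rate and toy phase are assumptions.
import numpy as np

fs = 100_000.0                           # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
phi = 0.5 * np.sin(2 * np.pi * 50 * t)   # a smooth sampled phase realization

# forward difference approximates d/dt[phi(t)] on the sample grid;
# dividing by 2*pi converts rad/s to Hz deviation from nominal
f_inst = np.diff(phi) * fs / (2 * np.pi)
```

This differentiates one sample path, not the random variables themselves, so it leans on the same ergodicity assumption questioned above.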
The third question relates to the averaging of instantaneous frequency (corrected 6-16-18) during the zero-crossing counting process. The result is an average of the instantaneous frequency over the averaging interval. I don't know how to use this to get the average phase angle during the same interval. Someone more knowledgeable (corrected 6-16-18) than I will have to provide an argument (either for or against) that the derivative of a random variable average equals the average of the random variable derivative (whatever that means). Specifically (using <> to indicate the mean of a random variable): d/dt[<phi(t)>] = <d/dt[phi(t)]> = <f(t)>.
The fourth question is, even if the last equation is true, how is phi(t) obtained? Normally, you would integrate d/dt[<phi(t)>] to get <phi(t)>. But, d/dt[<phi(t)>] = <f(t)> is a single number, not a function or even a set of values that you can sum to derive an estimate of the integral.
In summary, it seems to me you can't derive the average phase angle over a sample interval when you have measured the average frequency over that same interval. However, I would be happy to be disabused of this opinion.