Hi all,
Thanks for the constructive suggestions.
There isn't nearly enough detail about the requirements here to make any useful suggestions. And Jeremy doesn't appear to understand what "chopping" means (i.e., that it does NOT affect the actual signal being measured), and he offers other seemingly arbitrary limitations without explaining WHY he thinks they are limits.
Richard, what information would you like? I am not sure what you mean by arbitrary limits. I have calculated the voltage magnitude of the information and the frequency range; now I want to try to measure it in real life.
I am very much aware of what a chopper does, and it will not help, as outlined in ejeffrey's post. However, we are talking about optical chopping; do you mean chopping as in a chopper amplifier? Optical chopping definitely affects the signal; that is the point of using it.
The reason I said "no chopping" is because that is usually the first thing that people suggest in optical instrumentation problems (myself included).
You can servo it with a very low filter frequency, just like they do with hi-fi amps. You use an integrator with any time constant you want to drive the signal back to zero DC.
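For illustration, here is a minimal discrete-time sketch of that servo idea (all names and constants here are hypothetical, not values from this thread): an integrator accumulates the output and feeds a correction back, so the DC component is slowly driven to zero while faster signal content passes through.

```python
# Discrete-time sketch of a DC servo: an integrator slowly nulls the
# DC component while fast variations pass through largely untouched.
# k_int is illustrative; it sets the servo's effective time constant.

def dc_servo(samples, k_int=0.01):
    """Subtract a slowly integrated correction from each sample.

    Smaller k_int gives a longer time constant, which disturbs
    low-frequency signal content less but settles more slowly.
    """
    correction = 0.0
    out = []
    for x in samples:
        y = x - correction          # servo output
        correction += k_int * y     # integrator drives DC toward zero
        out.append(y)
    return out

# A 2 V offset plus a small 1 mV step: the servo removes the offset,
# and the step appears at the output before slowly decaying away.
signal = [2.0] * 1000 + [2.0 + 0.001] * 1000
cleaned = dc_servo(signal)
```

The trade-off is exactly the one discussed later in the thread: the servo is itself a high-pass, so signal content below the servo bandwidth is eventually subtracted too.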
My 1A7A Tek plug-in (ancient) has a manual offset control with coarse and fine adjustments. Works well, but a bit of a PITA compared to a filter solution.
I thought he wanted to get away from having a low frequency cutoff. I have done the same thing in correcting baseline DC drift automatically using an integrator or even sampled system.
This is exactly what I am trying to do.
I am interested in looking into this servoed system, but also running the output through a sample and hold rather than continuously subtracting the filter output. Thanks for the idea.
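One way to picture the sample-and-hold variant (a rough sketch with hypothetical names, not the poster's actual implementation): estimate the offset during a baseline interval where the signal is assumed quiet, freeze that value, and subtract it from everything that follows, so the subtraction no longer tracks the signal itself.

```python
def sample_hold_subtract(samples, n_baseline):
    """Average the first n_baseline samples (signal assumed quiet),
    hold that average, and subtract it from the whole record."""
    held_offset = sum(samples[:n_baseline]) / n_baseline
    return [x - held_offset for x in samples]

# 2 V baseline, then a few hundred microvolts of "information".
data = [2.0, 2.0, 2.0, 2.0, 2.0003, 2.0001]
corrected = sample_hold_subtract(data, 4)   # offset held at 2.0
```

Unlike the continuous servo, the held correction does not act as a high-pass during the measurement, at the cost of not tracking any drift that occurs after the hold.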
I am dubious about the original poster's noise and drift requirements, because amplifying a 100 microvolt signal to 10 volts for an 18-bit ADC yields an input-referred least significant bit of 0.381 nanovolts, which is close to state of the art. The best integrated operational amplifiers have drift and noise that are two or three orders of magnitude higher than that.
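The arithmetic behind that LSB figure, spelled out (the gain of 100,000 is implied by amplifying 100 µV to 10 V):

```python
full_scale = 10.0           # ADC full-scale range in volts
bits = 18
gain = 10.0 / 100e-6        # 100 uV amplified to 10 V -> gain of 100,000

lsb_at_adc = full_scale / 2**bits     # ~38.1 uV per code at the ADC
lsb_at_input = lsb_at_adc / gain      # ~0.381 nV referred to the input
```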
Yes, you are right to see them as dubious. I am only using the 18-bit ADC because that is what I have available. I also have other techniques to reduce the noise (averaging, non-linear filtering). I do not actually expect to get less than nanovolt resolution, though. Even having one-shot resolution of 1 µV would be awesome as far as I am concerned!
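As a rough illustration of how averaging buys back resolution (assuming uncorrelated noise, which real 1/f drift will violate, so this is a best case): averaging N samples reduces the RMS noise by √N. All values below are made up for the example.

```python
import random

random.seed(0)
true_value = 1e-6        # hypothetical 1 uV "signal"
noise_rms = 10e-6        # hypothetical 10 uV RMS white noise

def measure():
    """One noisy single-shot reading of the signal."""
    return true_value + random.gauss(0.0, noise_rms)

# Averaging 10,000 uncorrelated samples cuts the noise by ~100x,
# pulling a 10 uV-noise reading down toward ~0.1 uV uncertainty.
n = 10_000
estimate = sum(measure() for _ in range(n)) / n
```

In practice the improvement flattens out once correlated noise (drift, 1/f) dominates, which is exactly why the thread keeps coming back to offset subtraction rather than averaging alone.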
--
I feel that this problem doesn't really need much more context, but I am willing to learn if someone can point out the problems. I basically have a very small signal with a big signal added. The big signal is DC; the little signal carries some time-varying information. I want the offset to go away without just high-passing, but it is hard because I need the offset subtraction to be really precise and changeable between experiments.

Even though I am measuring lasers, this would be the same if I were, for instance, measuring the magnetic field inside a coil, measuring the conductivity of a solution undergoing a multistage reaction, measuring neuron impulses, etc. I completely understand that one can mathematically subtract an offset, but I was looking for someone with experience doing this in the real world to give me some pointers on how to actually pull it off, as I do not have much experience with very low drift/offset amplifiers.
And just to be super clear, there are no extra detectors or anything. I am just measuring the terminal voltage of the semiconductor laser.
Thanks