Now that I once again own a Seek Thermal product, I am taking more of an interest in the Seek cores that are currently shipping and thought I would start a discussion on measurement accuracy.
First, some background on microbolometer temperature measurement for those readers unaware of what is involved.
A microbolometer is really just an array of temperature-sensitive resistors. These may each have a slightly different response to the same amount of thermal energy, and that response can itself differ at different temperatures within the operational range of the microbolometer. To deal with non-uniformity in the microbolometer array, a NUC (Non-Uniformity Correction) test is carried out and a compensation table created. This table takes account of the different pixel thermal responses within the array. NUC tables can be created for a single temperature range or for multiple ranges, depending upon the camera design. The result of applying a NUC table to the microbolometer output is a nice flat image with no visible Delta T indication across its surface when viewing a thermally flat scene. Dead pixels are also dealt with during NUC table creation, but that is not relevant here.
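To make the idea concrete, here is a little Python sketch of a two-point (gain and offset) NUC. Everything in it is invented for illustration: the array size, the gain/offset spread and the flux values are not from any real Seek core, and real NUC schemes are more involved than this.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)                     # tiny stand-in for a real FPA

# Per-pixel responsivity and offset spread: the non-uniformity
pixel_gain = 1.0 + 0.1 * rng.standard_normal(shape)
pixel_offset = 50.0 * rng.standard_normal(shape)

def raw_frame(flux):
    """Simulated raw FPA output for a uniform scene of the given flux."""
    return pixel_gain * flux + pixel_offset

# NUC test: view two uniform (blackbody) scenes of known flux
flux_lo, flux_hi = 1000.0, 2000.0
raw_lo, raw_hi = raw_frame(flux_lo), raw_frame(flux_hi)

# Solve a per-pixel correction so that corrected output equals true flux
nuc_gain = (flux_hi - flux_lo) / (raw_hi - raw_lo)
nuc_offset = flux_lo - nuc_gain * raw_lo

def apply_nuc(raw):
    """Apply the stored NUC table to a raw frame."""
    return nuc_gain * raw + nuc_offset

# A thermally flat scene now comes out flat: no visible Delta T
corrected = apply_nuc(raw_frame(1500.0))
print(float(corrected.std()))      # effectively zero spread across the array
```

The 'table' here is just the pair of per-pixel gain and offset arrays; applying it flattens any uniform scene, which is exactly the flat image described above.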
So now we have a microbolometer that the camera system 'knows', and any pixel response offsets that exist can be corrected. There is a further correction applied to the output of the microbolometer in the form of Flat Field Correction (FFC). This may be considered a 'touch-up' or fine-tuning function that helps to maintain the camera's flat field response during use as pixels begin to drift. Have I mentioned how badly behaved the little microbolometer pixels can be? No? Well, they can be!
So what can disrupt the harmony of a well corrected and 'levelled' microbolometer imaging array whilst in use?
Well, the pixels will naturally drift by a small amount when operating, and there can be minor self-heating due to the pulsed read current applied to them (this should be minimal, however). The other issue is die temperature. If the microbolometer die is not temperature stabilised, it operates in a thermal equilibrium mode whose temperature is dictated by the ambient temperature around it, self-heating, and the thermal energy entering the lens system that illuminates it. It is a fact of life that as the die temperature drifts, the pixels can drift at different rates or to different extents. The NUC was carried out at a set ambient temperature; it is not normally dynamic in its response to ambient temperature change. For this reason the FFC is used to flatten the array's output at regular intervals to keep everything pretty at the camera output. The temperature of the FFC flag is also known to the camera, so a temperature calibration can be carried out to correctly compensate for changes in ambient temperature. The microbolometer array also has temperature sensors on it, so the system can monitor changes there as well. There are modern thermal cameras that do not use a mechanical FFC correction process, but they are not really relevant to this discussion as the Seek cores use a mechanical FFC flag.
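A rough sketch of what an FFC event does, in the same spirit as the NUC example. This is my assumption of the typical scheme, not Seek's actual firmware: with the shutter (flag) closed, the array sees a flat scene, so any per-pixel deviation from the frame mean must be drift, and is stored as an offset to subtract from subsequent frames.

```python
import numpy as np

rng = np.random.default_rng(1)
drift = 5.0 * rng.standard_normal((4, 4))   # pixel drift since the last FFC

def read_array(scene):
    """Simulated readout: the accumulated drift rides on every frame."""
    return scene + drift

# FFC event: shutter closed, array 'blinded' by a flat scene at the
# flag temperature (800.0 is an arbitrary count value for illustration)
shutter_frame = read_array(np.full((4, 4), 800.0))
ffc_offset = shutter_frame - shutter_frame.mean()

def corrected(scene):
    """Live frame with the stored FFC offsets subtracted."""
    return read_array(scene) - ffc_offset

flat = corrected(np.full((4, 4), 900.0))
print(float(flat.std()))   # effectively zero again: drift flattened out
```

Note that this only restores uniformity; the absolute level of the frame is still uncertain, which is where knowing the flag temperature comes in.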
So, in précis: the NUC calibration captures non-uniformity in the pixels at a certain die temperature, and the regular FFC event keeps the pixel response relatively 'flat' as the die temperature changes or pixels naturally drift. The 'knowns' with regard to temperature are those of the air around the core, the microbolometer die, the FFC flag and the lens system. In some simple systems, a single temperature sensor on the camera's chassis is used as the reference for ambient, FFC flag and lens temperature. This can mean that the FFC flag temperature is, in reality, slightly different from the assumed value due to its physical location within the camera, internal convection air currents etc.
OK, so now that we have got some of the calibration and compensation theory out of the way, what about temperature measurement?
As you likely know, a microbolometer pixel may be used as a relatively accurate temperature measurement sensor. To be such a radiometric measurement sensor it needs to be characterised so that its response to thermal energy is known, in order to calculate radiance and, from that, temperature.
It is important to characterise the pixel response to thermal energy. This is normally done at a set ambient temperature with the die stable, either running in thermal equilibrium or actively temperature controlled.
The pixel under test is pulse-biased to avoid self-heating, and its output is monitored as it is sequentially exposed to known thermal energy in small increments. An energy-in vs signal-out response plot will result for that pixel, but for that pixel alone! In an FPA there are a lot of pixels! The output from each pixel will need to be plotted at each energy point and a thermal response 'map' created. If it is found that the energy response plot is pretty much the same across all pixels in the array, a generic response plot may be applied to production cores. Note that I am deliberately ignoring the blind pixels that also form part of a microbolometer's measurement system.
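The characterisation step can be sketched like this: step a known stimulus, record the output, fit a response curve, and then invert that curve to recover energy from a measured output. The quadratic model and all the numbers are purely illustrative assumptions; real microbolometer response models are different and manufacturer-specific.

```python
import numpy as np

# Known stimulus steps presented to the pixel under test (arbitrary units)
energy = np.linspace(100.0, 1000.0, 10)

def true_response(e):
    """Hypothetical pixel: slightly non-linear counts vs incident energy."""
    return 0.0002 * e**2 + 1.5 * e + 40.0

output = true_response(energy)               # 'measured' pixel counts

# Fit the energy-in vs signal-out response plot (quadratic assumed here)
coeffs = np.polyfit(energy, output, deg=2)

def energy_from_output(counts):
    """Invert the fitted plot: measured counts back to incident energy."""
    a, b, c = coeffs
    roots = np.real(np.roots([a, b, c - counts]))
    return float(roots[roots > 0][0])        # take the physical root

print(round(energy_from_output(true_response(550.0)), 1))  # → 550.0
```

In a real characterisation this would be repeated per pixel (or verified to be near-identical across the array, justifying a single generic plot, as described above).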
Calibration is expensive and can be time consuming. For this reason it is not uncommon for a camera to undergo only a single-point or dual-point calibration. As stated above, if a particular microbolometer FPA series exhibits a relatively even response to varying energy levels across its pixels, a generic energy vs output plot can be used to predict the output from the pixels when measuring the energy in a scene presented to them. This plot needs references, however; without them it is just a floating response plot. The temperature calibration process provides one or two reference points for application of the response plot. A two-point calibration is preferred. This may be an ambient and an ambient-plus-30 C blackbody calibration that 'anchors' the response plot to known thermal stimulus at a known ambient and die temperature. In some cases the calibration points can be much further apart, at say +10 C and close to the maximum temperature capability of the camera's current range. If more than one temperature range is available on the camera, the calibration will need to be repeated for each, as there are changes to the microbolometer biases. After a two-point calibration with known die temperature, the camera's radiometric measurement system can measure the radiance at any point in its range coverage and convert this to whatever thermal units are desired. There will, of course, be the potential for error when relying upon a generic calibration plot. To improve accuracy, more calibration points can be used to fine-tune the energy vs pixel output table for a specific camera system. More calibration points improve accuracy but are expensive in terms of time.
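The two-point 'anchoring' idea can be shown with a toy model. The assumption here, my own for illustration, is that the generic plot fixes only the shape of the response, and the two blackbody views solve for a per-camera gain and offset. The T^4 stand-in curve and all values are invented.

```python
def generic_radiance(temp_c):
    """Generic response plot shape: relative radiance vs scene temperature."""
    t = temp_c + 273.15
    return (t / 300.0) ** 4          # crude T^4 stand-in, not a real model

# This particular camera's true (unknown to us) gain and offset
true_gain, true_offset = 1250.0, 310.0

def camera_counts(temp_c):
    """Simulated camera output viewing a blackbody at temp_c."""
    return true_gain * generic_radiance(temp_c) + true_offset

# Two-point calibration: ambient (25 C) and ambient + 30 C blackbodies
t1, t2 = 25.0, 55.0
c1, c2 = camera_counts(t1), camera_counts(t2)
gain = (c2 - c1) / (generic_radiance(t2) - generic_radiance(t1))
offset = c1 - gain * generic_radiance(t1)

def counts_to_temp(counts):
    """Invert the anchored plot back to scene temperature in Celsius."""
    rel = (counts - offset) / gain
    return 300.0 * rel ** 0.25 - 273.15

# Any temperature in range can now be recovered, not just the two anchors
print(round(counts_to_temp(camera_counts(40.0)), 2))  # → 40.0
```

The catch, as noted above, is that if the real camera's response shape deviates from the generic plot, the two anchors cannot correct for it; only more calibration points can.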
You will recall that I mentioned that the temperature of the camera's interior, FPA die, FFC shutter and lens is often monitored by the camera system. This is to improve measurement accuracy, as all of these temperatures can impact upon the measurement. The ambient temperature can affect the core holistically, as thermal sensitivities can exist in many components that support the FPA and measurement system. The die temperature is an essential data point for the measurement process. The FFC flag 'blinds' the microbolometer to provide a 'flat field' reference scene against which to correct pixel outputs. That flag will be at a certain temperature, and if this is measured, the value can be used to provide fine calibration to the measurement system. Lens temperature can affect measurement accuracy, so it is also considered in many radiometric camera systems. For a camera that does not use a temperature-stabilised microbolometer, it is desirable to place the whole camera in an environmental chamber and capture the error introduced into measurements by changes in ambient temperature. The results can be captured by the measurement system in the form of an ambient temperature offset table.
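An ambient temperature offset table is simple enough to sketch directly. The chamber temperatures and error figures below are invented for illustration; a real table would come from actual environmental chamber runs.

```python
import numpy as np

# Hypothetical chamber results: measurement error vs ambient temperature
ambient_points = np.array([-10.0, 0.0, 20.0, 40.0, 60.0])  # chamber temps, C
measured_error = np.array([ 1.8,  1.1, 0.0, -0.9, -2.2])   # error at each, C

def ambient_correction(reading_c, ambient_c):
    """Subtract the interpolated ambient-induced error from a reading."""
    error = np.interp(ambient_c, ambient_points, measured_error)
    return reading_c - error

# At 30 C ambient the table predicts an error midway between 0.0 and -0.9,
# so a 100.0 C raw reading is corrected upward by 0.45 C
print(round(ambient_correction(100.0, 30.0), 2))  # → 100.45
```

Linear interpolation between chamber points is the simplest scheme; the finer the chamber sweep, the smaller the residual error between table entries.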
What else affects the radiometric measurement accuracy?
Well, let us not forget emissivity, distance to target (atmospheric influences) and lens characteristics. These can be accounted for in many radiometric camera systems.
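Emissivity compensation is worth a quick worked example, since it illustrates how badly a guessed emissivity can bite. This is the standard grey-body correction in its simplest form, with the atmospheric and lens terms deliberately left out, and a total-exitance (T^4) stand-in rather than a band-limited radiance model:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def radiance(temp_k):
    """Total exitance stand-in for band radiance (simplification)."""
    return SIGMA * temp_k ** 4

def target_temp_k(measured, emissivity, reflected_temp_k):
    """Solve measured = e*L_obj + (1-e)*L_refl for the object temperature."""
    l_obj = (measured - (1.0 - emissivity) * radiance(reflected_temp_k)) / emissivity
    return (l_obj / SIGMA) ** 0.25

# A 350 K target with emissivity 0.6, reflecting a 295 K surround:
e, t_obj, t_refl = 0.6, 350.0, 295.0
measured = e * radiance(t_obj) + (1.0 - e) * radiance(t_refl)
print(round(target_temp_k(measured, e, t_refl), 1))    # → 350.0

# Guessing e = 0.95 because it 'looks right' gives an answer well below
# the true 350 K for the very same scene:
print(round(target_temp_k(measured, 0.95, t_refl), 1))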
The end result of all this is a temperature measurement accuracy of +/-2 C or 2%, whichever is greater, for many radiometric cameras. We must remember that this is a 'best case' tolerance! Errors can creep into a measurement and widen that tolerance. For this reason it is always wise to use another, more accurate, measurement technology when great accuracy is required. A simple example would be not knowing the actual emissivity of a surface and choosing a figure that 'looks right'. Sadly, visual inspection of a surface is not normally a reliable measure of actual emissivity. To create a known emissivity it can be necessary to apply a material of known emissivity specification to the target; such is not always possible, however. The use of an accurate contact temperature sensor can provide another measurement data point for the thermographer.
So, to the topic of this post: where does the Seek Thermal range of cores come into this discussion?
Well, Seek Thermal provide a radiometric measurement accuracy specification for the PRO core of +/-5 C or 5%, whichever is greater. This does, at first glance, look high for a modern thermal camera core. The question is: why is the Seek Thermal core unable to provide the usual +/-2 C or 2% accuracy specification commonly seen in other cameras' specifications?
Discuss ?
Fraser