Hi there,
In one of my fits of insanity, I decided it'd be fun to try to build a laser power meter. What I have is a thin chunk of brass, painted black on one side, with a 100R resistor and some form of temperature sensor with sub-degree-C resolution. A second sensor is held at room temperature and used to eliminate ambient temperature from the equation (i.e. zero the measurement). An instrumentation amplifier, a precision current source (the good old REF200) and a "balance" pot (used to compensate for the Vf/Vbe mismatch between the two sensors by adjusting the forward current) complete the circuit.
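Roughly speaking, the head sensor's forward drop falls as the brass warms, and the in-amp amplifies the difference against the room-temperature reference. As a back-of-envelope sanity check (assuming a nominal -2 mV/degC tempco and a gain of 100, both placeholder figures rather than measured values):

```python
# Back-of-envelope check of the differential measurement (placeholder numbers only):
# a silicon junction's forward drop falls by roughly 2 mV per degC, and the in-amp
# amplifies the head-sensor-minus-reference difference.

TEMPCO_V_PER_C = -2e-3   # assumed Vf/Vbe tempco, V per degC (not measured)
IN_AMP_GAIN = 100        # assumed instrumentation amplifier gain

def temp_rise_from_output(v_out):
    """Convert in-amp output voltage to the head's temperature rise above ambient."""
    v_diff = v_out / IN_AMP_GAIN        # Vf difference between the two sensors
    return v_diff / TEMPCO_V_PER_C      # degC; sign depends on how the inputs are wired

# e.g. -0.5 V at the output -> -5 mV of difference -> ~2.5 degC rise at the head
print(temp_rise_from_output(-0.5))
```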
The theory is, the black paint absorbs the energy from the laser beam and its temperature rises as a result. Measuring the rise in temperature lets you work backwards to the optical power. A known power (say, 10mW DC) can be dissipated in the 100R resistor to "calibrate" the sensor head.
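The calibration arithmetic is just P = V^2/R, so 10mW into 100R means about 1V across the resistor (10mA through it), and any later laser-induced temperature rise gets scaled by the same watts-per-degree constant. Something like this sketch, which assumes the head responds linearly:

```python
import math

# Calibration sketch (placeholder numbers): dissipate a known DC power in the 100R
# heater, note the temperature rise, then scale a laser-induced rise by the same ratio.

R_HEATER = 100.0          # ohms
P_CAL = 10e-3             # calibration power, W

v_cal = math.sqrt(P_CAL * R_HEATER)   # ~1.0 V across the resistor
i_cal = v_cal / R_HEATER              # ~10 mA through it
print(f"Drive the heater at {v_cal:.2f} V ({i_cal * 1e3:.1f} mA) for {P_CAL * 1e3:.0f} mW")

def optical_power(delta_t_laser, delta_t_cal):
    """Scale the laser-induced temperature rise by the calibration constant (W per degC)."""
    return delta_t_laser * (P_CAL / delta_t_cal)

# e.g. if 10 mW of heater power gave a 2.0 degC rise and the laser gives 1.3 degC:
print(f"{optical_power(1.3, 2.0) * 1e3:.1f} mW")   # ~6.5 mW
```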
I've noticed that there are two ways to implement a "silicon bandgap" temperature sensor -- a silicon diode (e.g. 1N4148) or a diode-connected bipolar transistor (e.g. 2N3904, BC547). The BJT seems to be more prevalent, but what I'm struggling to find is any rationale behind this; a lot of silicon temperature sensor chipmakers say "always use a BJT" without giving any reason for the statement.
Does anyone know why you'd use a diode-connected BJT? Is it more accurate or repeatable than a plain silicon diode?
Cheers,
Phil.