we can perceive extremely short pulses of light
Have you noticed how much more sensitive peripheral vision is to faint blinking lights or movement?
Yes, this is a well-known phenomenon. (I can see the flicker of most fluorescent lights here (50 Hz mains, so 100 Hz flicker) in my own peripheral vision, and it annoys the heck out of me.)
The entire vision setup in humans is weird. First, we have four types of receptor cells in our retinas: one dedicated to low-light conditions ("scotopic" vision, most sensitive in the blue-green region, around 507 nm), and three for color perception. Second, the retina is not uniform: there is a rather small spot of densely packed receptors (the fovea), with far fewer receptors outside that spot. The brain automagically keeps track of objects and movements outside the sharp-vision cone, so that we think we perceive everything in sharp focus. Third, the eyeballs are not static, nor do they always move smoothly: they "twitch" (saccades, or saccadic movement) to compensate for the slowness of visual processing in the human brain. (Each human eye is attached to only six muscles, plus the nerve bundle to the brain.) Fourth, while the color receptors are for red, green, and blue, our brains actually process them somewhat as "red-green" (blue-green to magenta) and "blue-yellow" (yellow-green to violet) opponent channels. The color receptors also react faster than the low-light receptors.
So, there are at least three completely different facets to human vision: physical, due to properties of the eyeball and cells involved; visual, due to the processing chain in the brain; and perceptive, due to the 'interpretation' of the former two in the brain.
It is notable that many devices have switched from blinking LEDs to "breathing" LEDs, with roughly sinusoidal intensity curves, exactly because the abrupt on-off-on transitions are so taxing/annoying to us humans.
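As a rough sketch of what such a "breathing" curve looks like (a raised cosine; the function and parameter names here are mine, not from any particular device's firmware):

```python
import math

def breathing_level(t, period=3.0):
    """Raised-cosine 'breathing' brightness, smoothly varying 0..1.

    t: time in seconds; period: length of one full breath cycle in seconds.
    There are no abrupt on/off edges anywhere in the curve.
    """
    return 0.5 * (1.0 - math.cos(2.0 * math.pi * t / period))

# One cycle sampled at 0.3 s steps: ramps up to full brightness at
# mid-cycle, then back down, with zero slope at both extremes.
samples = [round(breathing_level(i * 0.3), 3) for i in range(11)]
```

In practice one would also gamma-correct the output, since perceived brightness is far from linear in LED current or duty cycle, but the key point is the absence of sharp transitions.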
This does tie into this thread in that it explains why some humans are more susceptible to being annoyed by certain visual effects, but more importantly, why one might choose a different backlighting method depending on the
purpose of the device.
In my case, this display is intended to convey state information (about current wired and wireless networks) to non-technical users in a relaxed, usually not very brightly lit common room or media room or such; definitely not an office.
If I were to use PWM, I'd need it to be above 1 kHz or so just to be sure I avoid perceptible flicker. However, I also hate audible coil/capacitor whine, so I'd prefer to push the frequency above human hearing, to say 30 kHz or higher. At those frequencies we already run into effects like even a very short pulse on the transistor base yielding a relatively long pulse of conduction, which limits the minimum on-time (as I discovered earlier in this thread). Using the transistors in their linear region does mean the difference between the supply voltage and the forward voltage of the LEDs is wasted as heat (in the transistors and the resistors), but it avoids the aforementioned issues completely. It also makes the LED current basically a linear function of a control voltage, which in turn allows me to use a DAC for fade-in and fade-out, and a physical pot to control the steady-state brightness. That seems to me to be the optimum for this particular use case.
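To put a number on that "wasted as heat" trade-off, here is a back-of-the-envelope calculation with assumed example values (5 V supply, 2.0 V LED forward drop, 20 mA per LED); none of these numbers come from the actual schematic:

```python
V_SUPPLY = 5.0    # V, assumed supply rail
V_F      = 2.0    # V, assumed LED forward voltage
I_LED    = 0.020  # A, assumed per-LED current

# In linear drive, everything above the LED's forward drop is dissipated
# in the transistor and ballast resistor instead of being switched.
p_wasted_per_led = (V_SUPPLY - V_F) * I_LED  # W lost as heat
p_in_led         = V_F * I_LED               # W delivered to the LED
efficiency       = p_in_led / (V_SUPPLY * I_LED)
```

With these assumed values that is 60 mW of heat per LED at 40% electrical efficiency, which is perfectly tolerable for a handful of indicator LEDs, even if it would be unacceptable for high-power lighting.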
The one quirk in this design is that I specifically wanted to be able to easily adjust the current to each LED (if possible), in relation to each other, to account for any imbalance in light output at the same current between the LEDs. Here, the 68 ohm resistors will actually be pairs of 0805/0603 resistor footprints, so that if I do need a bit of adjustment, I can do so by putting two resistors in parallel. For example, a 100 ohm in parallel with a 200 ohm one yields 1/(1/100+1/200)=200/3≃67 ohm. I do not expect to need to do this, but since I'm making only one or two of these, I want the ability to do so if needed, rather than having to order another display (which typically takes weeks to arrive).
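The parallel-footprint trick above is just the standard parallel-resistance formula; a quick helper to check candidate pairs (function name is mine):

```python
def parallel(*resistances):
    """Equivalent resistance of resistors in parallel: 1 / sum(1/R_i)."""
    return 1.0 / sum(1.0 / r for r in resistances)

# The example from the text: 100 ohm in parallel with 200 ohm,
# giving 200/3, i.e. about 67 ohm -- close to the nominal 68 ohm.
r_adjusted = parallel(100, 200)
```

Any standard-value pair near twice and three times the target gives similar fine-grained steps, which is why a pair of footprints per position covers a useful adjustment range.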