Hello all,
I am currently struggling with the input-voltage-dependent behavior of a TPS62148 buck converter:
- Attachment 1 shows a block diagram of the entire system
- Attachment 2 shows the detail of the Power supply block
- Attachment 3 shows the profile of the current drawn from the supply, measured over a 1.2 Ω shunt with the supply voltage set to 9 V (the alkaline battery pack was replaced with a bench power supply for this measurement). So the unit draws a quasi-DC 115 mA in normal operation, with minor ups and downs.
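As a quick sanity check of those numbers (the 1.2 Ω and 9 V come from my setup above, the rest is just Ohm's law):

```python
# Quick sanity check of the shunt measurement above (Ohm's law only).
R_SHUNT = 1.2      # ohms, measurement shunt
V_SUPPLY = 9.0     # volts, bench supply standing in for the battery pack
I_DRAW = 0.115     # amps, the quasi-DC current observed

v_shunt = I_DRAW * R_SHUNT               # drop across the shunt
p_in = (V_SUPPLY - v_shunt) * I_DRAW     # power delivered to the unit

print(f"shunt drop:  {v_shunt * 1000:.0f} mV")   # ~138 mV
print(f"input power: {p_in:.2f} W")              # ~1.02 W
```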
The problem is that when the battery voltage drops below about 7.5 V, the product's operation is affected even at room temperature (the requirement is to operate down to a 6 V supply, over -20...50 °C). We have traced the mechanism to a change in the operating point of the switched-mode power supply, which results in more radiated emissions, which get picked up by the L1 sensor coil. The L1 coil is physically closest to the PSU and is the only one affected; L2 is some tens of cm further away and has no problem. The additional noise picked up by the L1 coil causes the various measurement algorithms running on the microcontroller to return bad results.
The behavior cannot be reproduced when using a regular bench power supply instead of the alkaline batteries. In that scenario, everything works fine down to 6 V. This leads us to believe it is the non-ideal output characteristic of the battery (including its output impedance) that is giving us problems.
A side question of this thread is why I am not able to reproduce the behavior even when using an external power supply with some artificial series resistance (up to about 10 Ω) to simulate the non-ideal ESR of the battery pack... in such cases everything still works fine.
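My current working theory on this side question (speculation on my part, and the numbers below are illustrative, not measured): a series resistor is flat over frequency, while the battery pack plus its long leads also has series inductance, so the source impedance the converter sees climbs steeply at the switching frequency. A rough sketch:

```python
import math

def z_source(f_hz, r_ohm, l_h):
    """|Z| of a source modeled as a resistance in series with an inductance."""
    return math.sqrt(r_ohm ** 2 + (2 * math.pi * f_hz * l_h) ** 2)

# Ballpark switching frequency for the TPS6214x family (check the datasheet):
F_SW = 2.5e6  # Hz

# Illustrative (made-up) source models:
z_bench = z_source(F_SW, 10.0, 0.0)    # bench supply + 10 ohm series resistor
z_batt  = z_source(F_SW, 0.5, 2e-6)    # battery pack + ~2 uH of lead inductance

print(f"bench + 10R at f_sw:   {z_bench:5.1f} ohm")  # flat 10.0 ohm
print(f"battery model at f_sw: {z_batt:5.1f} ohm")   # ~31 ohm, inductance-dominated
```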
The main question is: how do I solve the problem?
So far we have determined that a 1500 µF capacitor added directly across the battery pack terminals significantly improves the situation. As I understand it, the effect of such a capacitor is to significantly lower the battery's output impedance; this theory seems to be corroborated by what I see in practice, and the hope is that once we are able to place the capacitor in a more meaningful position (on the PCB, close to the input of the switched-mode power supply) things will get even better (at the moment we have some mechanical difficulties in getting the capacitor onto the PCB).
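To put a number on the "lower the output impedance" idea: above a few kHz, a big electrolytic's impedance is set almost entirely by its ESR rather than its capacitance, which is also why I expect the ESR of the parts to matter at least as much as the exact µF value. A rough calculation (the ESR figure is an illustrative guess, not a datasheet value):

```python
import math

def z_cap(f_hz, c_f, esr_ohm):
    """|Z| of a capacitor modeled as ideal C in series with its ESR."""
    x_c = 1 / (2 * math.pi * f_hz * c_f)
    return math.sqrt(esr_ohm ** 2 + x_c ** 2)

C = 1500e-6   # farads, the capacitor that helped across the battery terminals
ESR = 0.05    # ohms, illustrative guess for a large aluminum electrolytic

for f in (100, 1e3, 10e3, 100e3):
    print(f"{f / 1e3:6.1f} kHz: |Z| = {z_cap(f, C, ESR) * 1000:7.1f} mohm")
# ~1062 mohm at 0.1 kHz, but only ~50 mohm from 10 kHz up:
# the ESR, not the capacitance, sets the impedance at switching frequencies.
```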
I currently have a package of electrolytics on its way from Digikey, to try different values and ESRs. The reason I suspect such a large capacitance is necessary is as follows: the product we have designed is essentially a more modern version of another one we have in the field (which cannot be produced anymore due to the obsolescence of many components; it is really old). The old version of the product uses a completely different power supply, based on an LT1616, and it has a peculiar input filter just before the power supply: a PI filter with 1000 µF capacitors and a 1 mH inductor. We never really understood why this filter is there (we do not have to pass conducted emissions tests) and the engineers who designed the product back then are long gone. So we left it out of the new version of the product; of course, that bites us now, since as a side effect such a filter, with its big capacitors, would obviously have helped with the battery output impedance. But why a PI filter was used, instead of just plain capacitance, still eludes me.
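For what it's worth, the numbers show just how effective that old PI filter would have been, and hint (this is my speculation only) at why an inductor was included: with L = 1 mH and C = 1000 µF, the ideal undamped LC corner sits around 160 Hz, and a second-order filter rolls off at 40 dB/decade above that, whereas plain capacitance gives only a first-order rolloff and leaves the battery leads in the switching-current loop. Quick numbers (the switching frequency is a ballpark for the LT1616):

```python
import math

L = 1e-3      # henries, the old product's PI-filter inductor
C = 1000e-6   # farads, each of the PI-filter capacitors

f_c = 1 / (2 * math.pi * math.sqrt(L * C))   # undamped LC corner frequency
print(f"LC corner: {f_c:.0f} Hz")            # ~159 Hz

F_SW = 1.4e6  # Hz, ballpark switching frequency of the old LT1616 supply
atten_db = 40 * math.log10(F_SW / f_c)       # ideal 2nd-order rolloff
print(f"Theoretical attenuation at f_sw: ~{atten_db:.0f} dB")   # ~158 dB
```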
Prequel to this thread is:
https://www.eevblog.com/forum/projects/aluminum-electrolityc-capacitors-operated-close-to-rated-voltage/
Regards,
Cristian