Yeah, that is the standard response. But I'm not buying it. No other part of the audio chain requires more than 30–50V max (from the − rail to the + rail), even for +24dBu line level. What exactly is the point of generating near "line-level" signals (which are perhaps 1–2% of 130V) and then knocking them back down to "mic-level"? Something is just fundamentally goofy here.
The 130V supply is, I think, a carry-over from the older days, when a high voltage was needed as the anode supply for tube amplifiers.
In more contemporary semiconductor designs, this higher voltage is needed where the standard P48 supply becomes inadequate – theoretically, at least.
Let's say we have a capsule with a sensitivity of 10 mV/Pa. If this capsule is exposed to its stated absolute max SPL handling of 168 dB (THD < 1%), that is 168 dB − 94 dB = 74 dB above the 1 Pa reference, so the output voltage is 10 mV × 10^(74/20) ≈ 50.12 V peak.
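A minimal sketch of that arithmetic (the sensitivity, reference SPL and max SPL are simply the example figures above, not data for any particular capsule):

```python
# Sketch of the example above: capsule output voltage at its max SPL.
sensitivity_v_per_pa = 10e-3        # 10 mV/Pa
ref_spl_db = 94.0                   # 94 dB SPL corresponds to 1 Pa
max_spl_db = 168.0                  # stated max SPL handling

db_above_ref = max_spl_db - ref_spl_db                        # 74 dB
output_v = sensitivity_v_per_pa * 10 ** (db_above_ref / 20)
print(f"{db_above_ref:.0f} dB above 1 Pa -> {output_v:.2f} V")  # ~50.12 V
```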
A microphone connected to an IEC 61938 compliant power supply and drawing 2.5 mA will see the phantom voltage sag to 31 V because of the drop across the feed resistors. In other words, the voltage actually available will compromise the signal handling capability of the mic's internal amplifier, resulting in audio distortion.
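A rough sketch of where a figure like 31 V comes from, assuming the standard 6.8 kΩ feed resistors and that the quoted 2.5 mA flows through each resistor (with a smaller total draw the sag is correspondingly smaller):

```python
# Rough sketch of the P48 voltage sag at the microphone. Assumes the standard
# 6.8 kohm feed resistors and 2.5 mA through each resistor; a smaller total
# draw leaves more voltage available at the mic.
supply_v = 48.0
feed_resistor_ohm = 6.8e3
current_per_resistor_a = 2.5e-3

drop_v = feed_resistor_ohm * current_per_resistor_a    # 17 V across each resistor
mic_v = supply_v - drop_v                              # about 31 V left at the mic
print(f"Voltage available at the microphone: {mic_v:.0f} V")
```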
Common practice is to attenuate (PAD) the mic output signal by 10–20 dB. Of course, it doesn't help much if the amplifier is already distorting.
On some mics, a PAD is placed in the front end circuit; unfortunately, that is the worst place to attenuate the signal with respect to SNR – see the sketch below.
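A toy comparison of the two pad positions, with made-up signal and amplifier-noise figures just to show the mechanism:

```python
# Toy numbers only: a hypothetical 50 mV capsule signal and a fixed 5 uV of
# amplifier input noise. A pad in front of the amplifier shrinks the signal
# but not the amplifier noise, so SNR drops by the pad value; a pad after the
# amplifier scales signal and noise together and costs nothing in SNR.
import math

def db(ratio: float) -> float:
    return 20 * math.log10(ratio)

signal_v = 50e-3          # hypothetical capsule output
amp_noise_v = 5e-6        # hypothetical amplifier input noise
pad_db = 20
pad_gain = 10 ** (-pad_db / 20)

snr_pad_before_amp = db(signal_v * pad_gain / amp_noise_v)
snr_pad_after_amp = db(signal_v / amp_noise_v)

print(f"Pad before the amplifier: {snr_pad_before_amp:.0f} dB SNR")
print(f"Pad after the amplifier:  {snr_pad_after_amp:.0f} dB SNR")  # 20 dB better
```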
For most users, the "high voltage" principle is merely a marketing ploy to justify esoteric audio gear – you know, average-Joe equipment can't possibly satisfy the discerning audiophile…
/Peter