Noise is strictly worse, comparing devices of comparable ratings and bandwidth, essentially because the cathode runs several times hotter than ambient (in absolute temperature). That same temperature incidentally sets the exponential cutoff slope: tubes, like FETs, have an exponential subthreshold cutoff region; above threshold, tube current goes as Iout ~ Vin^(3/2), while FETs go as ~Vin^2.
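To put rough numbers on the temperature argument: Johnson noise voltage density scales as sqrt(T), and the subthreshold slope is set by the thermal voltage kT/q. A minimal sketch, assuming a hypothetical 1 kohm equivalent noise resistance purely for comparison, and ~1050 K as a representative oxide-cathode temperature:

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
q = 1.602176634e-19   # electron charge, C

def johnson_noise_density(T, R):
    """Thermal (Johnson) noise voltage density sqrt(4kTR), in V/sqrt(Hz)."""
    return math.sqrt(4 * k * T * R)

R = 1e3  # hypothetical equivalent noise resistance, chosen just for illustration
v_ambient = johnson_noise_density(300.0, R)   # room temperature
v_cathode = johnson_noise_density(1050.0, R)  # assumed oxide-cathode temperature

# Noise density goes as sqrt(T): a ~3.5x hotter cathode gives ~1.9x the noise
ratio = v_cathode / v_ambient  # sqrt(1050/300) ~ 1.87

# The same T sets the exponential cutoff slope: in subthreshold, current
# changes by a factor of e per kT/q of control-electrode voltage.
vt_300 = k * 300.0 / q    # ~26 mV at room temperature (FET at ambient)
vt_1050 = k * 1050.0 / q  # ~90 mV at cathode temperature (shallower cutoff)
```

So by this back-of-envelope scaling, the hot cathode costs roughly a factor of two in noise density for a given equivalent resistance, and stretches the exponential cutoff region by the same temperature ratio.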
A given device might have lower noise overall, or in a given frequency range (RF tubes can be quite good; TWTs are still a good choice for certain microwave applications, mostly aerospace; those might finally be getting displaced by GaN these days, I don't know), but pound for pound, so to speak, transistors win; and cryogenically chilled transistors, even more so.
For anything audio, rest assured the exclusive and sufficient answer is: marketing. A case could kinda be made on competitive grounds for things like guitar amps, where That Tube Sound(TM) is easier to produce, in terms of design effort, than designing a DSP filter of equivalent performance. But the DSP itself, plus ADC, DAC, and the chip amp following it, are far cheaper (and more efficient, and more mechanically robust, and more reliable...) in any kind of quantity (100s? 1000s?) where that design effort is amortized away.
Tim