I have looked at the PCB in the Autoprobe (scope) end of my dud 1156A and know which resistor to change. The one that's in there is 46.4 kΩ, which, according to the US patent on Autoprobe, is the value that tells the scope to talk to the probe digitally via I2C. Although I haven't seen the PCB inside the 1152A, I'd bet a pint or 3 that it's the same PCB, because the one in my dud 1156A has components missing (e.g. U4 on the bottom side); as an electronics designer, I'd take the same approach - have a single PCB that can be used in multiple products. Anyway, Table II (in column 6 of the patent) gives values for the ID resistor that let you choose input ratio, impedance, and whether or not offset is used.
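Just to make the scheme concrete, here's a minimal sketch of how that ID-resistor table could be modelled in software. Only the 46.4 kΩ "go digital / I2C" entry is from my reading of the patent; the other resistor values, the field names, and the identify() helper are placeholders I've made up for illustration - check Table II in the patent before trusting any of them.

```python
# Hypothetical model of the Autoprobe ID-resistor scheme (patent Table II).
# Only the 46.4k -> "talk I2C" entry reflects the patent as I read it;
# every other value below is a made-up placeholder, not a real table entry.

from dataclasses import dataclass

@dataclass
class ProbeID:
    ratio: str         # e.g. "10:1" - input attenuation ratio
    impedance: str     # input impedance the scope selects, e.g. "1M" or "50R"
    offset: bool       # whether the scope enables the offset system
    i2c: bool = False  # True -> scope switches to digital (I2C) identification

# Keys are nominal ID resistor values in ohms. The scope can bin a measured
# resistance to the nearest nominal value within a tolerance window.
ID_TABLE = {
    46_400: ProbeID(ratio="n/a", impedance="n/a", offset=False, i2c=True),
    # Placeholder analogue-ID entries - NOT the real Table II values:
    10_000: ProbeID(ratio="10:1", impedance="1M", offset=False),
    21_500: ProbeID(ratio="10:1", impedance="50R", offset=True),
}

def identify(measured_ohms: float, tolerance: float = 0.05) -> ProbeID | None:
    """Return the entry whose nominal value is within `tolerance` (fractional)
    of the measured ID resistance, or None if nothing matches."""
    for nominal, probe in ID_TABLE.items():
        if abs(measured_ohms - nominal) <= nominal * tolerance:
            return probe
    return None

print(identify(46_200))  # within 5% of 46.4k -> the I2C entry
```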
The bit I don't fully understand is the offset system. I can see that the scope has the ability to adjust for a voltage offset (I think), but is that done during calibration and, if so, where is the calibration stored? When I power down my 7104B with the (good) 1156A plugged in and calibrated, it comes back up as calibrated when I power up; remove the probe for 2 seconds and plug it back in, though, and I think it needs recalibrating.
To make the scope work with the 1152A, option 1 would be to swap the ID resistor on the PCB; option 2 would be to decipher the contents of the serial EEPROM that's on the board and then rewrite it to fool the scope into thinking that the 1152A is a different, supported probe. Of course, Daniel's right that there are probably reasons why the 1152A was not supported - call me cynical, but maybe one was that Agilent wanted to sell more probes?
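If anyone fancies option 2, the obvious first step is dumping the EEPROM so two boards can be compared. Here's a rough sketch, assuming the part is a common 24C02-style I2C EEPROM at address 0x50, read on a Linux box with an I2C adapter via the smbus2 library - the bus number, address, and size are all guesses, so check the markings on the actual part first.

```python
# Rough sketch: dump a 24C02-style I2C EEPROM so two probes' contents
# can be diffed. Bus number, device address, and size are assumptions -
# read the part number off the actual board before trying this.

from smbus2 import SMBus

I2C_BUS = 1         # e.g. /dev/i2c-1 on a Raspberry Pi (assumption)
EEPROM_ADDR = 0x50  # typical 24Cxx base address (assumption)
SIZE = 256          # 24C02 = 256 bytes, single-byte addressing (assumption)

with SMBus(I2C_BUS) as bus:
    data = bytes(bus.read_byte_data(EEPROM_ADDR, offset) for offset in range(SIZE))

# Simple hex dump, 16 bytes per row, for eyeballing or diffing.
for row in range(0, SIZE, 16):
    chunk = data[row:row + 16]
    print(f"{row:04x}: " + " ".join(f"{b:02x}" for b in chunk))
```

Dumping both the good 1156A and the 1152A and diffing the two would be an obvious place to start looking for where the ID data lives.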
If you want to go and look, I did a teardown thread on the dud 1156A; it turns out that there were broken (blown?) wire bonds on the dead-bug IC in the business end of the probe. My pictures also show the PCB in question.