I appreciate your efforts to really dive into the theory and try to understand it. At this point, I have not had the time to fully understand the entire control mechanism on a very deep level. It is a testament to the relative ease of use of these new controllers. So, unfortunately, I'm probably not of much help answering some of the deeper questions. But here's my attempt:
Regarding RC snubbers: you will need a snubber in most applications. You can use two, one across each MOSFET, or you can just put one across the transformer winding (from the drain of SR MOSFET #1 to the drain of SR MOSFET #2). One snubber should be enough in most applications.
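If it helps, the usual quick way to size that snubber is the two-measurement method: note the ringing frequency, add a known capacitor across the device until the frequency halves, then back out the parasitics. A sketch of the arithmetic (the 25MHz ring and 330pF test cap are made-up example numbers, not from any specific board):

```python
import math

def snubber_estimate(f_ring_hz, f_ring_with_cap_hz, c_added_f):
    """Classic two-measurement RC snubber estimate.

    f_ring_hz:          ringing frequency with no extra capacitance
    f_ring_with_cap_hz: ringing frequency after adding c_added_f across the device
    c_added_f:          the test capacitance added (farads)
    """
    # f = 1/(2*pi*sqrt(L*C)), so for fixed L, C scales as 1/f^2
    ratio = (f_ring_hz / f_ring_with_cap_hz) ** 2
    c_par = c_added_f / (ratio - 1.0)                        # parasitic capacitance
    l_par = 1.0 / ((2 * math.pi * f_ring_hz) ** 2 * c_par)   # parasitic inductance
    r_snub = math.sqrt(l_par / c_par)                        # roughly damps the ring
    c_snub = 3.0 * c_par                                     # rule of thumb: 3-4x C_par
    return r_snub, c_snub

# Example: 25 MHz ring that drops to 12.5 MHz when 330 pF is tacked across the FET
r, c = snubber_estimate(25e6, 12.5e6, 330e-12)
```

Halving the frequency means LC quadrupled, so the added cap is 3x the parasitic one; from there L and a near-critical-damping R fall out. Check the snubber's dissipation (roughly C*V^2*f per part) before committing to it.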
Regarding the new-style SR controllers: yes indeed, they have a very fast control loop that regulates the drain-source voltage to a set level. Controllers from a few years ago regulated to maybe the 50-75mV level; newer controllers are at the 20-50mV level. As I understand it, the point of regulating VDS via the gate voltage is to keep the MOSFET channel conducting near the end of the conduction interval, albeit at a higher effective RDS(on) than the MOSFET's spec-sheet value. Still, it's better than conducting purely through the body diode.
In traditional LLC SR controllers, like the NCP4303, the gate drive is either 100% on or off. The turn-off threshold is a fixed value, and the controller is likely to shut off earlier (compared to newer controllers) to avoid negative current. The percentage of the switching cycle spent conducting through the body diode may be too high, which leads to the I*V(body-diode) losses noted above. For the new-style controllers, the VDS regulation voltage is on the order of 50mV, which might allow you to reduce the losses (near MOSFET switch-off) by up to 10x if you assume a body-diode drop of around 500mV.
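To make that 10x figure concrete, the back-of-envelope arithmetic looks like this (the 10A tail current is an invented example, not a measured value):

```python
# Rough end-of-cycle conduction loss: body diode vs regulated channel.
i_tail = 10.0        # A still flowing near the end of the half-cycle (assumed)
v_body_diode = 0.5   # ~500 mV body-diode forward drop
v_regulated = 0.05   # ~50 mV regulated VDS on the newer controllers

p_diode = i_tail * v_body_diode    # 5.0 W while the body diode conducts
p_channel = i_tail * v_regulated   # 0.5 W with the channel held on
improvement = p_diode / p_channel  # -> 10x
```

The ratio is just V(body-diode)/V(regulated), so a 20mV-class controller would look even better on paper; in practice the benefit only applies to the fraction of the cycle that would otherwise be body-diode conduction.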
On the TEA1995T, MP6922A, UCC24624 (and others), when dI(drain)/dt is negative (the second half of the half-sine), instead of shutting off when VDS collapses to a set value, the gate drive voltage is reduced to push up the on-state resistance of the MOSFET. With the controller modulating the RDS(on), the drain voltage can be regulated to a "reasonable" level that is detectable by the internal analog circuitry (i.e. ~ -50mV). That level is probably limited by the sensitivity and noise performance of the IC's sensing circuitry (I presume). If it were set too low, the signal would probably get lost in the noise and you'd get unreliable switching.
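The steady-state requirement behind that gate modulation can be sketched as follows. This is a toy illustration of the idea, not any vendor's actual algorithm, and the current values are invented: to hold VDS near a fixed setpoint as the drain current tails off, the channel resistance has to rise in proportion, which the controller achieves by backing the gate voltage down toward threshold.

```python
# To keep |VDS| = I * RDS(on) pinned at ~50 mV while I falls,
# RDS(on) must rise as V_SET / I.
V_SET = 0.050  # 50 mV regulation magnitude (typical of the newer parts)

tail_currents = [20.0, 10.0, 5.0, 1.0]           # falling tail of the half-sine, A
rds_needed = [V_SET / i for i in tail_currents]  # 2.5m, 5m, 10m, 50m ohms

for i, r in zip(tail_currents, rds_needed):
    print(f"{i:5.1f} A -> channel must present {r * 1e3:5.1f} mOhm")
```

Note how quickly the required resistance climbs above the datasheet RDS(on): by the end of the tail, the FET is being operated well into its linear region, which is exactly the "higher effective RDS(on)" trade-off mentioned above.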
I do not know why the TEA1995T proclaims "no minimum on-time" control... I presume it must have some? Maybe they're saying there's no need for you, the engineer, to worry about it; they've got it taken care of.
Regarding SMD MOSFETs: on our low-cost 180W power supplies, we are using 2 x TO-220 on a single-sided PCB with the TEA1995T. Very average layout, nothing to write home about. The SR waveforms are all fine, we have not seen any real issues with gate timing, and it handles all fault events no problem. We are also using 2 x TO-220 for a 450W power supply... only the 12V version, with currents around 40A, gave us trouble with TO-220; the rest are good. We originally used the UCC24624 for the 12V version, but that controller has the disadvantage of a single source-sense connection shared between the two MOSFETs, and the body-diode conduction period was too long. We tried the TEA1995T with its separate source connections and, lo and behold, got much better performance. Granted, I must admit that the layout is not ideal, and the UCC24624 would probably have been okay with a better layout.
Power-dissipation-wise, a PowerSO-8 (LFPAK56) style package can handle at least 200mW without any real PCB heatsinking, for an acceptable temperature rise, in my testing. Add some PCB heatsinking and you can easily push 500mW+. So if you run the numbers, you'll probably see why it can be okay to run single MOSFETs at output powers of 200-250W, maybe higher. I think the eval boards come with SMD packages because it maximizes SR controller performance, and they probably want you to see it working the best it can.
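Running those numbers might look something like this. All the values here are assumptions for illustration (a 24V/240W output, 4mOhm effective hot RDS(on)), and the pi/4 RMS factor assumes operation near resonance with half-sine secondary current pulses, counting conduction loss only:

```python
import math

v_out = 24.0    # V output (assumed example)
p_out = 240.0   # W output (assumed example)
rds_on = 0.004  # ohms, effective on-resistance incl. temperature derating (assumed)

i_out = p_out / v_out                  # 10 A DC output current
# Each SR MOSFET conducts alternate half-sine pulses, so per-device
# RMS current is about (pi/4) * I_out at resonance.
i_rms_per_fet = (math.pi / 4) * i_out  # ~7.85 A RMS
p_cond = i_rms_per_fet ** 2 * rds_on   # ~0.25 W conduction loss per device
```

That ~250mW per device sits right at the bare-package budget above and comfortably inside it with some copper, which is consistent with single SMD MOSFETs being fine in the 200-250W range. Off resonance the current shape changes and the RMS factor shifts, so treat this as a first-pass estimate.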