Ordinarily I might agree with you, but the thread in question is so vile and polluted that I don't think it is even possible to have a sane conversation within it. I tried to point out a few things that I thought might lead to some agreement, but I regret even wasting the time.
I avoided that thread for the same reason. However, starting a new thread with an insult is not a valid way to start a conversation, is it?
(I understand that to some, it is not an insult. But to me, the entire pattern [of starting a new thread with an insult, claiming that no-discussion is the "professional approach", with their own opinion as "the obviously correct answer, case closed"] is a very sore button.)
I really, really hate such detestable attempts at social manipulation.
To me, the entire discussion is a bit funky, because as nctnico said, "IMHO the whole point is to understand when a simplification works and when not", and this observation should also be extended to the model used to describe a situation.
One thing I absolutely love about physics simulations is that no matter what you do, the first step when you get some results is to analyze whether they make any sense. Part of that is guesstimating the various factors (say, to within a few orders of magnitude), checking whether the model includes everything that should be included, and so on. Only a small part of that is estimating the approximations used; it is the appropriateness of the model itself that is the key.
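To make that concrete, here is a minimal sketch of such an order-of-magnitude sanity check in Python. The kinetic-energy scenario, the simulated value, and the two-decade tolerance are all hypothetical choices, purely for illustration:

    import math

    def sane_order_of_magnitude(simulated, estimated, max_decades=2.0):
        """True if 'simulated' is within 'max_decades' orders of magnitude
        of a back-of-envelope 'estimated' value."""
        if simulated <= 0 or estimated <= 0:
            return False  # a sign or zero mismatch already fails the smell test
        return abs(math.log10(simulated / estimated)) <= max_decades

    # Hypothetical scenario: total kinetic energy of a simulated atom system.
    # Back-of-envelope: N atoms near temperature T carry about (3/2) N k_B T.
    k_B = 1.380649e-23    # Boltzmann constant, J/K
    N, T = 1_000_000, 300.0
    estimate = 1.5 * N * k_B * T    # about 6.2e-15 J
    simulated = 6.3e-15             # value read from hypothetical simulation output
    print(sane_order_of_magnitude(simulated, estimate))  # True: same ballpark

If the check fails, the first suspect is not the arithmetic but the model: something the simulation should have included, and didn't.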
I do find it interesting that the skill of shifting one's mind across model complexity levels is relatively rare. In programming, I've seen flamewars between top-down (starting with an overall plan and finishing with actual code) and bottom-up (starting with actual code, then connecting the pieces together to build a larger, more complex whole) approaches, which completely baffle me: I proceed with the hardest problems first, until I can build a reliable model of the end result, happily skipping between complexity levels as needed.
I can see exactly why the same would happen between those who are used to treating everything as a field and those who work with circuits.
Perhaps it is too hard for most humans to encompass both at the same time (it is for me; that's why I have had to learn to skip and switch as needed), but assuming one's favourite model suffices everywhere is... well, insufficient, wrong, silly. In physics simulations, one would trip on that immediately, and fail.
Molecular dynamics is an excellent example. If we take only the outermost interacting electrons in atoms, model the rest of each atom (both the nucleus and the remaining electrons) as a single point charge, and only consider the rest states of each atom, we can model all chemical bonds to a very high degree of accuracy. (The charges themselves are modeled as quantum mechanical waves; see e.g. the Hartree-Fock method. This is why these simulations are called "quantum mechanical" or "ab initio": they start from the simplest possible interaction model.)
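For a concrete taste of the ab initio side, here is a minimal sketch using the PySCF library to run a Hartree-Fock calculation on a hydrogen molecule. The geometry and basis set are illustrative choices of mine, not anything from the discussion above:

    # Minimal Hartree-Fock sketch using PySCF (pip install pyscf).
    from pyscf import gto, scf

    # H2 with a ~0.74 angstrom bond length, minimal STO-3G basis.
    mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")

    mf = scf.RHF(mol)      # restricted Hartree-Fock
    energy = mf.kernel()   # iterate the self-consistent field to convergence
    print(f"Total RHF energy: {energy:.6f} Hartree")  # roughly -1.117 Hartree

Two electrons converge in a blink; the cost grows so steeply with electron count that this approach hits a wall long before anything resembling a macroscopic sample.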
However, the math is so onerous that even the largest supercomputers have issues with more than some thousands of electrons. Simplify the interaction model, for example via the Embedded Atom Model (EAM) for metals (although you need multi-band EAM for some alloys like ferrochrome), and you can model millions to billions of atoms and get essentially the same results.
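The structure of the classical side is easy to sketch too. The EAM total energy is E = sum_i F(rho_i) + 1/2 sum_{i!=j} phi(r_ij), with rho_i = sum_{j!=i} f(r_ij). The functional forms below are toy placeholders (real EAM potentials are tabulated fits to experiment or ab initio data); the sketch only shows the shape of the computation:

    # Sketch of the Embedded Atom Model (EAM) energy, with toy potentials.
    import numpy as np

    def eam_energy(positions, f, F, phi):
        """Total EAM energy: E = sum_i F(rho_i) + 1/2 sum_{i!=j} phi(r_ij),
        where rho_i = sum_{j!=i} f(r_ij) is the local host electron density."""
        n = len(positions)
        # All pairwise distances (O(n^2) here; real codes use neighbour lists).
        diff = positions[:, None, :] - positions[None, :, :]
        r = np.sqrt((diff ** 2).sum(axis=-1))
        # Replace the zero self-distances so f and phi vanish there.
        r = np.where(np.eye(n, dtype=bool), np.inf, r)
        rho = f(r).sum(axis=1)     # embedding density seen by each atom
        pair = 0.5 * phi(r).sum()  # pair term; each pair counted twice, halved
        return F(rho).sum() + pair

    # Toy placeholder functions (hypothetical, for illustration only):
    f   = lambda r: np.exp(-r)            # neighbour's density contribution
    F   = lambda rho: -np.sqrt(rho)       # embedding energy
    phi = lambda r: np.exp(-2.0 * r) / r  # short-range pair repulsion

    atoms = np.random.rand(100, 3) * 10.0  # 100 atoms in a 10x10x10 box
    print(eam_energy(atoms, f, F, phi))

No quantum mechanical waves anywhere in sight, just cheap scalar functions of distance; that is the entire reason it scales to billions of atoms.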
Which one is correct? Well, neither, because both are approximations. For some systems, both are precise enough to yield useful information, and are used every day in materials research (even in now-mundane things like thin-film tech, ion implantation, and so on).
(And yes, there are lots of QM/ab initio simulation folks using VASP or Dalton and scoffing at those who use classical potential models or force fields (the naming varies between physics, chemistry, and biology, even though they all do more or less similar simulations). That, too, is horribly silly, and very often leads to someone, usually an established professor, making an argument from authority, which is even more disgraceful.)