The question in this thread wasn't about an AI making life-changing decisions, or needing to produce output that's knowably, objectively correct.
In this case, the challenge was to take two self-contained documents, each describing something finite in scope and presented in a common, structured way, and make inferences between them that can then be checked. That shouldn't be hard, and how it works behind the scenes is irrelevant.
I'm just trying to save time by offloading a relatively straightforward problem to a computer. I could equally well give the task to a junior engineer, but I don't have one to hand right now.
I do have a paid ChatGPT subscription, though - one which has already paid for itself many times over by teaching me some correct science that enabled a substantial cost saving in a product that's now shipping. Like it or not, believe it or not, it's a much better teacher than asking similar questions on a forum. It never says "do a search" or "your approach is wrong" or "I wouldn't do it like that" or gives any answer that's just plain unhelpful - it simply answers the questions asked of it to the best of its ability.
Sometimes it produces complete garbage, just like people do. But on technical topics it's surprisingly accurate - really surprisingly accurate. And, of course, the beauty of correct science is that it can be readily verified and proved to be so.