I think the biggest problem isn't so much the limitations of AI, but the expectations.
<whatever>GPT may take input data, mix it up and regurgitate it in a way that sounds plausible and confident, without ever really understanding it, and it makes mistakes which we call hallucinations. Rather than recognise its own limits and say "I don't know", it makes up rubbish.
SO DO PEOPLE...!
Say you have some medical symptom that you want help with. A normal interaction might be to describe it to a doctor - a person who is highly trained, skilled and experienced - and receive high quality information in return. In terms of expectations, the bar is set quite high.
But describe that symptom to an ordinary person, someone with no specific training or skill, and the quality of the response would be much lower. Quite likely it would be inaccurate, possibly even harmful; essentially what you'd get from asking a random friend or family member.
The AI isn't an expert, but it is doing what an ordinary person would likely do, and in many cases with about the same level of competence.
I've found the paid version of ChatGPT valuable as a technical assistant. It has an impressive ability to take a piece of code, infer its intended function, and then spot bugs. I've had conversations with it on technical topics where it does a remarkable job of actually answering direct questions, correctly enough that I can learn new material that's professionally useful to me.
It never says "do a search" or "get a textbook" or "I wouldn't do it that way", or any of the other useless replies that all too often come from asking on a forum. Sure, it makes mistakes, but again, so do people, and filtering those out is a skill that any good scientist has anyway.