Perhaps out of fear of it someday becoming self-aware and somehow judging society, or simply because it feels proper, who says thank you to AI interfaces like ChatGPT after they assist you with something? Maybe even on the off chance that the people who worked hard building it have access to those statistics? I'm really curious to know how many people do, and why.
Short answer: no.
Life is precious, and the span of a lifetime in this world is limited. Wasting your time thanking a computer program won't help anybody, won't please the programmers, and won't buy you any mercy in the event an AI someday turns against humanity.
Do people thank a Google search for an answer? Thank the browser for showing a webpage? The phone for delivering a message, or the walls of a building for shielding them against bad weather? I guess not.
ChatGPT becoming sentient is far-fetched. It won't happen any time soon. If you doubt that, tinker a little with AI, see what's under the hood, look into how it is trained, observe how it works, and you'll see it is nothing to worry about.
To simplify, ChatGPT is like a big lookup table, like a dictionary of fuzzy tokens.
Instead of looking up words, ChatGPT looks up tokens. Tokens are similar to what we call concepts. Tokens are extracted from huge collections of training text (Wikipedia, GitHub, textbooks, the Internet, etc.) by observing the proximity between words. Some words tend to stay grouped together, similar to English idioms.
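As a toy illustration of that "words that stay grouped together" idea (this is not the real training pipeline, which uses subword tokenizers and neural networks; the corpus and function here are made up for the example), you can simply count adjacent word pairs in a tiny corpus:

```python
from collections import Counter

def frequent_pairs(corpus, top=3):
    """Count adjacent word pairs; frequent pairs hint at idiom-like groupings."""
    pairs = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        pairs.update(zip(words, words[1:]))
    return pairs.most_common(top)

# tiny made-up corpus
corpus = [
    "machine learning is fun",
    "machine learning needs data",
    "deep learning is machine learning at scale",
]
print(frequent_pairs(corpus))
# ("machine", "learning") comes out on top: those two words "stay grouped"
```

Real tokenizers work at the subword level and real models learn statistics far richer than pair counts, but the intuition is the same: regularities in huge piles of text.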
Then the AI takes your input request and looks up the closest match in its dictionary of tokens. It returns words that are usually found near the tokens you just gave as input. It's a fuzzy lookup over tokens, with random noise added while generating an answer, so that it won't repeat itself.
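A minimal sketch of that "lookup plus noise" idea, using a hypothetical score table for what might follow the word "thank" (in real models this is called temperature sampling over the model's predicted probabilities; the scores here are invented):

```python
import math
import random

def sample_next(scores, temperature=1.0):
    """Pick the next token from a score table.

    The temperature controls the random noise: low values almost always
    pick the best match, higher values vary the output so it won't
    repeat itself verbatim.
    """
    # softmax over scores, scaled by temperature
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

# hypothetical scores for tokens that might follow "thank"
scores = {"you": 5.0, "goodness": 2.0, "the": 1.0}
print(sample_next(scores, temperature=0.7))  # usually "you", but not always
```

Run it a few times and the answers vary, which is exactly why two identical prompts to ChatGPT rarely produce identical replies.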
Of course, in practice it is more complicated than can be addressed in a short paragraph. The description above is not meant to be taken verbatim, but those are the main mechanisms behind the magic.
The illusion of sentience is amazingly good when you look only at a reply, without considering the processes behind that reply.
Another aspect to consider: tokens can be anything, not only text. They can be sound, images, bird migration patterns, or whatever other kind of data might be of interest for a given application.
Perhaps out of fear of it
You should not fear the machines.
The evil is not in the machines, but in the people who might use those machines to take advantage of other people. This is the real danger with any new technology: human nature.
Never assume all people are good just because you are. In reality, some people will do atrociously evil things to others, whether or not they realize it.