I don’t know if I am not asking the questions the right way, or the AI is simply screwing up.
It’s a huuuuuge and complicated mathematical equation, whose parameters were adjusted over millions of repeated test-adjust attempts. For a bunch of input numbers it produces a bunch of output numbers. The parameters were chosen so that a human brain seeing the output will take it for a sensible result of the input.
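The test-adjust loop can be sketched as a toy: a "model" with a single parameter, tuned by random search against sample inputs. This is only an illustration of the procedure, not how real networks are trained (they use gradients, billions of parameters, and so on).

```python
import random

def tune(target_fn, steps=5_000):
    """Repeated test-adjust: propose a small random change to the
    parameter, keep it only if it reduces the error on sample inputs."""
    inputs = [x / 10 for x in range(-50, 50)]
    w = 0.0  # the single "parameter" of our one-line model: y = w * x
    best_err = sum((target_fn(x) - w * x) ** 2 for x in inputs)
    for _ in range(steps):
        candidate = w + random.gauss(0, 0.1)   # test: small random adjustment
        err = sum((target_fn(x) - candidate * x) ** 2 for x in inputs)
        if err < best_err:                      # did it help?
            w, best_err = candidate, err        # adjust: keep the change
    return w

w = tune(lambda x: 2.0 * x)  # the tuned parameter ends up near 2.0
```

After enough accepted adjustments, the outputs look "sensible" for the inputs, even though nothing in the loop understands what it is fitting.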
The smortnet does not think in the usual sense of the word. It can’t understand its own outputs, as it doesn’t even have the capability to perceive them, let alone any circuitry to correct or reject them.
Performing such operations is not even limited to machine learning. One can construct a deterministic system which will do the same. The difference with smortnets is that they are resilient against input noise (1), though the consequence is the noise you get in the output, and that constructing them is computationally incomparably cheaper than what would be needed in older, deterministic systems of a similar scale (2).
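The noise-resilience contrast can be shown with a toy pair of systems (my own illustrative example, not anything from a real network): an exact lookup table breaks on a slightly perturbed input, while a smooth parametric function degrades gracefully.

```python
# Deterministic system: an exact lookup table for f(x) = x**2 on a grid.
table = {x: x * x for x in range(11)}

# "Learned" stand-in: a smooth parametric function, w * x * x with w tuned to 1.0.
def smooth(x, w=1.0):
    return w * x * x

noisy_input = 5.0001              # slight input noise

exact = table.get(noisy_input)    # None: the table has no entry for 5.0001
approx = smooth(noisy_input)      # ~25.001: still close to the clean answer 25
```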
(1) For a few recent, famous solutions, noise is actually injected into the system to obtain the results.
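One common form of deliberate noise injection is temperature sampling over next-token scores: instead of always taking the highest-scoring option, the system samples from a softened distribution. A minimal sketch, with made-up logit values:

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Pick an index from `logits`. Low temperature approaches a
    deterministic argmax; higher temperature injects more randomness."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

choice = sample_with_temperature([1.0, 5.0, 2.0], temperature=0.8)
```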
(2) Strictly speaking: traditional solutions would be infeasible at this scale. You could construct a hash function that does exactly what ChatGPT does, but building it would take multiple ages of the universe, if not more.
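A back-of-envelope calculation makes the infeasibility concrete. The vocabulary size and prompt length below are assumed round numbers for illustration, not actual ChatGPT figures:

```python
import math

vocab_size = 50_000    # assumed vocabulary size, for illustration only
prompt_length = 1_000  # assumed prompt length in tokens

# An exhaustive table would need one entry per distinct prompt.
distinct_prompts = vocab_size ** prompt_length

atoms_in_universe = 10 ** 80  # rough, commonly cited estimate

# The table would need roughly 10**4699 entries, which dwarfs
# the number of atoms in the observable universe.
print(math.log10(distinct_prompts))  # ≈ 4698.97
```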