A human can read many books and, as a consequence, learn something by training the neural network in his brain. If he then uses this knowledge to, for example, write a book, nobody would raise IP issues with the authors of the books he has read, unless whole sentences had been copied, which is not the case in this example.
Or consider a painter who is inspired by other contemporary painters. This happened with every major style, from expressionism to cubism, from the Renaissance to abstract painting. Just go to the Louvre and see how painters drew inspiration from one another.
With AI, learning works the same way: by training a neural network. The original data is not stored in a database. If you searched the GPT model itself, you would find no readable content.
The only valid objection would be: does OpenAI have the right or authorisation to read the books? Are the books freely available, or were they purchased?
Except that none of those human painters could produce paintings at a rate of thousands per second, available globally.
And although everybody learns from everybody, intellectual property rights are there for a reason.
So if you think AI should be treated like a human just because some aspects are the same, then with this logic you could also treat a table as a dog.
"Except, that none of those human painters could produce paintings on a thousands/second scale, aviable globally." --> that has exactly nothing to do with IP matters
"And although everybody learns from everybody, intellectual property rights are there for a reason." --> IP is mainly obsolete and the IP of a book expires 75 years after the death of the author (if I recall correctly), so many books used by GPT have no IP - you can seatch Project Gutenberg to get an idea of how many books there are!
"So if you think AI should be treated like a human, just because some aspects are the same, than with this logic you could also treat a table as a dog." --> sorry, but I don't understand your example (table for dog). A neural network does EXACTLY try to replicate the way a biological brain works. It solves problems in the same way. And, incredibly, a side-effect of GPT is that it can provide insight on how the brain really works. So, yes, in my opinion it makes sense to treat content created by AI in a similar way as if it was created by a human.
This link has already been posted on this thread:
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
I really recommend everyone to read it. It was written by the creator of Wolfram Alpha and explains at length how GPT works, why it works so well and what we can expect.
I would say that 2023 is the year in which mankind managed to build an artificial neural network at a size remotely comparable to that of the human brain (smaller, but in the same league).
According to Stephen Wolfram, the main current limitation of GPT and the like is that the neural network is trained and operated sequentially, while the biological brain works in parallel.
Also, the reason we got GPT in 2023 and not, say, ten years earlier is the sheer computing power required, which was not available sooner, and of course the cost resulting from that requirement.
Coming back to IP:
When you buy a book (analog or digital), you "own" the book and can read it as often as you want. You can learn from it. The knowledge you gain is yours and is not bound by any contract with the IP owner of the book. Consider, for example, a physics, electronics or math book: you buy it to learn from it. The only thing you are not allowed to do is replicate the book or parts of it. You may quote from it, but the quotes need to reference the book.
If AI "reads" the book it does learn from it, too. It does NOT store its contents, though. It will, instead, train the underlying neural networks. What gets stored are parameters. These are abstract and you wouldn't be able to reconstruct the original book from them. No copyright violation in sight.
The only debate is: could an author (of a book, website or image) refuse to allow his IP to be processed by AI?
Where does AI start? What about text-to-speech readers for printed text? Could those be used?
What if you train the AI by reading the book to it? Would that be acceptable?
Can AI watch all free TV channels and learn from the broadcast content? They are free-to-air, so why not set up a "bot" to watch the content and learn from it?
All of this is far more complex and my bet is that it is virtually impossible to decide what to do.
Imagine portable supercomputers the size of a mobile phone: a student takes one along to his university lectures to train "his" pet AI...
I have to stop writing here, because I am starting to get into SCI-FI mode.
But to finish off: look at the technological complexity that was needed before humanity got GPT-4.0. And still, it is only a fraction of what a human being can do. Consider that a human is a self-replicating machine! Now tell me that life on Earth started with random proteins mingling in the oceans and, through evolution, led to humans, all in just a few hundred million to a few thousand million years? Hard to believe.
Regards,
Vitor