I don't think that's settled yet:
They are talking about going back in time; I did not say that. I said that there is no objective reality: it is observation dependent. That is actually fully logical even on the macro scale, but because we have had precise measuring equipment in recent centuries, we had the illusion that observer-dependence is itself an illusion, because we thought we could measure things fully objectively. It seems we may not.
And anyway, people don't deal that well with truth.
Ultimately, people will not accept AI telling the truth.
At least, hopefully not.
Depends on the topic.
I can guarantee that politicians will not allow AI to tell the truth about politics; they will ban it before that happens. The entire business of politics is built on the perception of "truth", not truth itself.
But it doesn't work like that, with the politician going into a big hall with the audience and asking the AI.
Political opinions (those of politicians as well) are based on some science, some interpretation, some discussion of it. Now a lot of that interpretation and discussion will go missing, because people will get used to just asking the AI.
As Bicurio wrote later (which I will respond to as well): let's not read through all that stuff, just use the AI to collect it. But this will lessen the time and effort people spend on the subjects, and the details will be lost. Human experts will also have to debate with the AI. After some tests comparing them, the humans will lose on many subjects, and that will be interpreted as the AI being smarter.
And of course the AI can also be biased intentionally by its creator, which is another trap.
Just like now, some "scientific evidence" on certain popular topics (I don't want to name them, of course, but I am not talking about the climate crisis, because that one has really been checked by many scientists for a long time now) is just bullshit. Anyone with a minimum IQ can debunk it, but it is sold as science, and some politics is based on it. (Also, I believe some politicians truly believe it; they are just like other people.)
ChatGPT is one of several AI tools that are or will be available in 2023.
It is a disruptive technology and it will change humanity over time, just like the mobile phone or the internet.
It will replace many jobs: from help desks to phone services, from translators to general content generation.
I have no doubt about that.
As a University teacher, I have scheduled a meeting with all my department colleagues to explain to them what ChatGPT is and what will change this semester: I will actively introduce the students in my class (Programming in a non-IT course) to ChatGPT, and I am giving my colleagues a head start to prepare for the consequences.
There are 3 possible reactions a University can have to ChatGPT:
1) Simply ignore its existence
2) Forbid its use
3) Embrace it
Those teachers who choose option 1 will have all (!) students using ChatGPT to help them write all sorts of essays, reports, etc. They will be grading documents written by software...
Those who try to forbid its use will be in the same situation as the former group, because they won't be able to detect whether a text was written by ChatGPT. I have generated articles about random technical subjects and then submitted them to plagiarism software, and the texts were considered 100% genuine. The students I asked about ChatGPT all knew about it, and some even tested whether ChatGPT would produce the same text for the same input made by different students (accounts and computers). The results were different texts! This means that you can give the same subject to several groups and they will all hand in different texts, all generated independently by ChatGPT.
Also, I tried ChatGPT on different subjects I know about, and the texts created were of surprisingly high quality. I even tried questions from my current exams (relating to CAD/CAM/CAE subjects) and the responses would get full scores.
This means that ChatGPT has made it obsolete to ask students to write reports about some subjects. A machine will do it better and much quicker. Period.
Want bibliographic references? Ask ChatGPT and it will provide you with the relevant keywords and tell you which databases to search. With little effort you get the references and can place them in the text generated by ChatGPT.
I feel 100% confident to give a 30-minute presentation about ANY subject in the world if I get two hours of preparation with ChatGPT and Google. This is disruptive! I am no longer a specialist in a given subject; I am a specialist in using ChatGPT and Google to produce the content I need, as well as to filter and process that information.
And this is the key point one needs to focus on: how to use ChatGPT to quickly learn about any subject or to quickly get the start of an essay.
Universities need to change the task from "write an essay about XYZ" to "use ChatGPT to obtain an essay about XYZ, then discuss the outcome, verify the statements and complete them".
As a result, people will take less time to learn something and to get the relevant information.
A different aspect is that ChatGPT presents wrong information on certain subjects like math or logic. It is fundamental to understand how ChatGPT works and where it will fail.
Simple example: "5 machines produce 5 parts in 5 minutes - how long do 100 machines take to produce 100 parts?"
Most people will answer 1 minute, 100 minutes or 500 minutes. The reason is that you need to correlate 3 quantities, while the brain is normally used to correlating just two.
As a result, people might focus on "5 parts in 5 minutes", conclude that 1 part takes 1 minute, and forget that there were 5 machines.
ChatGPT will give the wrong answer over and over.
Same goes for simple algebra. While the result given looks very good, it is simply wrong.
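The arithmetic behind the puzzle can be checked in a few lines. This is just a minimal sketch of the correct reasoning, not anything specific to ChatGPT:

```python
# 5 machines produce 5 parts in 5 minutes.
machines, parts, minutes = 5, 5, 5

# So each machine produces one part every 5 minutes.
minutes_per_part = minutes / (parts / machines)  # 5.0

# With 100 machines making 100 parts, each machine still
# only has to make one part, so the time is unchanged.
target_machines, target_parts = 100, 100
parts_per_machine = target_parts / target_machines  # 1.0
time_needed = parts_per_machine * minutes_per_part

print(time_needed)  # prints 5.0, not 100
```

The trap is exactly the one described above: collapsing "5 parts in 5 minutes" into "1 part per minute" drops the third quantity, the number of machines.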
But all in all, this is an amazing technology!
I expect to be able to call my bank and instead of having to fight through a stupid voice enabled option tree, I can just say what I want:
"Hello, my name is Vitor XYZ, my birth date is xxxx, please unlock my credit card, because I keyd the wrong pin 3 times."
"Sure. Can you please confirm me your VAT number?"
"It's 1234"
"Thank you. I am happy to inform you that your credit card is now unlocked. Anything else?"
"No. Bye"
The same with any other service accessed through phone.
And why should I worry about the content of our web site? I hope they offer a service where I specify the subjects and products, so that ChatGPT can generate new content every month.
Next, Microsoft Office Word will have a function where I start writing a report and, after a few sentences, click AUTOCOMPLETE and the rest of the report is written automatically.
Same for emails.
Hope I gave you some ideas...
Regards,
Vitor
Well, then we just have to reorganize the whole university education system worldwide within a year. Easily done, isn't it?
Also, obtaining something with the AI rather than collecting it yourself will mean less thinking about the whole subject. If you have thoughts about a topic, you clearly don't have a problem writing a lot about it.
And if you only have a good general idea, it will be harder to weed out the unnoticed incoherences of the AI-created part.
So this will make students focus less on the subject, and more on how to use the AI.
And once the whole university world has got used to one AI, another one, or a modified one, will come along.
But if university professors didn't have too much to do until now, from now on they can spend most of their time tackling AI-related issues, which will also change constantly, so the detection methods and usage methods will keep changing as well. Political censorship will also weed a lot of things out of science, because some things will suddenly disappear from the studies and some will appear, and that disproportionately.
"
And why should I worry about the content of our web site? I hope they offer us a service, where I specify the subjects and products, so that ChatGPT can generate new content every month.
Next Microsoft Office Word will have a function where I start to write a report and after a few sentences, I click on AUTOCOMPLETE and the remaining report is written automatically."
And do you employ people to actually double-check what you offer, or do you just trust the AI blindly?
And how does the autocomplete know what YOU wanted to write?
BUT if everything works really well, that means humans are not needed for thinking. Then what else are they needed for?