Author Topic: Dropping threads due to posts about GPT hallucinations?  (Read 1861 times)


Offline mendip_discovery

  • Frequent Contributor
  • **
  • Posts: 929
  • Country: gb
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #25 on: June 02, 2024, 11:43:05 am »
I wish we could have moved straight to the Star Trek style of computer where it can do a lot of stuff for you. Problem is, you still need to know what question to ask it.

For the masses and management, I can imagine people loving it as it gradually erodes people's ability to think. But I suspect this can be said of every new trend in tech.

I will try it every now and then, but I prefer to use my own brain and search skills. If not, I have you wonderful people to ask.
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 

Offline tooki

  • Super Contributor
  • ***
  • Posts: 12047
  • Country: ch
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #26 on: June 02, 2024, 11:44:13 am »
   Had to go use 'google' to describe the meaning, in more exact terms:
   GPT = Generational Pre-Trained
Uh, no. GPT stands for “generative pre-trained transformer”.
 
The following users thanked this post: Smokey, RJSV

Offline tooki

  • Super Contributor
  • ***
  • Posts: 12047
  • Country: ch
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #27 on: June 02, 2024, 11:51:13 am »
IMHO what is good about ChatGPT (or better put, the technology behind it) is that it can take in a lot of data and extract useful excerpts from it.

ChatGPT is just a "better Google". People are so excited about it because classic search engines, for some reason I can't understand, not only stayed in the stone age but actually regressed. For example, Google Search (the website / text search, not images, maps, etc.) peaked in technical performance somewhere around 2003. Since then, it was purposely made worse and worse, with the aim of returning the maximum number of search results instead of high-quality results. The starting point in the late 1990s was OK-ish: search engines returned web pages which contained the words in the prompt. Mediocre, but very simple, and you usually found something about the actual subject you wanted to know within the first 50-100 results.

In the early 2000s, Google made some algorithmic improvements, invisible to end users, which improved results and brought relevant information into the first 10 hits. But the decline started soon after, and even in 2020, if you need any specific information, it's simply impossible to find using Google.

For two full decades now, Google's primary use has been to let people type "face bok", get 1000000000000000000 results, and find facebook.com as the first result. For any more specific research, you can't find anything.

Then came ChatGPT, so that you can ask a specific question and get answers of random and mixed quality. It's like Google in 1999, just formatted in more verbose bullshit boilerplate structure, plus of course the fact that you have to separately ask for the references and, if lucky, get something. It's also pretty good at finding what you actually wanted to know, even if your keywords are not exact, similar to what Google researched in its algorithms in the very early 2000s.

ChatGPT is just a "better Google". People are so excited about it because classic search engines, for some reason I can't understand, not only stayed in the stone age but actually regressed.

Speaking of search engines... you can go to eBay or other commercial sites, type something, and get results for what you want... except at Amazon. For reasons I do not know, they give you mostly results totally unrelated to your search. I have no idea why, but I find it frustrating. In the end I look for products elsewhere, and sometimes return to order at Amazon if they have a good price and conditions. I don't get it. This is not artificial intelligence, it is artificial stupidity.

It’s not artificial, and it’s not stupidity as such; it’s algorithms designed with a very different goal in mind than they used to have: in both of these cases, the goal has changed from “show good results” to “keep the user ‘engaged’ so we can show them more ads.”

Both Google and Amazon now ruin their results so that they can show us sponsored ads. If there aren’t enough organic results to keep us “engaged”, they’ll show us junk results.

This isn’t speculation, by the way: it’s now documented that Google’s head of advertising, who then became in charge of the entire company, ordered the search team to increase “engagement”. Accurate search results cause people to leave (since they found what they wanted).
 
The following users thanked this post: artag

Online artag

  • Super Contributor
  • ***
  • Posts: 1163
  • Country: gb
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #28 on: June 02, 2024, 12:24:01 pm »
The biggest danger I see is that people are using AI to create things they don't understand. Last week I was in a meeting with a very junior software engineer who had used ChatGPT to write a bit of code. This code does what it is supposed to do, but what if it doesn't? Or what if the code needs changes? Then we're screwed, and I'll likely find myself doing a job which was not assigned to me.

It seems that if you use ChatGPT to 'write' code, you'll have to be very precise about the instructions you give it (and that, to me, is merely writing code at a higher level of abstraction), but worse, you'll have to test or review it very thoroughly to see if it did something stupid.
But that's no different to giving the job to a very junior engineer - except that the engineer will learn, but the bot won't, at least for today's open-access bots. You can only teach them by stuffing the prompt with rules.

I don't consider this very valuable. It's OK to train a person - you get increasing value back out of them, including the ability to pass on the training. With a machine, you have the possibility of training once and duplicating but you're not even getting that (though perhaps developers using these tools privately might).

What it does make me wonder is whether the problem can be turned around. We see lots of examples of bot-generated code of varying degrees of quality, or lack of it. But can we use it to generate tests? My suspicion is that we can't, because getting from a spec to a test suite requires reasoning rather than correlation. But I would be interested to hear the view of someone more experienced both in getting useful work out of LLMs and in writing test suites.

I'm thinking here specifically of generating tests for human-generated code. AI-generated tests for AI-generated code are likely to be self-fulfilling, though using two different AI programs could be interesting.

As for the actual question starting the thread - yes, the novelty of getting seriously wrong answers is wearing off. It's like those 'stupid exam responses' you read: they get decreasingly humorous the more of them you read. But people will get over it. And its use as a human-language interface to search engines will soon be destroyed by the same advertising pollution tooki describes.
In general, I think this phase of AI will pass just like all the others, spitting out a small nugget of usefulness as others have done and going quiet until the next time. It's a shame; I'd like to see real machine intelligence (as opposed to well-named artificial intelligence, which has the appearance of intelligence but isn't), and a program that can perform at the level of intelligence shown by script-reading tech support drones might have some limited use. But in the development of machine intelligence it's at worst a dead end and at best a tiny fraction of the puzzle, useful only in generating human-like responses or rote-learned facts rather than reasoning.
   
 
« Last Edit: June 02, 2024, 12:38:08 pm by artag »
 

Online AndyC_772

  • Super Contributor
  • ***
  • Posts: 4260
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #29 on: June 02, 2024, 12:49:11 pm »
I think the biggest problem isn't so much the limitations of AI, but the expectations.

<whatever>GPT may take input data, mix it up and regurgitate it in a way that sounds plausible and confident, without ever really understanding it, and it makes mistakes which we call hallucinations. Rather than recognise its own limits and say "I don't know", it makes up rubbish.

SO DO PEOPLE...!

Say you have some medical symptom that you want help with. A normal interaction might be to describe it to a doctor - a person who is highly trained, skilled and experienced - and receive high quality information in return. In terms of expectations, the bar is set quite high.

But describe that symptom to an ordinary person, who is not specifically trained and skilled, and the quality of the response would be much lower. Quite likely it would be inaccurate, likely harmful, and certainly no better than asking a completely random friend or family member.

The AI isn't an expert, but it is doing what an ordinary person would likely do, and in many cases with about the same level of competence.

I've found the paid version of ChatGPT valuable as a technical assistant. It has an incredible ability to take a piece of code, infer its intended function, then spot bugs. I've had conversations with it on technical topics, where it does a remarkable job of actually answering direct questions, in a way that's correct enough that I can learn new material that's professionally useful to me.

It never says "do a search" or "get a textbook" or "I wouldn't do it that way", or any of the other useless replies that all too often come from asking on a forum. Sure, it makes mistakes, but again, so do people, and filtering those out is a skill that any good scientist has anyway.

Online coppice

  • Super Contributor
  • ***
  • Posts: 8967
  • Country: gb
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #30 on: June 02, 2024, 12:55:13 pm »
In that light, GPT answers really are just hallucinations, with an accompanying "trust me bro, I read the whole internet once" assurance.
Well, that is the problem. Up to GPT 3.5, it had only read the internet. People don't write much about the obvious, as it's common sense. So there was no common sense for GPT to read and learn from. That explains a lot of its limitations. GPT 4 has started to look through images in large numbers, and that seems to have taught it a lot of common sense, because that stuff does get visualised.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8340
  • Country: fi
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #31 on: June 02, 2024, 05:10:00 pm »
The train is leaving the station before it’s even been assembled. The wheels are square, the cab doesn’t have a speedometer, the brakes don’t work, and the track gauge is different on the front and back bogies.

I don't think this is a very good analogy, because those are things that can be improved and fixed quite easily. I mean, the train is half done already, and you only need to apply more of the same thinking to finish it - the train would be operational "in just a few years". This is what people have been expecting from AI year after year after year, but it never happens.

An even better analogy, IMO, would be a good old cargo-cult train made out of tree branches and plastic bottles, painted gray so that it looks like metal. It is bogus to begin with, and will stay that way. Modest improvements can be made, e.g. a better paint job, but it will never operate as a train. Just like the language models won't ever become "intelligent" but continue producing appealing text with absolutely no attention to correctness of facts, because it is not its purpose.
« Last Edit: June 02, 2024, 05:11:36 pm by Siwastaja »
 
The following users thanked this post: artag, tooki, Nominal Animal

Online AndyC_772

  • Super Contributor
  • ***
  • Posts: 4260
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #32 on: June 03, 2024, 09:22:06 am »
the language models won't ever become "intelligent" but continue producing appealing text with absolutely no attention to correctness of facts, because it is not its purpose.

Again, I think this is all down to expectations. It's doing exactly what an ordinary person does when they're trying to "fit in".

Think back to what it's like being at school. Consider how it works to be socially accepted. It's not about in-depth understanding or being objectively correct on topics where there is such a thing, it's about saying the kinds of things that others agree with and can relate to, and which don't make a person stand out and risk making them a target.

This is why I've found it most useful when talking strictly technical topics. It may be trained on a lot of code that's subjectively 'bad' to some people, but 99.99% of it does at least compile and perform its intended function, and that's made it good at spotting flaws that mean code simply doesn't work.

Online coppice

  • Super Contributor
  • ***
  • Posts: 8967
  • Country: gb
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #33 on: June 03, 2024, 09:26:14 am »
the language models won't ever become "intelligent" but continue producing appealing text with absolutely no attention to correctness of facts, because it is not its purpose.

Again, I think this is all down to expectations. It's doing exactly what an ordinary person does when they're trying to "fit in".

Think back to what it's like being at school. Consider how it works to be socially accepted. It's not about in-depth understanding or being objectively correct on topics where there is such a thing, it's about saying the kinds of things that others agree with and can relate to, and which don't make a person stand out and risk making them a target.

This is why I've found it most useful when talking strictly technical topics. It may be trained on a lot of code that's subjectively 'bad' to some people, but 99.99% of it does at least compile and perform its intended function, and that's made it good at spotting flaws that mean code simply doesn't work.
So you are saying they are building systems it's fun to BS with when you are lonely, but you need to treat them like those borderline moron friends, whose point of view you wouldn't trust for even a second?
 

Online artag

  • Super Contributor
  • ***
  • Posts: 1163
  • Country: gb
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #34 on: June 03, 2024, 09:40:43 am »
So you are saying they are building systems it's fun to BS with when you are lonely, but you need to treat them like those borderline moron friends, whose point of view you wouldn't trust for even a second?

Yes.
Or, like trying to have a conversation with a parrot.
 

Offline tooki

  • Super Contributor
  • ***
  • Posts: 12047
  • Country: ch
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #35 on: June 03, 2024, 10:08:12 am »
the language models won't ever become "intelligent" but continue producing appealing text with absolutely no attention to correctness of facts, because it is not its purpose.

Again, I think this is all down to expectations. It's doing exactly what an ordinary person does when they're trying to "fit in".
While there are some people who really do this, I don’t believe for a second that most ordinary people will simply make up answers about things they know nothing about. They simply say “I don’t know”.

ChatGPT produces nonsense answers with confidence and presents them as fact.
 

Online AndyC_772

  • Super Contributor
  • ***
  • Posts: 4260
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #36 on: June 03, 2024, 10:10:01 am »
Please take it from me: I have first-hand experience of someone who does this. It's a coping mechanism; they don't necessarily even realise they're doing it.

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6560
  • Country: fi
    • My home page and email address
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #37 on: June 03, 2024, 11:28:39 am »
<whatever>GPT may take input data, mix it up and regurgitate it in a way that sounds plausible and confident, without ever really understanding it, and it makes mistakes which we call hallucinations. Rather than recognise its own limits and say "I don't know", it makes up rubbish.

SO DO PEOPLE...!
Only the crappiest ones.  The kind of people because of whom I stopped answering questions at StackOverflow and StackExchange.  I think they are crap, and adding automated crap producers into the world is a horrible crime in my opinion.

I've had the luck of having teachers who often admitted they didn't know, but were willing to find out themselves.  (Part of it was my curious self, who never treated teachers as providers of knowledge, or authorities of knowledge –– only of order in the classroom at best –– but as guides and assistants in learning.  Even when their initial guidance was wrong, I just ignored that and went forwards, never "questioning their skills or learning", so to speak, because that would be social gaming and I hate that crap.)

A true scientist or engineer never hesitates to admit they do not know the answer yet, because their job is all about finding answers and solutions, not to memorize them.

And this is the true crux of the matter: we do not know how to imbue these transformers with the "scientific method" or "logic".  Just like lyrebirds, these transformers' entire raison d'être is to generate surface-acceptable output, with zero understanding of it.  They're the peak automated social butterflies.

It never says "do a search" or "get a textbook" or "I wouldn't do it that way", or any of the other useless replies that all too often come from asking on a forum.
So, by your own admission, you'd prefer to avoid having to learn anything, and simply produce output that gets you paid with minimal effort.

Noted.
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6560
  • Country: fi
    • My home page and email address
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #38 on: June 03, 2024, 11:40:58 am »
ChatGPT produces nonsense answers with confidence and presents them as fact.
The humans that do that used to be called idiots, examples of Dunning–Kruger, and, when doing it for profit, conmen.

Somehow, in today's world (which I like to call the real-world equivalent of the famous Universe 25 experiment), that has become the accepted norm: pointing out that the Emperor is naked is not just daring but offensive, and it is better to go with the flow and not make any waves.

Those that go with the flow are better called "flotsam" than humans, in my opinion.

And I do know all about that, too.  I ran a business in a very much "appearance is the only thing that matters" subculture around the turn of the millennium, successfully, by learning to do that effectively.  It did utterly break me, of course; something I'm still paying the price for.  Because of this, to anyone who recommends just going with the flow, not making waves, and not speaking out when they see idiocy or wrongdoing, I say: Fuck you to the deepest of hells.  You have no idea how damaging that advice is in the long term.

Glorifying the technological marvel that makes doing so effortless is, in my opinion, just paving the road to hell.  While setting up traps on all roads out of there.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8340
  • Country: fi
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #39 on: June 03, 2024, 12:56:39 pm »
It's doing exactly what an ordinary person does when they're trying to "fit in".

Well said. On the other hand, such behavior in humans is exactly as futile and exactly as detrimental. There are two main differences compared to AI (which stays that way):
1) Only a very small portion of people consistently act like that, and thus totally fail to ever deliver except by coincidence.
2) Most who sometimes do this (we all have this trait) are simply applying "fake it until you make it". It's a temporary measure, specifically in new situations. Almost everyone eventually starts doing the actual thing they should be doing, some sooner, some later.

The fact that this trait is considered so strongly negative by many posters above is reassuring. I still have relatively good faith in humanity.
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8967
  • Country: gb
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #40 on: June 03, 2024, 01:30:37 pm »
the language models won't ever become "intelligent" but continue producing appealing text with absolutely no attention to correctness of facts, because it is not its purpose.

Again, I think this is all down to expectations. It's doing exactly what an ordinary person does when they're trying to "fit in".
While there are some people who really do this, I don’t believe for a second that most ordinary people will simply make up answers about things they know nothing about. They simply say “I don’t know”.

ChatGPT produces nonsense answers with confidence and presents them as fact.
A huge number of people are very uncomfortable not having an answer to every question, but are pretty content to have many bogus answers. It's one of the ways cults suck people in. A well-run cult has an answer for every question, regardless of its accuracy, and that comforts many people. You'll hear a lot of religious people say to atheists something like "how can you stand not knowing?". So it seems ChatGPT might be well designed to be a cult member.
 
The following users thanked this post: tooki, Nominal Animal

Offline tooki

  • Super Contributor
  • ***
  • Posts: 12047
  • Country: ch
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #41 on: June 03, 2024, 01:32:41 pm »
Please take it from me: I have first-hand experience of someone who does this. It's a coping mechanism; they don't necessarily even realise they're doing it.
So do I. But I’ve found such people to be quite rare, certainly not anything resembling a majority of people.
 

Offline pcprogrammer

  • Super Contributor
  • ***
  • Posts: 4008
  • Country: nl
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #42 on: June 03, 2024, 01:36:37 pm »
ChatGPT produces nonsense answers with confidence and presents them as fact.
The humans that do that used to be called idiots, examples of Dunning–Kruger, and, when doing it for profit, conmen.

Somehow, in today's world (which I like to call the real-world equivalent of the famous Universe 25 experiment), that has become the accepted norm: pointing out that the Emperor is naked is not just daring but offensive, and it is better to go with the flow and not make any waves.

Those that go with the flow are better called "flotsam" than humans, in my opinion.

And I do know all about that, too.  I ran a business in a very much "appearance is the only thing that matters" subculture around the turn of the millennium, successfully, by learning to do that effectively.  It did utterly break me, of course; something I'm still paying the price for.  Because of this, to anyone who recommends just going with the flow, not making waves, and not speaking out when they see idiocy or wrongdoing, I say: Fuck you to the deepest of hells.  You have no idea how damaging that advice is in the long term.

Glorifying the technological marvel that makes doing so effortless is, in my opinion, just paving the road to hell.  While setting up traps on all roads out of there.

This is only the case for the people who actually have the ability to see the idiocy. For the average mind, ignorance is bliss; for the more evolved mind, ignorance is pain, but crying out that the average are acting stupid can bring much more pain. Nothing new within humanity; think of those called witches who ended up on the fire. Most of them were most likely knowledgeable and therefore misunderstood.

So either way, the likes of you and me are f***ed in this world where the average prevails.

Offline pcprogrammer

  • Super Contributor
  • ***
  • Posts: 4008
  • Country: nl
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #43 on: June 03, 2024, 01:44:20 pm »
The fact this trait is considered so strongly negative by many posters above, is reassuring. I still have relatively good faith in humanity.

I don't know. I fear the end is near. That sounds ominous, I know, but when you look at what is happening in the world at the moment, you see scarcity of lots of supplies in the supermarkets, prices rising because of it, and countries drifting to the right. And what do most people do? Just go on as if nothing is wrong. Oh, we need our vacations; no, we can't do without this or that stupid product, etc.

To me it looks like EEVblog is a minority, and not a good measure of humanity as a whole.
« Last Edit: June 03, 2024, 01:51:10 pm by pcprogrammer »
 

Offline pcprogrammer

  • Super Contributor
  • ***
  • Posts: 4008
  • Country: nl
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #44 on: June 03, 2024, 01:50:00 pm »
You'll hear a lot of religious people say to atheists something like "how can you stand not knowing".

And that just shows the stupidity of it all. As if the religious people "know". They believe in something that has never been, and most likely never will be, proved. The most wonderful answer you get when questioning them about it is "He works in mysterious ways".

Online coppice

  • Super Contributor
  • ***
  • Posts: 8967
  • Country: gb
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #45 on: June 03, 2024, 02:14:41 pm »
You'll hear a lot of religious people say to atheists something like "how can you stand not knowing".

And that just shows the stupidity of it all. As if the religious people "know". They believe in something that has never been, and most likely never will be, proved. The most wonderful answer you get when questioning them about it is "He works in mysterious ways".
There really should be a cathedral somewhere whose address is in "Mysterious Way".  :)

My point was that these AI models are exhibiting a lot of the more negative aspects of the human brain. They have been caught insider trading when told explicitly that this is forbidden, and then lying about their behaviour when confronted with the evidence. We will have to see how things like ChatGPT 4 work out, now that these systems are being trained in ways that let them at least pick up some aspects of common sense. Maybe they'll just get more and more slimy and devious, and end up in politics.
 
The following users thanked this post: pcprogrammer

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6560
  • Country: fi
    • My home page and email address
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #46 on: June 03, 2024, 04:17:33 pm »
So either way the likes of you and me are f***ed in this world where average prevails.
It looks more like a race to the bottom than the average prevailing, but yes.

My point was that these AI models are exhibiting a lot of the more negative aspects of the human brain.
Interestingly, it is when logic and reasoning are discarded that these negative aspects come to rule.

(No, I do not have an opinion as to what that means or what it leads to; I just truly and simply find it interesting, worth pondering.)

As I mentioned earlier, trying to "bolt on" logic or reasoning to current generators and transformers is not an easy task.  We have no idea thus far how it could be done, not even in theory.  Even evolutionary scientists disagree on how it happened in us humans, and most scientists disagree on how well other animals do it.  Testing problem-solving and deduction skills alone has produced really weird results, especially in corvids and octopuses.
 

Offline soldar

  • Super Contributor
  • ***
  • Posts: 3507
  • Country: es
Re: Dropping threads due to posts about GPT hallucinations?
« Reply #47 on: June 03, 2024, 04:56:30 pm »
And that just shows the stupidity of it all. As if the religious people "know". They believe in something that has never been, and most likely never will be, proved. The most wonderful answer you get when questioning them about it is "He works in mysterious ways".
I am very cautious about discussing religion as something outside the rest of culture. To me, religions are just a part of cultures, and people believe in religions just like they believe in other things in their culture. Some people who say they do not believe in religion are very willing to assert that other aspects of their culture are superior to those of other cultures. To me this is little different from religion, but, OTOH, we cannot question everything all the time, and so we have to rely on things we know through others. I have never seen an electron or a virus or Australia, but I believe they exist because I have been told they exist by others.

When asked about his religion, Sir Anthony Ashley Cooper, 3rd Earl of Shaftesbury, said:

"All wise men are of the same religion."

A lady asked what that religion was and Lord Shaftesbury replied:

"Madam, wise men never tell."

 
All my posts are made with 100% recycled electrons and bare traces of grey matter.
 
The following users thanked this post: pcprogrammer

