Author Topic: Why offtopic is going to hurt us all...


Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20770
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why offtopic is going to hurt us all...
« Reply #25 on: October 06, 2024, 09:52:24 pm »
For technical problems I'm finding myself using ChatGPT a lot more now.  Maybe around 25-30% of the time.  I would previously dive deep into the annals of Google, Stack Overflow etc to get an answer but I don't need to any more.  It's great.  Of course, you have to be wary of the LLM bullshitting you... but it's not like the internet isn't full of bullshit either.
I think it's more realistic to say Google have trashed their system to the point where even a flaky piece of garbage, like ChatGPT, can do better.
I don't think so. I asked that garbage LLM to give me some ICs with a specific feature. And it invented non-existent features for ICs with slightly similar functions. You ask it what's the second biggest capital in Europe, and it tells you that it's Istanbul. And you call it out on its bullshit, it apologizes and creates some more bullshit.

And to drive home the point to tom66, a conventional search wouldn't create such features for an imaginary IC. It might irrelevantly indicate an existing IC that had similar features - very different.

The "apologise and emit more bullshit" is what you expect from non-responsive customer service helplines (service in the way bulls service cows, and help in the sense of fob off).
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 7054
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: Why offtopic is going to hurt us all...
« Reply #26 on: October 06, 2024, 09:53:23 pm »
Never try to get ChatGPT to do something original. It simply can't.

(emphasis mine)

Q.E.D.

Yes - my bad, I edited the post to correct this error.  The BBCode quote formatting caught me out: I cropped your quote out of the inner quote in the wrong order, incorrectly attributing the inner quote to you.

(Perhaps that was a human hallucination?  :))

No part of my post denies the reality of LLM hallucinations.  But I contend that LLMs are still very useful and better than Google in many respects.  I'm well aware of the lawyer case - that's quite a good one.  But that doesn't make LLMs useless or even bad.  Like all tools, they have their use cases.  Yes, I suppose you could describe them as statistical bullshit models that have been trained to not bullshit so much.  But so are humans :).
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20770
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why offtopic is going to hurt us all...
« Reply #27 on: October 06, 2024, 10:03:00 pm »
Never try to get ChatGPT to do something original. It simply can't.

(emphasis mine)

Q.E.D.

Yes - my bad, I edited the post to correct this error.  The BBCode quote formatting caught me out: I cropped your quote out of the inner quote in the wrong order, incorrectly attributing the inner quote to you.

(Perhaps that was a human hallucination?  :))

No part of my post denies the reality of LLM hallucinations.  But I contend that LLMs are still very useful and better than Google in many respects.  I'm well aware of the lawyer case - that's quite a good one.  But that doesn't make LLMs useless.  Like all tools, they have their use cases.

Of course, but what are the tools' limitations, therefore what are the valid/invalid use cases, and what are the consequences of people using them for invalid use cases?

LLMs have been used to interpret medical scans (e.g. X-rays) to determine the likelihood that treatment will be successful. If unlikely, treatment is denied.

Unfortunately LLMs cannot indicate why they come to such decisions. In one case it was eventually determined that they were detecting the font on the medical scan! One hospital had poor results because it was in a poor area and poorly funded, and used font A. Another hospital with better results in a more affluent area used font B.

The LLM thus - as a consequence of its fundamental operation - perpetuated bias by unwittingly assessing hospitals rather than patients. Patients' lives are at risk. Your family's lives may be at risk.

Simple searches couldn't make that mistake. People could explain their reasoning, so that the validity could be determined.
« Last Edit: October 06, 2024, 10:05:57 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 7054
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: Why offtopic is going to hurt us all...
« Reply #28 on: October 06, 2024, 10:07:23 pm »
I would not use an LLM for anything that involved life safety or where the failure of that LLM's output led to significant economic harm to my employer/clients.   

I will use an LLM where the failure of the output will be, e.g., slightly longer to debug a problem (so I only wasted a little bit of time), or otherwise of little to no consequence.  So far, my experience has been a net gain in time: where I might previously have spent 30 minutes Googling an obscure linker or compiler error, I can paste a subset of code into ChatGPT and get an answer for why gcc has given me a particular error in under 30 seconds.

As a more specific example, C++ templating errors can sometimes be hard to debug, and gcc can produce a huge number of errors due to one missed declaration.  I am still somewhat new to C++, more of a C guy, so I have found the breakdown of what caused the errors very useful; in the majority of cases (roughly 9 out of 10?) it has been helpful.
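To make the kind of cascade I mean concrete, here is a minimal, deliberately non-compiling sketch (the Sensor type and names are made up for illustration): one bad use of a standard container, and gcc answers with pages of instantiation notes from deep inside the headers.

Code: [Select]
#include <map>
#include <string>

struct Sensor {};  // hypothetical type with no operator< defined

int main()
{
    // std::map must be able to order its keys, but Sensor is not comparable.
    // The mistake is one line below; the diagnostic is screens of candidate
    // and instantiation notes from inside <map> and std::less.
    std::map<Sensor, std::string> names;
    names[Sensor{}] = "boiler";  // triggers the template error cascade
    return 0;
}

One missing operator< (or a missed declaration, as above) is all it takes, and the LLM's one-paragraph summary of which line actually caused it is where the time saving comes from.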

As for explaining how they got there, look at GPT-o1 preview.  It is capable of reasoning its way to a solution.  And when it makes a mistake, you can see the mistake in the reasoning, so it's not a post-hoc reconstruction of the reasoning path, if that makes any sense.
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7988
  • Country: nl
  • Current job: ATEX product design
Re: Why offtopic is going to hurt us all...
« Reply #29 on: October 06, 2024, 10:22:36 pm »
For technical problems I'm finding myself using ChatGPT a lot more now.  Maybe around 25-30% of the time.  I would previously dive deep into the annals of Google, Stack Overflow etc to get an answer but I don't need to any more.  It's great.  Of course, you have to be wary of the LLM bullshitting you... but it's not like the internet isn't full of bullshit either.
I think it's more realistic to say Google have trashed their system to the point where even a flaky piece of garbage, like ChatGPT, can do better.
I don't think so. I asked that garbage LLM to give me some ICs with a specific feature. And it invented non-existent features for ICs with slightly similar functions. You ask it what's the second biggest capital in Europe, and it tells you that it's Istanbul. And you call it out on its bullshit, it apologizes and creates some more bullshit.

And to drive home the point to tom66, a conventional search wouldn't create such features for an imaginary IC. It might irrelevantly indicate an existing IC that had similar features - very different.

The "apologise and emit more bullshit" is what you expect from non-responsive customer service helplines (service in the way bulls service cows, and help in the sense of fob off).
There are better LLMs, like Bing, that will actually use current information from the web to give you a correct answer.
And we argue about whether an LLM is good or not while they are reducing the processing power in the background, so it uses fewer resources to give an answer. ChatGPT has declined considerably over the last year.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20770
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why offtopic is going to hurt us all...
« Reply #30 on: October 06, 2024, 10:55:25 pm »
I would not use an LLM for anything that involved life safety or where the failure of that LLM's output led to significant economic harm to my employer/clients.   

Good. But not relevant to whether LLMs hallucinate bullshit/rubbish.

But that won't stop other people/companies. Even without LLMs, they hide scrutiny of their product's output under the veil of commercial secrecy. LLMs fit that behaviour perfectly. (Examples: US courts "should this accused/convicted person be put in jail")

LLMs should be avoided until they can indicate why they emitted a result. That's been a problem for 40 years, and is still an "active research topic". Igor Aleksander's 1983 WISARD, effectively the forerunner of today's LLMs, demonstrated a key property of modern LLMs: you didn't and couldn't predict/understand the result it would produce, and they can't indicate their "reasoning". WISARD correctly distinguished between cars and tanks in the lab, but failed dismally when taken to Lüneburg Heath in north Germany. Eventually they worked out the training set was tanks under grey skies and car adverts under sunny skies.

Different fonts, anyone?

LLMs give The Answer, and the lazy/ignorant won't question that.
Conventional search gives a Set (or Bag) of related answers, and you must examine and select them.

Quote
I will use an LLM where the failure of the output will be, e.g., slightly longer to debug a problem (so I only wasted a little bit of time), or otherwise of little to no consequence.  So far, my experience has been a net gain in time: where I might previously have spent 30 minutes Googling an obscure linker or compiler error, I can paste a subset of code into ChatGPT and get an answer for why gcc has given me a particular error in under 30 seconds.

As a more specific example, C++ templating errors can sometimes be hard to debug, and gcc can produce a huge number of errors due to one missed declaration.  I am still somewhat new to C++, more of a C guy, so I have found the breakdown of what caused the errors very useful; in the majority of cases (roughly 9 out of 10?) it has been helpful.

As for explaining how they got there, look at GPT-o1 preview.  It is capable of reasoning its way to a solution.  And when it makes a mistake, you can see the mistake in the reasoning, so it's not a post-hoc reconstruction of the reasoning path, if that makes any sense.

I first used C in ~1982, when there were only two books on the language.

In 1988 I kicked the tyres with Objective-C and C++. I rapidly decided C++ was too difficult and awkward to be productive; Objective-C was much better. (Objective-C is Smalltalk -GC +C-like syntax, and is the underpinning of all modern Apple products.)

In the 90s I occasionally revisited C++, before deciding "if C++ is the answer, you need to revisit your question".

IMNSHO, use C for low-level stuff, and a modern productive language for everything else. Pleasingly, many influential US bodies are making the same point, albeit more circumspectly.
« Last Edit: October 06, 2024, 11:04:12 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online langwadt

  • Super Contributor
  • ***
  • Posts: 4778
  • Country: dk
Re: Why offtopic is going to hurt us all...
« Reply #31 on: October 06, 2024, 11:05:57 pm »
For technical problems I'm finding myself using ChatGPT a lot more now.  Maybe around 25-30% of the time.  I would previously dive deep into the annals of Google, Stack Overflow etc to get an answer but I don't need to any more.  It's great.  Of course, you have to be wary of the LLM bullshitting you... but it's not like the internet isn't full of bullshit either.
I think its more realistic to say Google have trashed their system to the point where even a flaky piece of garbage, like ChatGPT, can do better.
I don't think so. I asked that garbage LLM to give me some ICs with a specific feature. And it invented non-existing features for ICs with slightly similar functions. You ask it what's the second biggest capital in Europe, and it tells you that it's Istanbul. And you call it out on it's bullshit, it apologizes and creates some more bullshit.

And to drive home the point to tom66, a conventional search wouldn't create such features for an imaginary IC. It might irrelevantly indicate an existing IC that had similar features - very different.

The "apologise and emit more bullshit" is what you expect from non-responsive customer service helplines (service in the way bulls service cows, and help in the sense of fob off).

https://abovethelaw.com/2024/02/airline-said-its-not-responsible-for-terrible-advice-from-its-own-customer-service-ai-bot-the-court-disagreed/
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15444
  • Country: fr
Re: Why offtopic is going to hurt us all...
« Reply #32 on: October 06, 2024, 11:13:43 pm »
https://abovethelaw.com/2024/02/airline-said-its-not-responsible-for-terrible-advice-from-its-own-customer-service-ai-bot-the-court-disagreed/

Interesting. I see two key points: the good thing is that the court said no, but the other is that the company definitely tried to pull this off, showing that companies will keep trying until they get new, more favorable laws through lobbying. The intent is there; it's just a matter of time.
 

Offline DavidAlfa

  • Super Contributor
  • ***
  • Posts: 6278
  • Country: es
Re: Why offtopic is going to hurt us all...
« Reply #33 on: October 06, 2024, 11:26:07 pm »
Because Google gathers and analyzes everything about you to provide the most BS results possible.
Same as YouTube, AliExpress...

I often do the searching in an incognito window... The difference is crazy sometimes.
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15444
  • Country: fr
Re: Why offtopic is going to hurt us all...
« Reply #34 on: October 07, 2024, 12:13:20 am »
There are better LLMs, like Bing, that will actually use current information from the web to give you a correct answer.

Bing uses OpenAI's GPT-4. It's been "customized" to be a better fit for search engine needs than ChatGPT, but is based on the same thing, really.

Currently, an alternative for those not willing to use MS stuff (at least directly) is Perplexity: https://www.perplexity.ai/
It provides references for all the statements it makes. Not too bad.

Still, while these tools can save you some time, they are no full replacement for conventional search engines - indeed, the fact that they extract the information they "consider you'll find useful" makes them inherently biased. Conventional search engines (even when they push per-profile links, as most do these days unless you use private browsing) give you more varied search results. Which, again, can certainly be a waste of time, but is also an opportunity to find info you didn't directly think of (or that other people would not have asked for), and that's also a plus.

Because Google gathers and analyzes everything about you to provide the most BS results possible.
Same as YouTube, AliExpress...
I often do the searching in an incognito window... The difference is crazy sometimes.

True. Haven't really noticed for Aliexpress, as its search engine overall is pretty bad anyway.

But don't get the idea that LLM-based engines will be immune to that: that makes no sense, since that's where the money is.
For now they are, more or less, although already biased. But the fact that they summarize info and give you only a small subset of the information that, again, they "think" is what you are after is even more of a future opportunity to shove ads at you in ways that are much more subtle (and so will be much more expensive for advertisers).
« Last Edit: October 07, 2024, 12:15:42 am by SiliconWizard »
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4541
  • Country: nz
Re: Why offtopic is going to hurt us all...
« Reply #35 on: October 07, 2024, 01:42:21 am »
You ask it what's the second biggest capital in Europe, and it tells you that it's Istanbul.

Isn't that correct, or at least arguable? It's complicated.

It's 100% certain that Moscow and Istanbul are the two largest European cities. They're close enough that different definitions (boundaries, commuters...) probably put them in different orders. Istanbul is larger than Moscow in many measures, but you can also argue that only about 70% of the population is geographically in Europe, 30% in Asia. Even with that, the part in Europe is still probably larger than Paris or London. Also, 100% of Turkey is in the European Customs Union.

In maybe 2016 (when I was living there) the mobile phone companies got together and said there were something like 25 million active SIM cards in Moscow during business hours but a LOT fewer at night. With its extensive commuter train network I suspect Moscow has more people living outside the city but working inside it than Istanbul. For example, one of my co-workers in northern central Moscow (Mar'ina Roshcha) commuted daily about 90 km from south of not only Moscow city but also Moscow region, in Kaluga region. One that I *know* of.
 

Offline DavidAlfa

  • Super Contributor
  • ***
  • Posts: 6278
  • Country: es
Re: Why offtopic is going to hurt us all...
« Reply #36 on: October 07, 2024, 01:56:28 am »
It's not that the AliExpress search engine is bad; they're intentionally showing all your past searches and visited items to hopefully trigger your interest.
In incognito mode the system can't link you to that history, so it will show purely what you went for.

I'll repeat my Ali "incident" where I clicked a giant dick to see what the comments would be (No homo, just curious).
I was spammed for months with forearm-sized weenies; I couldn't open the app in public, as any simple search like "battery" would interleave those things every now and then.
I learned to not be curious in AliExpress  :)
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Why offtopic is going to hurt us all...
« Reply #37 on: October 07, 2024, 02:07:43 am »
Never try to get ChatGPT to do something original. It simply can't.

You say that like the human creative process doesn't involve large amounts of copying, mashing up, and minor variations. :popcorn:


LLMs assemble plausible sequences of words into grammatically correct sentences. "Plausible" is some vague form of "average" of the words that have previously been found in similar relationship to each other and the question.

It's funny you use a negative tone to describe these systems... yet hedged in such a way that precisely the same definition applies to all human speech. :)

You allude to this, of certain groups, but that seems rather unfair when the net you've cast is far more inclusive.

The real insight seems to be a matter of degree, for which we need some way to measure bullshit; but we can imagine that such a method exists.  Now we find a different truth: humans in general do it when expedient (or indeed necessary for rhetorical purposes, as is often the case for the highlighted groups; more broadly, and relatedly, narcissism finds great value in gaslighting, and by extension so do domains where narcissists feature prominently), while LLMs do it less and less as time goes on -- or at least, less egregiously.

A different perspective I've had is: compare a proper PC to an embedded CPU of comparable processing power, but with far fewer peripherals and constrained memory.  Worse still, one that lacks hardware virtualization / memory mapping.  There are a lot of tasks they are comparable at, but anything that requires enough memory will at the very least be much harder to compose (e.g. more code (and execution time) to bring in external storage, rather than everything being memory mapped to begin with), and at some point will have to turn into a VM (or plural..!) at considerable cost to execution speed -- or, if you strip down a lot of the code, compromises on feature set or accuracy (or even both).  Suppose we have an application that requires some function calculated in a given maximum time; to the extent that computations can be approximated, and for a sufficient degree of restriction, approximations must be employed by the limited system.  In terms of linguistic output, we might consider this bullshitting: an approximation that may or may not be accurate to the underlying (available, if given enough time or access) information.

Comparing average human to LLM vocabulary, clearly the LLM wins, trained on basically every language (including programming and markup).  It's "wider" at this level.  Maybe not to ultimate human ability, but I mean the average human is barely trained in one or two languages (depending on how strict/technical one makes "well trained").  Perhaps, to the extent we can map vocabulary to processing power, this is the case of the embedded system being faster (e.g. compare a typical STM32F4 to a 486 or early Pentium).

Comparing average human to LLM coherency, semantics, synthesis, I would argue LLMs are already superior on several fronts.  A wide variety of simple, shallow, space-filling, or formatting (e.g. as standard news format) tasks are essentially solved problems. At least to the mediocre quality of copy that passes for the average case in industry.

The present limitation on buffer length, and depth of training at that length (I would assume--?), limits semantics in a similar way as low memory limits performance of a computer: without external resources, or time to access them, or some such limitation like that, some approximation is required, and thus error -- in this case, unexpected, inconsistent or incoherent text.  It seems clear from progression in LLM development, that this is the hardest limitation, and is analogous with memory limitations.

And, not that it's an insurmountable problem, you could probe an LLM repeatedly and stitch together a more complete and knowledgeable overall response; but that takes more effort, and quickly becomes not worthwhile, just as running a VM (vs. getting a better CPU/RAM/system overall) does.  Conversely, we could liken the coherence length to various human responses, pathological or otherwise; brains with amnesia for one, but even just lack of attention (literally, as in ADHD) might have similar experiences.  And there too, one can augment their brain with external tools (e.g., keeping notes to remember long-term information), to varying degrees of success.

I suspect the amount of memory they're using for LLMs, let alone processing power, is already more than enough for the task; the problem is we don't know how to solve the problem algorithmically, so it's brute-forcing tensors and training data to do a merely poor job of it.  And that doesn't scale well, or at least AFAIK it can't be directly sized up, but has to be fully retrained (not that that might matter much: if you're increasing model size/scope by an order of magnitude, it's not like you're saving much training effort).

That, and the lack of ways to integrate semantic information -- and of outputs to interface with the wider world (with good reason, but they're already being put into places they shouldn't be, and people will simply try more as time goes on...).

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: tom66

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4541
  • Country: nz
Re: Why offtopic is going to hurt us all...
« Reply #38 on: October 07, 2024, 02:18:11 am »
I first used C in ~1982, when there were only two books on the language.

Similar. I don't recall whether it was 82 or 83. C was very much a minority language on VAX/VMS, which didn't integrate with the native languages (MACRO, Bliss, Fortran, Cobol, Pascal), though we did have it as part of the "Eunice" BSD emulator/compatibility environment. [1] I actually used BCPL more. My first real use of C & Unix was on the Zilog System 8000 which arrived late 83 or early 84.

Quote
In 1988 I kicked the tyres with Objective-C and C++. I rapidly decided C++ was too difficult and awkward to be productive; Objective-C was much better. (Objective-C is Smalltalk -GC +C-like syntax, and is the underpinning of all modern Apple products.)

Kind of similar. I bought MSDOS Zortech C++ [2] (on 9999 floppies) just to use it in an emulator on my Mac (IIcx, I guess). I played with that until Apple had their own CFront-based C++ in MPW.

Quote
In the 90s I occasionally revisited C++, before deciding "if C++ is the answer, you need to revisit your question".

Early C++ was good. Certainly at the CFront 2.0 / ARM stage. It had a lot of simply "better C" features which made it highly desirable. And when Apple picked up NeXT you could use "Objective-C++" and mix and match as you wanted.

Modern C has picked up all or almost all the "better C" parts of C++. And the C++ committee has gone mad.


[1] I once hacked superuser privileges on the university VAX when I noticed that after a Eunice upgrade the ps program (which needed superuser rights) had been installed without disabling the ability to attach a debugger. I set a breakpoint, poked code for setPriv(-1) into memory, ran it, forked a shell. BOOM.

[2] Walter Bright is only four years older than me, but I considered him some kind of a god. I first used his "Empire" game on the VAX at university, then Zortech C++ which turned into Symantec C++ (which the company destroyed with v6, but fortunately Metrowerks came along at just the right moment). It's been shocking the last month or two seeing Walter struggling to understand ARMv8-A and try to port the D compiler to it. Had he been x86-only the last 40 years?
 

Offline shabaz

  • Frequent Contributor
  • **
  • Posts: 445
Re: Why offtopic is going to hurt us all...
« Reply #39 on: October 07, 2024, 03:33:22 am »
Mini case study: I recently solved a problem initially in a non-ChatGPT-oriented way because I sometimes forget to give that a shot.

The problem was non-trivial (not going to write it up in detail, but, long story short, it concerns calculating results from geographic-like data). Google and raw effort took me days to get anywhere, although I did make progress and I coded up a possible solution. My solution consumes several Mbytes of storage and needs an excessive amount of additional storage if I want to increase performance. It works, but it's not pretty.

ChatGPT, on the other hand, pointed me, in the very first query, toward things I would not have found by myself or would have thought too hard to implement. And then, with some experimenting, I realized I could scale resources down immensely, to a couple of hundred kbytes, without losing much precision! It will actually provide better performance than my several-Mbyte solution - thankfully, but also annoyingly!

 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15444
  • Country: fr
Re: Why offtopic is going to hurt us all...
« Reply #40 on: October 07, 2024, 03:34:34 am »
I remember my first hands-on experience with C++ was with Think C on a Mac. That was rather simple though and very far from the C++ language we have now.

Modern C has picked up all or almost all the "better C" parts of C++. And the C++ committee has gone mad.

Yes. The thing (really?) missing in C is namespaces - although explicit namespaces such as those in C++ do bite, IMO. But people often use classes in C++ (at least for embedded dev) as a replacement for 'modules'. And the modules introduced in "recent" C++ are a joke. But developers are used to using classes as modules anyway, so they probably don't care all that much. In C, having neither kind of sucks, as it requires using pretty long identifiers (unless you don't care about name clashes, and thus about anyone else who would use your code but you); see the sketch below.
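As an illustration (the mylib/otherlib names are made up, not any real library), the usual C workaround is to fake a namespace with a prefix on every identifier:

Code: [Select]
#include <stdint.h>

/* In C++: namespace mylib { void init(); }  -> callers write mylib::init().
   In C there are no namespaces, so the prefix goes on everything by hand: */

static uint32_t mylib_counter;

void mylib_init(void)         { mylib_counter = 0; }
void mylib_counter_bump(void) { mylib_counter++; }

/* Another vendor's otherlib_init() can coexist with mylib_init(), but two
   bare init() functions would clash at link time - discipline, not the
   compiler, is what prevents collisions. */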

C++ is an all-you-can-eat buffet. It looks like its only goal is to add all features it can from other languages, in order to silence developers saying "but C++ doesn't have this or that". I wouldn't be surprised if C++ added lifetimes and ownership management in future standard revisions. If it already has, I must have missed it and will have a good laugh.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4541
  • Country: nz
Re: Why offtopic is going to hurt us all...
« Reply #41 on: October 07, 2024, 05:29:34 am »
precisely the same definition applies to all human speech. :)

Not all.

While many people have no doubt used variables named "E", "m", "c" in mathematical formulas, would an LLM ever have arranged them as "E = mc2", along with justification, before the first human did?

I could also propose as examples certain small contributions I've made myself, such as the orc.b instruction in the RISC-V Zbb extension which is, to the best we've been able to research it, the first time an instruction with those semantics (and a family of related but not (yet) ratified variations) had been either proposed or implemented in a computer.
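For the curious, here is a minimal C rendering of orc.b's semantics as I'd summarize them (my sketch, not the spec's pseudocode): every non-zero input byte becomes all-ones, every zero byte stays all-zeros, which gives branch-free NUL-byte detection for strlen-style loops.

Code: [Select]
#include <stdint.h>

// Sketch of orc.b (OR-Combine, byte granularity) on a 64-bit register:
// each output byte is 0xFF if the corresponding input byte is non-zero,
// else 0x00.
uint64_t orc_b(uint64_t x)
{
    uint64_t r = 0;
    for (int i = 0; i < 8; i++) {
        if ((x >> (8 * i)) & 0xFF)          // is byte i non-zero?
            r |= (uint64_t)0xFF << (8 * i); // mark it with all-ones
    }
    return r;
}
// In a strlen loop: ~orc_b(word) is non-zero exactly when the word
// contains a NUL byte, with no per-byte branches.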

Certainly almost all humans do nothing original ever, and the remainder very rarely do or say anything original. But I think for LLMs it's a big fat precise zero.
« Last Edit: October 07, 2024, 10:44:22 am by brucehoult »
 

Offline Andy Chee

  • Super Contributor
  • ***
  • Posts: 1250
  • Country: au
Re: Why offtopic is going to hurt us all...
« Reply #42 on: October 07, 2024, 08:20:30 am »
It's funny you use a negative tone to describe these systems... yet hedged in such a way that precisely the same definition applies to all human speech. :)

Not all human speech.

For example, you never see an LLM mistake the plural of "foot" as "foots" (instead of "feet"), or the plural of "boot" as "beet" (instead of "boots").  The reason is obvious: such mistakes exist in neither adult human vocabulary nor the LLM's.

HOWEVER

Such mistakes are quite common in childhood learning.  Given that such mistakes do not occur in adult language (or LLM output), kids cannot have been exposed to these incorrect words.  Without exposure, how did kids produce these language mistakes in the first place?

It suggests that there is more to human learning than exposure to training material.  For LLMs, exposure to training material is it.
« Last Edit: October 07, 2024, 08:24:36 am by Andy Chee »
 

Online langwadt

  • Super Contributor
  • ***
  • Posts: 4778
  • Country: dk
Re: Why offtopic is going to hurt us all...
« Reply #43 on: October 07, 2024, 09:01:11 am »
It's not that the AliExpress search engine is bad; they're intentionally showing all your past searches and visited items to hopefully trigger your interest.
In incognito mode the system can't link you to that history, so it will show purely what you went for.

and show you a 90% discounted price, max buy one
 

Online ebastler

  • Super Contributor
  • ***
  • Posts: 7157
  • Country: de
Re: Why offtopic is going to hurt us all...
« Reply #44 on: October 07, 2024, 09:01:47 am »
It suggests that there is more to human learning than exposure to training material.  For LLMs, exposure to training material is it.

That argument does not seem logical to me. All human learning is based on outside stimuli (training material) as well. What you described before is related to the way that training material is processed internally -- the mind looking for patterns and "rules" which it can apply to similar problems.

Whether or not an AI model is capable of the same internal processing, looking for patterns it can generalize, is totally independent of the fact that it is exposed to training material. I don't see why that should not be possible, and would expect that it already happens to some extent in today's LLMs. It's an interesting challenge to think up some experiments to prove or disprove that!
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7988
  • Country: nl
  • Current job: ATEX product design
Re: Why offtopic is going to hurt us all...
« Reply #45 on: October 07, 2024, 09:24:05 am »
It suggests that there is more to human learning than exposure to training material.  For LLMs, exposure to training material is it.

That argument does not seem logical to me. All human learning is based on outside stimuli (training material) as well. What you described before is related to the way that training material is processed internally -- the mind looking for patterns and "rules" which it can apply to similar problems.

Whether or not an AI model is capable of the same internal processing, looking for patterns it can generalize, is totally independent of the fact that it is exposed to training material. I don't see why that should not be possible, and would expect that it already happens to some extent in today's LLMs. It's an interesting challenge to think up some experiments to prove or disprove that!
The difference between human learning and machine learning is very clear to me. When you try to feed new information to a human, the human will first evaluate this information, assign a value to it, discriminate. An LLM will treat all information equally. So humans assign value on the forward path; a feedback-based learning LLM, on the feedback path.
This means that humans are affected by different issues when learning.
For example, when a human meets disinformation, they might completely ignore it. Or if they have received misinformation their entire life, they might ignore the truth. The machine will integrate both if the training data contains misinformation.
Also, LLMs suffer from human interaction on the input. You ask Google's Gemini to show you a fireman, and the LLM is actually asked to show you a "fireman and woman of diverse ethnic background". Or it calls you a racist in a thousand words. So when using an LLM coming from a company, you first have to evaluate whether the company is trying to push some agenda on you. Same with people, BTW. There are people who will consistently give you misinformation in their responses.
 
The following users thanked this post: SteveThackery

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 20770
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why offtopic is going to hurt us all...
« Reply #46 on: October 07, 2024, 09:33:32 am »
I first used C in ~1982, when there were only two books on the language.

Similar. I don't recall whether it was 82 or 83. C was very much a minority language on VAX/VMS, which didn't integrate with the native languages (MACRO, Bliss, Fortran, Cobol, Pascal), though we did have it as part of the "Eunice" BSD emulator/compatibility environment. [1] I actually used BCPL more. My first real use of C & Unix was on the Zilog System 8000 which arrived late 83 or early 84.

Quote
In 1988 I kicked the tyres with Objective-C and C++. I rapidly decided C++ was too difficult and awkward to be productive; Objective-C was much better. (Objective-C is Smalltalk -GC +C-like syntax, and is the underpinning of all modern Apple products.)

Kind of similar. I bought MSDOS Zortech C++ [2] (on 9999 floppies) just to use it in an emulator on my Mac (IIcx, I guess). I played with that until Apple had their own CFront-based C++ in MPW.

Quote
In the 90s I occasionally revisited C++, before deciding "if C++ is the answer, you need to revisit your question".

Early C++ was good. Certainly at the CFront 2.0 / ARM stage. It had a lot of simply "better C" features which made it highly desirable. And when Apple picked up NeXT you could use "Objective-C++" and mix and match as you wanted.

Before using C++/Objective-C, I had kicked the tyres of Apple Smalltalk on a FatMac (glacially slow!). The quality, breadth and depth of the Smalltalk standard libraries were a stunning revelation. They just worked, everybody used them, and multiple projects were easily "composed" into a single project where everything played well together.

Contrast that with C++: how many variants of String and Boolean were there? One per project, all incompatible. Container classes were a dream beyond imagining: roll your own, again and again. Keeping track of complex data structures as they were used in a codebase was impractical, except (potentially) with large amounts of copying. As for adding libraries from multiple vendors in a single project, that was a crapshoot and significant timesink.

Objective-C still had the GC problem, but it came with a set of usable standard libraries. Not in the Smalltalk class, but still a major step up from C. Mixing them with C++ is not something I have ever considered.

A decade later C++ was still deficient, and even the C++ committee couldn't understand what they had created.

The next language to have decent standard libraries and to enable libraries from multiple vendors to be used together without issues was, of course, Java.

Quote
Modern C has picked up all or almost all the "better C" parts of C++. And the C++ committee has gone mad.

C isn't much better, even though they have (a quarter of a century late) added a memory model.

Remind me, is it possible to "cast away constness"? The ability to do that enables very subtle intermittent errors when code from multiple sources is used. OTOH if you can't do it, then some programs become more or less impossible, e.g. debuggers.
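It is possible, in both languages; a minimal sketch of the trap (the names here are made up for illustration):

Code: [Select]
// Casting away constness compiles in both C and C++, but writing through
// the cast is undefined behaviour if the object really was declared const.
const int limit = 10;   // the compiler may place this in read-only storage

void sneakyWrite(const int *p)
{
    int *writable = const_cast<int *>(p);   // in C: (int *)p
    *writable = 99;   // UB when p points at a genuinely const object:
                      // may crash, may silently "work", may differ between
                      // builds - exactly the subtle intermittent error above
}

int main()
{
    sneakyWrite(&limit);   // legal-looking call, latent UB inside
    return 0;
}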
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Andy Chee

  • Super Contributor
  • ***
  • Posts: 1250
  • Country: au
Re: Why offtopic is going to hurt us all...
« Reply #47 on: October 07, 2024, 09:33:38 am »
It suggests that there is more to human learning than exposure to training material.  For LLMs, exposure to training material is it.

That argument does not seem logical to me. All human learning is based on outside stimuli (training material) as well.
The logic is simple.  There is no specific outside stimulus of "foots" or "beet", so these words were never learnt by replicating outside stimuli.  A different learning process must have been involved, one that LLMs are currently incapable of.

That said, it might be a good thing that LLMs are not capable of this alternate learning path, as it is implicated in how biases are learnt, e.g. beliefs in conspiracy theories or religious cults.
« Last Edit: October 07, 2024, 09:40:28 am by Andy Chee »
 

Online ebastler

  • Super Contributor
  • ***
  • Posts: 7157
  • Country: de
Re: Why offtopic is going to hurt us all...
« Reply #48 on: October 07, 2024, 09:36:46 am »
The difference between human learning and machine learning is very clear to me. When you try to feed new information to a human, the human will first evaluate this information, assign a value to it, discriminate. An LLM will treat all information equally. So humans assign value on the forward path; a feedback-based learning LLM, on the feedback path.
This means that humans are affected by different issues when learning.
For example, when a human meets disinformation, they might completely ignore it. Or if they have received misinformation their entire life, they might ignore the truth. The machine will integrate both if the training data contains misinformation.

I think you are underestimating LLMs. In my experience they do place the information they have gathered in the context where it occurred, and can make proper use of that context when working with the information. They don't "treat all information equally".
 

Offline tom66

  • Super Contributor
  • ***
  • Posts: 7054
  • Country: gb
  • Electronics Hobbyist & FPGA/Embedded Systems EE
Re: Why offtopic is going to hurt us all...
« Reply #49 on: October 07, 2024, 09:38:49 am »
I would not use an LLM for anything that involved life safety or where the failure of that LLM's output led to significant economic harm to my employer/clients.   

Good. But not relevant to whether LLMs hallucinate bullshit/rubbish.

But that won't stop other people/companies. Even without LLMs, they hide scrutiny of their product's output under the veil of commercial secrecy. LLMs fit that behaviour perfectly. (Examples: US courts "should this accused/convicted person be put in jail")

LLMs should be avoided until they can indicate why they emitted a result. That's been a problem for 40 years, and is still an "active research topic". Igor Aleksander's 1983 WISARD, effectively the forerunner of today's LLMs, demonstrated a key property of modern LLMs: you didn't and couldn't predict/understand the result it would produce, and they can't indicate their "reasoning". WISARD correctly distinguished between cars and tanks in the lab, but failed dismally when taken to Lüneburg Heath in north Germany. Eventually they worked out the training set was tanks under grey skies and car adverts under sunny skies.

Different fonts, anyone?

LLMs give The Answer, and the lazy/ignorant won't question that.

It's possible to test the output of an LLM to the point where you can have very high confidence in the result.  This is what researchers are actively studying with the likes of o1, for instance.  Any fuzzy model will not have a guaranteed 'truthiness' to it, but the larger the dataset and the larger the test, the more confidence you can gain in the accuracy of the model.

Once again, comparing what a neural network was capable of in 1983 to what it is capable of now is just wrong and misleading.  We've been over this.

And, LLMs can now show a reasoning path, which is unique to neural networks as far as I am aware.

Conventional search gives a Set (or Bag) of related answers, and you must examine and select them.

Conventional search gives adversarial agents the opportunity to alter the input data (e.g. keyword hacking) to make their results more prominent.  Conventional search cannot distinguish between truth and fiction either.  Any user of GPT or search would do well to perform a sanity check - e.g. by checking against the competing engine, or just a general feeling of "sounds-right-ism".

I first used C in ~1982, when there were only two books on the language.

In 1988 I kicked the tyres with Objective-C and C++. I rapidly decided C++ was too difficult and awkward to be productive; Objective-C was much better. (Objective-C is Smalltalk -GC +C-like syntax, and is the underpinning of all modern Apple products.)

In the 90s I occasionally revisited C++, before deciding "if C++ is the answer, you need to revisit your question".

IMNSHO, use C for low-level stuff, and a modern productive language for everything else. Pleasingly, many influential US bodies are making the same point, albeit more circumspectly.

Good for you, but since we have about 200,000 SLoC of C++ in our product already, I have to write other code in C++.

I think C++ is maligned as a language.  It is not perfect, but I much prefer it to competitors like Java.  And it is considerably better than pure C for complex applications; you need only look at something like gcc's source, which has to manually implement all sorts of data structures, memory management, refcounting, mutexing etc.  This is necessary of course because gcc is self-hosting, but it shows how cumbersome complex programs that manage data can be when written in C.  With C++ you can do something like this:

Code: [Select]
#include <list>

void someOtherFunction(std::list<int> &myList2);  // declare before first use

std::list<int> someFunction()
{
  std::list<int> myList;
  myList.push_back(7);
  myList.push_back(8);
  someOtherFunction(myList);
  // myList now contains 7, 8, 9. If it were not returned, it (and all of its
  // inner children) would be destroyed automatically when this function ends;
  // returning it is safe, and it won't be freed until its new owner destroys it:
  return myList;
}

void someOtherFunction(std::list<int> &myList2)
{
   myList2.push_back(9);
   // myList2 is just a reference; the caller manages the list's lifetime,
   // and trying to 'delete' it here would be a compiler error (not a pointer)
}


...and know that the memory safety of your list is guaranteed.  The list will be freed when it is no longer used.  No risk of double freeing or pointer errors.  No need to manually refcount.
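And calling it is just as safe; a minimal caller (my sketch, building on the snippet above):

Code: [Select]
#include <iostream>
#include <list>

std::list<int> someFunction();   // from the snippet above

int main()
{
    // The returned list is moved (or elided) into 'values'; no manual
    // free is ever needed, and the elements are destroyed with 'values'.
    std::list<int> values = someFunction();
    for (int v : values)
        std::cout << v << '\n';   // prints 7, 8, 9
    return 0;
}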

I'll assume you're hinting towards Rust there.  I have not used Rust enough to be sure of it, but I get the feeling that it could eventually replace C++.  I've used Objective-C and didn't much like it, because it differed too much from the conventional idioms of a programming language (but maybe I'm just awkward).
 

