Author Topic: AI and GOFAI: where are we, how we got here, are we where we think we are?


Offline tggzzzTopic starter

  • Super Contributor
  • ***
  • Posts: 20144
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
https://www.technologyreview.com/2024/07/10/1094475/what-is-artificial-intelligence-ai-definitive-guide/

An interesting article about AI and GOFAI. The article doesn't have a position to push; it notes both the puffery and the doomsaying without falling for either. The overall tenor is more "this is where we are, how we got here, are we where we think we are?". Overall: long and wordy, but worth it.

Mentions a radio panel discussion in 1952 in which Turing offered his opinions, and notes that we are still wrestling with the concepts discussed.

Has quotes like
  • “For the life of me, I don’t understand why the industry is trying to fulfill the Turing test,” Skuler says. “Why is it in the best interest of humanity for us to develop technology whose goal is to dupe us?”
  • ...is betting that people can form relationships with machines that present as machines. “Just like we have the ability to build a real relationship with a dog,” he says. “Dogs provide a lot of joy for people. They provide companionship. People love their dog—but they never confuse it to be a human.”
  • It’s no surprise that “sparks of AGI” has also become a byword for over-the-top buzz. “I think they got carried away,” says Marcus, speaking about the Microsoft team. “They got excited, like ‘Hey, we found something! This is amazing!’ They didn’t vet it with the scientific community.” Bender refers to the Sparks paper as a “fan fiction novella.”
  • Margaret Boden was asked if she thought there were any limits that would prevent computers (or “tin cans,” as she called them) from doing what humans can do. “I certainly don’t think there’s anything in principle,” she said. “Because to deny that is to say that [human thinking] happens by magic, and I don’t believe that it happens by magic.”

The article doesn't mention "plausible word salad" or "bullshitter", nor whether that passes the Turing test because that's what too much wetware emits.
 
The following users thanked this post: globoy

Offline Halcyon

  • Global Moderator
  • *****
  • Posts: 5819
  • Country: au
I'm honestly so sick of hearing about "AI" in everything. Yes, it has its place, but you might as well take anything labelled "Crypto" 10 years ago, relabel it "AI", and have the same impact.
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6822
  • Country: nl
Transformers aren't neural networks in any biological sense, which is part of the problem. They are hard to adapt to online learning; not necessarily impossible, but hard. Similarly, deep thought is very hard. A lot of the things which brains can handle with a fairly uniform soup of circuitry need much more structure in transformers ... there is no emergence, just tinkering in a very fragile design space.
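To make the online-learning point concrete, here's a minimal toy sketch (my own, with placeholder dimensions and random data, nothing from the article): the naive recipe of one gradient step per arriving example nudges every weight toward the newest example, with nothing in the architecture protecting what earlier data taught it, hence catastrophic forgetting.

Code: [Select]
# Toy sketch of naive "online learning" on a transformer.
# All sizes and data here are arbitrary placeholders.
import torch
import torch.nn as nn

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(32, 2)  # toy 2-class readout
opt = torch.optim.SGD(
    list(model.parameters()) + list(head.parameters()), lr=1e-2
)
loss_fn = nn.CrossEntropyLoss()

def online_step(x, y):
    """One gradient step per arriving example: the naive online recipe."""
    opt.zero_grad()
    logits = head(model(x).mean(dim=1))  # pool over the sequence dimension
    loss = loss_fn(logits, y)
    loss.backward()
    opt.step()
    return loss.item()

# A stream of single examples: each step rewrites ALL weights toward the
# newest example; nothing here consolidates or protects earlier learning.
for t in range(5):
    x = torch.randn(1, 10, 32)       # (batch, seq, d_model)
    y = torch.randint(0, 2, (1,))    # random label
    print(f"step {t}: loss {online_step(x, y):.3f}")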

AGI is just the more politically correct term for Human-Level AI. The ML tinkerers hate HLAI as a term, though, so they use AGI instead.
 

Offline globoy

  • Regular Contributor
  • *
  • Posts: 213
  • Country: us
In a related vein, Ars Technica just posted an article discussing some of the theories of consciousness and how AI systems would have to evolve, if possible at all, to exhibit consciousness under those theories.

https://arstechnica.com/science/2024/07/could-ais-become-conscious-right-now-we-have-no-way-to-tell/
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9102
  • Country: gb
Quote from: globoy
In a related vein, Ars Technica just posted an article discussing some of the theories of consciousness and how AI systems would have to evolve, if possible at all, to exhibit consciousness under those theories.
The problem with theories of consciousness is that they can't define what they are trying to theorise about. A lot of prominent people studying the brain, like Anil Seth, completely dodge the question of what constitutes consciousness and just get on with studying how a brain actually works. That seems evasive, but I understand it. If you are looking at AI and consciousness, that inability to define the thing makes any analysis of it in an AI context a farce. There is a similar problem with other mental concepts, like free will. Any discussion about free will should be 90% defining what you mean by it. Most people try to discuss it from an unshared preconception of what they mean, which results in disagreements that probably wouldn't occur if they started off by tying down their definition.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9102
  • Country: gb
Quote from: Marco
AGI is just the more politically correct term for Human-Level AI. The ML tinkerers hate HLAI as a term, though, so they use AGI instead.
The interesting thing about the term artificial general intelligence is that it means the opposite of what it says. General intelligence is the social sciences' way of avoiding the term IQ, which carries a certain stigma, but that is what they mean by it. The justification for the term is that it covers good memory skills, good logical deduction, good linguistic skills and so on. Those are all essentially pattern matching with a well-functioning memory system; that is, things AI is starting to do moderately well. However, the totality of what we consider intelligence in humans is a lot more than IQ. IQ is basically reasoning. Many high-IQ people with no common sense can reason themselves into all sorts of strange places. Some of the most self-destructive people have a high IQ. At least three elements - IQ, industriousness, and what we might call common sense or wisdom - are needed for any meaningful concept of generalised intelligent behaviour.
 

