If you think we have reached some final apex, I am here to tell you we have only just begun, and we will progress much further.
Today's AI is still in such infancy that what we have now is like an ant compared to what's coming down the line.
I once had the opportunity to meet and attend a talk by the famed physicist Arno Penzias, who by that time was older and more of a scientific philosopher, and who had taken up the topic of artificial intelligence. This was in the mid-80s. He said something interesting that I've never forgotten. He brought up a supposed statement by Albert Einstein regarding computers, apparently made in the ENIAC days: that in the future there would be dozens or even hundreds of computers, that each would fit in a single room, and that they would control almost every aspect of our lives, like some benevolent (or not) overlords. Obviously this prediction was wrong in two ways: computers got a lot smaller and a lot faster than predicted, but they didn't control our lives, at least not at that point.
The point of all that, he said, was that when it comes to technological advances, we tend to underestimate how far the technology will advance while simultaneously overestimating the effect those advances will have.
If I compare using Microsoft Word on a PC in 1991 with using Office 365 on a PC in 2021, the technological differences are astounding. The modern PC has a thousand times the memory and I don't know how many times the processing power. Office 365 is connected online and has a myriad of features that were barely even hinted at in 1991. And yet the way I use it most has hardly changed at all.
With AI-type applications, there's a similar phenomenon that I think can be explained by repurposing Penzias' observation. With AI, we may underestimate the advances in the technology itself (speed, memory, neural networks, and so forth) while simultaneously overestimating how effective those advances will be at solving our problems. The advances may be stupendous, but what is still unknown is how hard the task actually is. Computers only fairly recently (20 years ago, OK, recent by some standards) reached the point of being equivalent to humans at chess, which is really a game much better suited to a CPU than to a human brain. Even that was only accomplished through a very expensive, multi-year effort, and it involved a computer the size of a small car that drew far more than a few hundred watts. Nobody knows how hard it might be to make a car fully self-driving (FSD) using only camera images. I suspect it is going to require a lot more specific programming and a lot less machine learning, but we'll see. Quite a few companies that can supposedly hire the best talent have been working on this for years, and frankly I don't think any of them have made real progress, just show-time crap that is more dangerous than helpful.
Consider the chat-bots that companies use for customer service. Can any of them pass Turing muster? Can they go three back-and-forth exchanges without the user realizing they are talking to a bot? Isn't there a huge financial incentive to develop a chat-bot that could replace all the people in a call center? The gigabytes and the gigaflops are there, the money is there, but no solution has been found so far.