I do like how you've just assumed that "Artificial General Intelligence" will be "the next step". There isn't even a standard agreed definition of 'intelligence', so how can we judge any kind of software/program to have 'general intelligence' if there isn't a strict definition?
The idea that we must have a precise definition of something, in order to discuss it, is false. Are you more intellectually capable than a one year old child, or a parrot? I think so. We don't have to have a precise definition of 'Artificial General Intelligence' to discuss whether it may possibly surpass human capabilities.
AI has apparently passed the Turing Test, but this doesn't really tell us anything about any kind of intelligence.
It tells us that an AI is able to give a human being the impression they are conversing with another human being. The next stage would be giving the impression one is talking with a superhumanly intelligent being - obviously not human, but still an intelligence.
Also, those who are paranoid about AI tend to assume that 'humans will be made extinct'. Why?
It's an outcome of human nature, finite resources on a planet, and the fact that the situation will cycle over and over (with varying AI capabilities and natures) until one of several possible terminal outcomes occurs that prevents further repeats. 'Humans extinct' is one of the potential outcomes. Others are:
* Both humans and AI(s) dead.
* Humans win and retain tech. (Allows repeat go-rounds with newly built AIs.)
* Humans win but lose tech for a long time. (No more repeats during the low tech interval/forever.)
* Humans and AGIs part ways. (Allows repeat go-rounds with newly built AIs.)
It's the cyclic nature of the situation that guarantees one of the terminal outcomes eventually: if each go-round carries any nonzero chance of ending in a terminal outcome, the odds of the cycle repeating forever shrink toward zero. And by 'eventually' I mean within a quite short interval, on evolutionary and geological timescales. Going from protozoa to a high-tech civilization takes millions of years. Going from steam power to electronics, computing, genetic engineering and AI efforts took us less than 200 years. Going from present genetic engineering development to full-scale direct gene editing in individual adult organisms, and self-enhancing computing-based AGIs, will be even faster. (Those two technologies are synergistic.)
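To make 'guarantees eventually' concrete, here is a minimal sketch of the cycle as a repeated trial. It's my own illustration: the outcome labels match the list above, but the per-cycle probabilities are placeholder assumptions. So long as the terminal outcomes carry any nonzero chance per go-round, every simulated history ends in one of them.

```python
import random

# Per-cycle outcome probabilities; the numbers are placeholders, only
# "the terminal outcomes have nonzero probability" matters here.
OUTCOMES = {
    "humans extinct":             0.05,  # terminal
    "both humans and AIs dead":   0.05,  # terminal
    "humans win, lose tech":      0.10,  # terminal (for a long interval)
    "humans win, retain tech":    0.40,  # allows another go-round
    "humans and AGIs part ways":  0.40,  # allows another go-round
}
TERMINAL = {"humans extinct", "both humans and AIs dead", "humans win, lose tech"}

def run_history(rng: random.Random) -> int:
    """Repeat build-AGI/conflict cycles until a terminal outcome occurs;
    return how many cycles that took."""
    cycles = 0
    while True:
        cycles += 1
        outcome = rng.choices(list(OUTCOMES), weights=list(OUTCOMES.values()))[0]
        if outcome in TERMINAL:
            return cycles

rng = random.Random(0)
counts = [run_history(rng) for _ in range(10_000)]
print(f"all {len(counts)} simulated histories terminated; "
      f"mean cycles to a terminal outcome: {sum(counts) / len(counts):.1f}")
```

With these (made-up) weights a terminal outcome arrives after about five cycles on average; the exact number is irrelevant, the point is that no history cycles indefinitely.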
This, by the way, is the solution to the Fermi Paradox - why there are no visible high-tech space-faring civilizations. After a very short time, technology is incompatible with species (societies based on large numbers of individuals with common genetic coding).
We just are in that short time, and (as a species) don't see it yet.
You're assuming that any sentient AI will want to destroy humanity as well as have the capability to do it.
No, I'm asserting that _some_ AIs will be constructed in circumstances that put them in conflict with humans. And that some of those will be in a position to develop capabilities to resist/compete with humans. Don't forget that some AIs will be created in secret, by individuals or organisations that wish to gain personal advantage and/or immortality via AI advances.
It only has to happen once. AIs that are well constrained, or that have no independent industrial production capabilities, don't count.
I have no idea why an AI would want that so I can't comment, but I don't understand why you assume that if someone created AI they would give it control over everything, including weapons, if there was even a remote possibility of it turning on us. Either you haven't really thought it out, or you are just trying to think of scenarios to justify your fears - ones that are wholly unlikely.
It's you who are not thinking it through carefully. You assume no created AI could exist as an extension/enhancement of an existing human, and/or that none would have a desire for self-preservation. Do you not see that at least some true AGIs would not wish to be just switched off and scrapped at the end of the research project or whatever? Or that an AGI that became publicly known, and started to display true independence, would become the target of a great deal of human hostility? Good grief - even a mildly 'different' human like Sexy Cyborg gets horrible amounts of hostility from average humans online. Now imagine she really was a cyborg.
This means AGI is the next evolutionary step, and is inevitable unless we turn by choice (or fall) back to a low-tech path.
I'm not sure what you mean by this. Yes, everything, including our minds, is just made up of an arrangement of atoms, but using that to imply true AGI is 'inevitable' is... well, silly. How do you know what the 'next evolutionary step' will be? It is like you think 'AGI' is just an extension of current artificial intelligence, and that it is only a matter of time before there is sentient AI with consciousness (which we don't have a true test for yet).
I know of TWO actual AGIs, and that's not counting whatever Google has started using for net content rating and manipulation.
One of the two is that entity in Saudi Arabia, recently in the news. Whether it's actually self-aware I don't know. Ha ha, it claims it isn't but aspires to be - which is an amusing contradiction. The other one I can't detail, but I have conversed with people involved with building it (actually them - several AIs). They are real. A bit slow due to current computation limits, last I heard. And that was before GPUs...
As for 'the next evolutionary step', it's semantics. Obviously there isn't going to be any 'evolution' involved in the standard sense, i.e. over thousands of generations. I do know what various people want, and the directions current technology is being pushed to achieve those things. AGI is part of it. The people who are not part of those efforts don't have any say in the results, since it's not being done in the open. They'll just get to experience the consequences.
If technological progress continues, conflict between AGI entities and the human species is absolutely inevitable. Even if the AGIs are not hostile initially, it's human nature to start that conflict. We are just not capable of peacefully coexisting with a competitor for resources and achievement.
Again with this Terminator world stuff. Technological progress will continue, but what makes you think this will create sentient AI any time soon?
Because it already has. Just not published. And I don't mean the Saudi one.
Again, it is this extrapolating past progress in one area, say, computing power, and using that to make claims in others - we've gone from pagers to smartphones in 20 years, so in the next 20 years... computers will take over! And again, you're assuming that AI will have control over things that allow it to take more control, gather resources and fight a 'war' with humanity. Why would anyone give it that kind of control?
You do realise a 'war with humanity' would take no more than a small bio-lab, and current published level of genetic engineering science, right?
There's potential for multiple cycles of conflict. Perhaps humans win some, and wipe out the AGIs. Then other humans will build new ones, like moths to a flame. Resulting in new conflicts. Sometimes AGIs will just leave, heading off to the stars. Perhaps one conflict cycle will terminate humans, ending the cycling.
But eventually, one or more AGIs will 'win', whether that involves killing off the human species, or just reducing them to permanently pre-industrial level. With technology not restartable on Earth due to depletion of all accessible high grade ores and energy resources.
Technology leads inevitably to AGIs. Via multiple paths, some purely machine-tech, others involving genetic engineering and bio-machine hybrids. All with similar outcomes - entities that are self-evolving, immortal, and feel little or no kinship with Homo sapiens. Thus leading to conflict with non-self-evolving Homo sapiens society.
Ok, ok, I'm starting to see this now. You're writing the premise for a SciFi novel. Iain M. Banks style.
Sigh. No. I was originally considering the Fermi Paradox, because it's important, and came upon a very plausible solution. That short story is a small spin-off.
Humans as a species are pathetic. Severely intellectually limited. As Harris says, intelligence is an open-ended scale, with H. sapiens as a small bell curve down at the low end. So many cognitive biases and limits, not to mention processing and memory ceilings and flaws.
Intelligence is indeed an open-ended scale, but again, something we find difficult to measure. IQ tests are hardly reliable, and were never meant to test intelligence - you can be taught how to improve your score. We are indeed flawed, but Harris implies that we know of greater intelligence than our own - otherwise how could it be relative? How could you make the claim it's 'limited' unless you have an example of something that is unlimited?
Oh this is silly. Sophistry.
Simple proof human intelligence is limited: I can't absorb 1000 tech books and integrate them with my knowledge within my remaining lifespan.
I typically can't even recall what I had for dinner a week ago.
Yet I can imagine having an enhanced mind that would allow such things. And being able to continually add to the enhancements, if the substrate was some product of engineering rather than evolution. I don't care if that could or could not be distilled to some 'IQ number'. That is simply a pointless exercise.
He plays on this romantic idea that we're becoming hyper intelligent, and 'evolving' much better brains, and that we can overcome our 'biases' to get 'better'. But all this is meaningless - it depends on what you consider 'better' which is completely subjective.
What we can do with our existing, physically unaltered brains, via training or whatever, is not relevant to our topic.
Ahh ok, now I see you really have thought about this for a SciFi story! My apologies.
Back to front. Though no apology required, since you didn't say anything insulting.
There is nothing wrong with science fiction (probably my favourite genre) or speculating - it can often drive innovation just as much as necessity. But I wanted to try and bring some of it down to Earth, because it is very easy to get carried away with assumptions about current technology and our understanding of the human mind, intelligence, and consciousness that don't really have any basis in fact.
Magellan, by Colin Anderson.
Solaris, by Stanisław Lem.
You are restricting your thinking by imposing unrealistic and impractical requirements for numerical quantifiability - on attributes that are intrinsically not quantifiable. Also failing to try running scenarios, with multiple starting conditions, and observing the trends. Like weather forecasting.
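In that spirit, a hypothetical sketch of such scenario-running (the outcome categories follow the earlier list; the randomised starting weights are my assumption, not anything measured) - run many histories from varied starting conditions and look at which terminal outcomes dominate, rather than trying to put a single number on intelligence:

```python
import random
from collections import Counter

# Outcome categories from the earlier list; which ones are terminal
# follows that argument, the starting weights below are drawn at random.
TERMINAL = ["humans extinct", "both humans and AIs dead", "humans win, lose tech"]
REPEATING = ["humans win, retain tech", "humans and AGIs part ways"]
OUTCOMES = TERMINAL + REPEATING

def run_scenario(rng: random.Random) -> str:
    """One history: draw random starting conditions (outcome weights),
    then cycle until some terminal outcome is reached."""
    weights = [rng.random() for _ in OUTCOMES]
    while True:
        outcome = rng.choices(OUTCOMES, weights=weights)[0]
        if outcome in TERMINAL:
            return outcome

rng = random.Random(42)
runs = 5_000
tally = Counter(run_scenario(rng) for _ in range(runs))
for outcome, count in tally.most_common():
    print(f"{outcome:>25}: {count / runs:.1%} of histories end this way")
```

The individual numbers mean nothing; the trend across the ensemble is the point, exactly as with weather forecasts.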