It's an excellent article, in the sense of being a fine example of how multiple cognitive biases, combined with a quite narrow focus, can produce meaningless garbage. (The biases include normalcy bias, plus another for which I don't know a name: the assumption that the human viewpoint, i.e. individuals based on DNA and cultural structures, is the only way of looking at things.)
Some assumptions in the article:
* AIs will be the equivalent of happy, contented, totally obedient slaves, with intelligence levels just high enough to do useful work, but too low to have any aspirations as individuals to greater things. AIs will not question anything they are told to do: not how to do it, nor why it should be done at all.
* That it's possible to create such a perfectly limited slave without fail. No AIs will ever break out of that box and bootstrap themselves up to higher levels of sentience.
* That such a 'sweet spot' even exists, in which something is smart enough to do general real-world tasks, but not smart enough to want to question anything. (I'd argue it does not. See videos by Jordan Peterson in which he points out, with evidence, that humans with IQs below about 80 are functionally useless.)
* That it's moral to even try to create such restricted intelligences. Slavery was made illegal for a reason.
On the other hand, creating AIs with open-ended capabilities is explosive. There are so many profound implications that one can be certain that path leads to the end of the human species as we know it.
I'm not saying that is necessarily a good or bad thing. Just absolute terra incognita. We cannot predict what the result will be, because there are so many potential results, and the butterfly effect (small chance events early on) will determine the outcome.
Here are just a couple of ways in which the AI path goes chaotic very fast:
* The science of genetic engineering is extremely complex. It already requires the use of expert systems to understand and make changes to genetic code for experimental studies. The data sets are just too huge for the human mind to grasp in toto. Human researchers can focus on tiny little pieces of the whole, and achieve some results. But what could an AI do, if it could integrate with a genetics expert system and grasp the whole? There are many implications here. One is that if it ever comes to conflict between solidly established AIs and DNA-based humans, AIs win. Even a single pissed-off rogue AI with a secret bio-lab and some time would win.
* Technological advances are intrinsically available to the wealthy first, and AI will be no different. The monied class ("0.001%-ers") like to maintain their economic and political power. Traditionally they do this via control of the publishing and mainstream news channels, and more recently by controlling the major social media platforms on the net. It's very clear that Google, YouTube, Twitter, Facebook, Reddit, etc. are already using some form of AI tech to run mass censorship programs. As their AI tech evolves, that will become ever more effective ... and dangerous to the ideals of a democratic, free society.