Looking at the page of Facing the Singularity, I just realized again how wrong it is from the perspective of convincing people who are not already inclined to believe that stuff. The header, the title, the text... wrong, wrong, wrong!
Facing the Singularity
The advent of an advanced optimization process and its global consequences
Sometime this century, machines will surpass human levels of intelligence and ability. This event — the “Singularity” — will be the most important event in our history, and navigating it wisely will be the most important thing we can ever do.
The speed of technological progress suggests a non-negligible probability of the invention of advanced general purpose optimization processes, sometime this century, exhibiting many features of general intelligence as envisioned by the proponents of strong AI (artificial intelligence that matches or exceeds human intelligence) while lacking other important characteristics.
This paper will give a rough overview of 1) the expected power of such optimization processes; 2) the lack of important characteristics intuitively associated with intelligent agents, like the consideration of human values in optimizing the environment; 3) associated negative consequences and their expected scale; 4) the importance of research in preparation for such a possibility; 5) a bibliography of advanced supplementary material.
I see the problem you’re pointing out, but I disagree with your solution. If the title and intro are that technical, then it’s not off-putting to skeptics, it’s just… boring.
Unless you’re being sarcastic?