Alright, I added the word “(aligned)” to the title, although I don’t think it changes the point I’m making much. My argument is that we will have to turn the aligned ASI on in (somewhat) full knowledge of what will then happen. The structure is: if ASI is inevitable and the first ASI will take over society (call this claim A), then we must actively work on achieving A. And of course it would be better, as a matter of self-interest, to have that ASI aligned by that point. But maybe you can think of a better title.
The best-case scenario I outlined was admittedly something of a reach, because who knows what concrete steps the ASI would actually take. But I think one of its earliest sub-goals would be to increase its own “intelligence” (computing power). Whether it would try to aggressively hack other devices is a different question, but I think such a precautionary step would be warranted if a misaligned-AI apocalypse were imminent.
Another question is to what degree an aligned ASI would try to seize political power. If it doesn’t proactively do so, would it at least aid governments in decision-making? If it does proactively seek power, would it return some of that power to human parliaments to preserve some degree of human autonomy? In any case, we need to ask ourselves how autonomous we would still be at that point, or whether parliamentary decision-making would be only a facade giving us an illusion of autonomy.