Third article TL;DR: A superintelligent singleton is the most obvious solution for preventing all non-AI risks.
However, the main problem is that such a singleton carries risks of its own: risks of creating it (unfriendly AI), risks of implementing it (the AI would probably have to fight a war for global domination against other AIs, nuclear nation-states, etc.), and risks of singleton failure (if it halts, it halts forever).
As a result, we only move risks from one side of the equation to the other, and even replace known risks with unknown ones.
I think other solutions are possible, in which many agents unite into some kind of police force to monitor each other, as David Brin suggested in his transparent society. Such a police force might consist not of citizens but of AIs.
Yes, good points. As for “As a result, we only move risks from one side of the equation to the other, and even replace known risks with unknown ones,” another way to put the paper’s thesis is this: insofar as the threat of unilateralism becomes widespread, thus requiring a centralized surveillance apparatus, solving the control problem is that much more important! I.e., it’s an argument for why MIRI’s work matters.
I think unilateralist biological risks will soon be here. I modeled their development in my unpublished article about multipandemics, comparing their numbers with the historical numbers of computer viruses. There was about 1 new virus a year at the beginning of the 1980s, 1,000 a year by 1990, millions a year in the 2000s, and millions of new malware samples a day in the 2010s, according to a report on CNN. But the peak of damage was in the 1990s, when viruses were more destructive, often aimed at data deletion, and few antiviruses were available. Thus it takes around 10 years to move from the technical possibility of creating a virus at home to a global multipandemic.
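To make the extrapolation behind this comparison concrete, here is a minimal sketch of the kind of log-linear fit it implies. The data points are my rough paraphrase of the figures cited above (the exact years and counts are assumptions, not measured values), and the fit is only illustrative of the growth-rate argument, not of the unpublished model itself.

```python
# Illustrative log-linear fit to the rough malware counts cited above,
# showing the kind of growth-rate extrapolation the comparison relies on.
# Data points are approximate paraphrases of the comment's figures.
import numpy as np

# (year, approximate new malware samples per year) -- assumed, rough values
years  = np.array([1982.0, 1990.0, 2005.0, 2015.0])
counts = np.array([1e0, 1e3, 1e6, 3.65e8])  # "millions a day" ~ 3.65e8/year

# Fit log10(count) = slope * year + intercept
slope, intercept = np.polyfit(years, np.log10(counts), 1)
doubling_time = np.log10(2) / slope

print(f"growth: ~{slope:.2f} orders of magnitude per year")
print(f"implied doubling time: ~{doubling_time:.1f} years")
```

On these assumed numbers the fit gives roughly a quarter of an order of magnitude per year, i.e. a doubling time on the order of a year, which is the sort of curve that would take home-made biological agents from isolated incidents to a multipandemic within about a decade.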