Given the stakes, if you already accept the expected utility maximization decision principle, becoming convinced that there is even a nontrivial probability of this happening is enough. The paper seems adequate for snapping the reader out of the conviction that dangerous AI is absurd or impossible.
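To spell out the calculation implicit here (a minimal sketch; $p$ and $U_{\text{extinction}}$ are hypothetical placeholders, not figures from the paper): under expected utility maximization, the expected disutility of dismissing the risk is roughly

\[
\mathbb{E}[\text{loss}] \approx p \cdot U_{\text{extinction}},
\]

and since $U_{\text{extinction}}$ is astronomically large, even a small but nontrivial $p$ keeps this term dominant in the decision.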
The stakes on the other side of the equation are also the survival of the human race.
Refraining from developing AI unless we can formally prove it is safe may also lead to extinction, if it reduces our ability to cope with other existential threats.
“Enough” is ambiguous; your point is true but doesn’t affect Vladimir’s if he meant “enough to justify devoting a large amount of your attention (given the current distribution of allocated attention) to the risk of UFAI hard takeoff”.