Ten years ago I expressed similar misgivings. Such scenarios, no matter how ‘logical’, are too easily invalidated by something not yet known. Better, e.g., to treat them as strongly hypothetical, and the problem of superintelligent AI as ‘almost certainly not hypothetical’. But we face the future with the institutions we have, not the institutions we wish we had, and part of the culture of MIRI et al. is an attachment to particular scenarios of the long-term future. So be it.