There is careful futurism, where you try to account for all the biases you know of, decompose your analysis into logical parts, put confidence intervals around things, widen those intervals where you have less constraining knowledge, and do all that other stuff rationalists do.
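A minimal sketch of what that decomposition might look like in practice: a toy Monte Carlo forecast where each sub-question gets its own distribution, and the less-constrained ones get wider spreads. Every component name, median, and width below is invented purely for illustration, as is the assumption that the parts are serial and independent.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Each component is a log-normal over "years until done", given as
# (median guess, sigma in log-space). Less constraining knowledge
# -> larger sigma -> wider confidence interval. All values hypothetical.
components = {
    "hardware":   (10, 0.3),   # relatively well-constrained by trends
    "algorithms": (15, 0.8),   # poorly constrained -> much wider
    "deployment": (3,  0.5),
}

# Decompose: sample each part separately, then combine.
# (Treating the parts as serial and independent is itself a loud
# modeling assumption, not a fact.)
samples = sum(
    rng.lognormal(mean=np.log(median), sigma=sigma, size=N)
    for median, sigma in components.values()
)

lo, mid, hi = np.percentile(samples, [5, 50, 95])
print(f"median: {mid:.0f} years, 90% interval: [{lo:.0f}, {hi:.0f}]")
```

The point of the exercise isn't the numbers; it's that the structure forces you to say which sub-question your uncertainty lives in, instead of emitting one opaque date.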
I would love to see someone do that to the AGI and (U)FAI predictions.