If we have total conviction that the end of the world is nigh, isn’t it rational to consider even awful, unpalatable options for extending the timeline before we “achieve” AGI?
Eliezer has been very clear that he thinks this is a bad idea; see e.g. Q2 of this post.
Also, keep in mind that a single instance of one AI safety person doing something criminal could massively damage the public standing of the whole community. I think this consideration should dominate the calculation: even if you think the probability that [the arguments from the current post are totally wrong] is low, it's not that low.