“It is possible that there is, in practice, no middle path. That our only three available choices, as a planet, are ‘build AGI almost as fast as possible, assume alignment is easy on the first try and that the dynamics that arise after solving alignment can be solved before catastrophe as well,’ ‘build AGI as fast as possible knowing we will likely die because AGI replacing humans is good actually’ or ‘never build a machine in the image of a human mind.’”
I think we are largely in agreement, except that I think this scenario is by far the likeliest. Whether or not a middle path exists in theory, I see no way to it in practice. I don't know what level of risk justifies outright agitation for Butlerian jihad (smash all the microchips and execute anyone who tries to make them), but it's a good deal lower than 50%.