I think the most likely cause is a lack of funding for intermediate steps. But if it happens, I don't think it will be coordinated action motivated by AGI safety concerns, but another "AI winter" motivated by short-termism about profit margins.
On the anthropics question: No, Mr. Bond, we should expect ourselves to die. If you’re about to get shot in the head, you don’t “expect the gun to jam”, you just expect to probably die (and therefore take actions like buying life insurance or a bulletproof helmet based on that belief). Anthropic reasoning about the future is tricky precisely because it’s tempting to neglect this possibility.