is the world much more or less prepared for AGI than it was 15 years ago?
I think much better.
How much did the broader x-risk community change it at all?
I don’t really know / tough to answer. Certainly there are a lot more people talking about the problem, but it’s hard to know how much that comes from the x-risk community versus from vague concerns about AI in the world (my guess is big parts of both). I think we are in a better place with respect to knowledge of technical alignment—we know a fair bit about what the possible approaches are and have taken a lot of positive steps. There is a counterfactual where alignment isn’t even really recognized as a distinct problem and is just lumped in with vague concerns about safety, which would be significantly worse in terms of our ability to work productively on the problem (though I’d love it if we were further away from that world).