My view is not "can no longer do any good," but rather "can do less good in expectation than if you still had some time left before ASI to influence things." For reasons why, see the linked comment above.
I think that by the time Metaculus is convinced that ASI already exists, most of the important decisions w.r.t. AI safety will have already been made, for better or for worse. Ditto (though not as strongly) for AI concentration-of-power risks and AI misuse risks.