(Apologies for the late reply!) I think working on improved institutions is a good goal that could potentially help, and I’m excited about some of the work going on in the general categories you mentioned. It’s not my focus because (a) I do think the “timelines don’t match up” problem is big; and (b) I think it’s really hard to identify specific interventions that would improve all decision-making, since it’s hard to predict the long-run effects of any given reform (e.g., a new voting system) as the context changes. Accordingly, what feels most pressing to me is getting more clarity on specific measures that can be taken to reduce the biggest risks to humanity, and then looking specifically at which institutional changes would make the world better-positioned to evaluate and act on those types of measures. Hence my interest in AI strategy “nearcasting” and in AI safety standards.