You’re completely right. If you don’t believe it, this post isn’t really trying to update you. It’s more meant to serve as a coordination mechanism for the people who do think the rest isn’t very difficult (which I am assuming is a not-small number).
Note that I also don’t think the actions advocated by the post are suboptimal even if you only place 30% probability on 3-7 year timelines.
I’m a little worried about what might happen if different parts of the community end up with very different timelines, and thus very divergent opinions on what to do.
It might be useful if we came up with some form of community governance mechanism or heuristics to decide when it becomes justified to take actions that might be seen as alarmist by people with longer timelines. On the one hand, we want to avoid stuff like the unilateralist’s curse, on the other, we can’t wait for absolutely everyone to agree before raising the alarm.
One probably-silly idea: maybe we could do some kind of trade. Long-timelines people agree to work on short-timelines people’s projects over the next 3 years. Then, if the world isn’t destroyed, the short-timelines people work on the long-timelines people’s projects for the following 15 years. Or something.
My guess is that the details are too fraught to get something like this to work (people will not be willing to give up so much value), but maybe there’s a way to get it to work.