It should not take long, given these pieces and a moderate amount of iteration, to create an agentic system capable of long-term decision-making.
That is, to put it mildly, a pretty strong claim, and one I don’t think the rest of your post really justifies. Without that justification, it’s still just a theoretical thing to worry about.
You’re completely right. If you don’t believe it, this post isn’t really trying to update you. It’s meant more as a coordination mechanism for the people who do think the rest isn’t very difficult (which I’m assuming is a not-small number).
Note that I also don’t think the actions advocated by the post are suboptimal even if you only assign a 30% probability to 3-7 year timelines.
I’m a little worried about what might happen if different parts of the community end up with very different timelines, and thus very divergent opinions on what to do.
It might be useful if we came up with some form of community governance mechanism or heuristics to decide when it becomes justified to take actions that might be seen as alarmist by people with longer timelines. On the one hand, we want to avoid stuff like the unilateralist’s curse, on the other, we can’t wait for absolutely everyone to agree before raising the alarm.
One probably-silly idea: we could maybe do some kind of trade. Long-timelines people agree to work on short-timelines people’s projects over the next 3 years. Then, if the world isn’t destroyed, the short-timelines people work on the long-timelines people’s projects for the following 15 years. Or something.
My guess is that the details are too fraught to get something like this to work (people will not be willing to give up so much value), but maybe there’s a way to get it to work.
I don’t know enough to evaluate this post. I don’t know if it is correct or not. However, a completely convincing explanation could possibly shorten timelines. So is that satisfying? Not really. But the universe doesn’t have to play nice.
One year later, in light of ChatGPT’s internet and shell plugins, do you still think this is just a theoretical thing? Should we worry about it yet?
The fire alarm sentiment seems to have been entirely warranted, even if the plan proposed in the post wouldn’t have been helpful.