A Quick Guide to Confronting Doom
Within the last two weeks, two sets of things happened: Eliezer Yudkowsky shared a post expressing extreme pessimism about humanity’s likelihood of surviving AGI, and a number of AI research labs published new, highly impressive results. The combination of these two has resulted in a lot of people feeling heightened concern about the AI situation and how we ought to be reacting to it.
There have been calls to pull “fire alarms”, proposals for how to live with this psychologically, people deciding to enter the AI Alignment field, and a significant increase in the number of AI posts submitted to LessWrong.
The following is my own quick advice:
1. Form your own models and anticipations. It’s easy to hear the proclamations of [highly respected] others and/or everyone else reacting and then reflexively update to “aaahhhh”. I’m not saying “aaahhhh” isn’t the right reaction, but I think for any given person it should come after a deliberate step of processing arguments and evidence to figure out your own anticipations. I feel that “A concrete bet offer to those with short AI timelines” is a great example of this. It lists lots of specific things the authors do (or rather don’t) expect to see. “What 2026 looks like” is another example I’d point to of someone figuring out their own anticipations.[1]
2. Figure out your own psychology (while focusing on what’s true). Eliezer, Turntrout, and landfish have each written about their preferred way of reacting to the belief that P(Doom) is very high. My guess is that people who are concluding P(Doom) is high will each need to figure out how to live with it for themselves. My caution is just that whatever strategy you figure out should keep you in touch with reality (or your best estimate of it), even if it’s uncomfortable.
3. Be gentle with yourself. You might find yourself confronting some very upsetting realities right now. That’s okay! The realities are very upsetting (imo). This might take some time to process. Let yourself do that if you need to. It might take you weeks, months, or even longer to come to terms with the situation. That’s okay.
4. Don’t take rash action, and be cautious about advocating rash action. As far as I know, even the people with the shortest timelines still measure them in years, not weeks. Whatever new information came out these past two weeks, we can take some time to process and figure out our plans. Maybe we should figure out some new bold plans, but I think if that’s true, it was already true before. We can start having conversations now, but upheavals don’t need to happen this second.
5. You may need to be patient about contributions. Feelings of direness about the situation can bleed into feelings of urgency. As above, we’re probably not getting AGI this week (or even this year) according to anyone, so it’s okay to take time to figure out what you (or anyone else) should do. It’s possible that you’re not in a position to make any contributions right now, and that’s also an okay reality. You can work on getting yourself into a better position to contribute without having to do something right now.
6. Beware the unilateralist’s curse. I’m seeing a lot of proposals on LessWrong that aren’t just for research directions, but also things that look more like political action. Political action may well be very warranted, but it’s often something that both can’t be taken back and affects a shared game board. If you’re thinking of starting on plans like this, I urge you to engage very seriously with the AI x-risk community before acting. The fact that certain plans haven’t been enacted already is likely not because no one had thought of them before, but because those plans are fraught.
It might help encourage people to form their own opinions if I note that there isn’t broad consensus about P(Doom). Eliezer has most recently expressed his view, but not everyone agrees – some people just haven’t posted about it recently, and I don’t think their minds have been entirely changed by recent developments. I am personally inclined to agree with Eliezer’s take, but that’s because I know more of his reasoning and find it compelling. People shouldn’t conclude that there’s consensus in the “AI leadership”, and even if there is, you should still think it through for yourself.