I’d put this 3-7 year scenario at about 10%, maybe a bit less. So with probability around 10%, capabilities researchers should obviously be doing different things (I would love to say “pivoting en masse to safety and alignment research,” but we’ll see; since a lot of that work would be fake, the field would perhaps need to reward and provide outlets for fake safety/alignment research). But EA orgs should still be focusing most of their attention on longer timescales rather than going all-in on short ones.
I think if you put 10% on 3-7 year timelines, with alignment research where it currently stands, it’s hard to imagine what kinds of alignment improvements we would get that would also enable pivotal acts by individual actors far in the lead. So global coordination, starting as soon as possible, is still a necessary precondition for avoiding doom, even if we get lucky and have 15 years.
Reflecting on this and other comments, I decided to edit the original post to retract the call for a “fire alarm”.