Hmm. I think it’s an argument for the mindset of preparing for a final safety sprint right near the end.
Something I’ve talked about with others as the idea of, “our best chance to make progress on safety will be right at the moment before we lose control because the AI is too strong. If we can concentrate our work then, and maybe focus on prepping for that time now, we can do a fast sprint and save the world at that critical juncture.”
I feel torn about this, since I worry that we might mistime it and overshoot the optimal point without realizing it.
This post isn’t exactly talking about this dynamic, but it roughly fits the pattern. I think there’s something to the point being made, but I also see danger in taking it too far.