And they’re planning on aligning the AI in 4 years, or presumably dying trying. Seems like a big & bad change from their previous promise to pause if they couldn’t solve alignment.
What makes you think they don’t plan to pause?
Also not sure why you think they plan to die if they don’t solve alignment in 4 years / what you wish they’d say (‘we plan to solve alignment in 40 years’?) / what you mean.
Oh, you’re right. I misread part of the text as saying they were both building superintelligence and aligning it within 4 years, and commented that aligning it in 4 years seemed very ambitious. Oops.