This post made me pretty sad, because I think it focuses on precisely the wrong part of the AI doom case (the claim that AGI is near) and concedes the one point she shouldn't concede (that alignment is hard).
If I were like Sarah and thought that alignment is super hard but AGI is a long way off, I would probably still consider myself a "doomer," and I would still be very worried.