To be clear, I am also concerned, but at lower probability levels and mostly not about doom. The laughable part is the specific claim that "our light cone is about to get ripped to shreds" by a paperclipper or the equivalent, on the strength of an overconfident and mostly incorrect EY/LW/MIRI argument involving the supposed complexity of value, failure of alignment approaches, fast takeoff, the sharp left turn, etc.
I of course agree with Aaro Salosensaari that many of the concerned experts were/are downstream of LW. But the causality also runs the other way to some degree: beliefs about AI risk influence career decisions, so it is not surprising that most of those working on AI capability research think the risk is low while those working on AI safety/alignment think it is greater.