In my opinion, this post does not engage with Aschenbrenner’s narrative at the object level; it merely objects to the “vibes” and notes that his predictions are questionable. Of course they are; they are predictions about something we’ve never done before. I don’t like the conclusions either, but that does not stop me from taking the arguments seriously.
The object-level claim is that AGI is not imminent, so we shouldn’t freak out about safety and world-dominating power, since that would deprive a lot of people of the benefits of AGI. However, exactly zero arguments are made for the object-level claim that AGI is still far away.
We know that timelines are difficult to predict. That shouldn’t stop us from taking short timelines seriously, and analyzing arguments as best we can even when it’s difficult.
I commented in more detail, and with more vehemence, over on the EA Forum.