The key sentiment of this post that I currently agree with:
There’s a bit of a short timelines “bug” in the Berkeley rationalist scene, where short timelines have become something like the default assumption (or at least are not unusual).
There don’t seem to be strong, public reasons for this view.
It seems like most people who are sympathetic to short timelines are sympathetic to it mainly as the result of a social proof cascade.
But this is obscured somewhat, because some folks whose opinions are being trusted don't show their work (rightly or wrongly), because of info-security considerations.
I think Gwern has now made a relatively decent public case? Or at least I feel substantially less confused about the basic arguments, which I think I can fairly accurately summarize as "sure seems like there is a good chance just throwing more compute at the problem will get us there", with, of course, a lot of detail about why that might be the case.
Is it really true that most people sympathetic to short timelines hold that view mainly because of a social proof cascade? I don't know any such person myself; the short-timelines people I know either have thought about it a ton and developed detailed models, or basically just got super excited about GPT-3 and recent AI progress. The people who like to defer to others pretty much all have medium or long timelines, in my opinion, because that's the respectable/normal thing to think.