I missed the first chunk of your conversation with Dylan at the Lurkshop about this, but at the time, it sounded like you suspected “quite large” wasn’t 6–48 months, but maybe more than a decade.
A decade in-expectation seems quite extreme.
To be clear, I don’t think AGI happening soon is particularly overdetermined, so I do think this actually differs quite a bit depending on the details, but I think it’s very unlikely that actions taken by people adjacent to rationality seriously sped up timelines by more than a decade. I would currently give that maybe 3% probability or something.
People think the speed-up by rationalists is only ~5 years? I thought people were thinking 10-40. I do not think I would trade the entire history of LessWrong, including the articulation of the alignment problem, for 5 years of timelines. I mean, maybe it’s the right call, but it hardly seems obvious.
When LessWrong was ~dead (before we worked on the revival), I had this strong sense that being able to even consider that OpenAI could be bad for the world, or the notion that the alignment problem wasn’t going to go okay by-default, was being edged out of the Overton window, and I felt enough pressure that I could barely think about it with anyone else. I think without the people on LessWrong writing to each other about it, I wouldn’t really be able to think clearly about the situation, and people would have collectively made like ~100x less of a direct effort on things.
(To be clear, I think the absolute probability is still dire; this has not been sufficient to solve things.)
And of course that’s just since the revival; the true impact counts the Sequences, and much of the articulation of the problem in the first place.
As bad as things are now, I think we all could’ve been a lot less sane in very nearby worlds.
I mean, I don’t see the argument for more than that. Unless you have some argument for hardware progress stopping, my sense is that things would get cheap enough that someone is going to try the AI stuff that is happening today within a decade.
Some people who would have been working on AI without LessWrong: Sutskever, Graves, Bengio, Hinton, LeCun, Schmidhuber, Hassabis, …
“When LessWrong was ~dead”
Which year are you referring to here?
2016-17
Added: To give context, here’s the number of LW posts by year:
2009: 852
2010: 1143
2011: 3002
2012: 2583
2013: 1973
2014: 1797
2015: 2002 (← This should be ~1880, as we added all ~120 HPMOR posts and backdated them to 2015)
2016: 1303 (← This is the most ‘dead’ year according to me, and the year with the fewest posts)
2017: 1671 (← LW 2.0 revived in the second half of this year)
2018: 1709
2019: 2121
2020: 3099
2021: 3230
2022: 4538
First quarter of 2023: 1436; if you 4x that, it is 5744 (sketched below).
(My, it’s getting to be quite a lot of posts these days.)
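To make the 4x extrapolation explicit, here’s a minimal Python sketch; the quarterly count is the one quoted above, and the uniform-posting-rate assumption is just the naive one implied by multiplying by four:

```python
# Naive annualization of the Q1 2023 LW post count quoted above.
# Assumes a uniform posting rate across all four quarters (a rough guess).
q1_posts_2023 = 1436
quarters_per_year = 4

projected_2023 = q1_posts_2023 * quarters_per_year
print(projected_2023)  # 5744
```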
Makes sense! Less confused now.