Appreciate the post. I’ve previously donated $600 through the EA Manifund thing and will consider donating again late this year / early next year when thinking through donations more broadly.
I’ve derived a lot of value from LW/AIAF content in thinking through AI futures (some non-exhaustive standouts: 2021 MIRI conversations, List of Lethalities and Paul’s response, t-AGI framework, Without specific countermeasures..., Hero Licensing). It’s unclear to me how much of that value would have been retained if LW didn’t exist, but plausibly LW is responsible for a large fraction.
In a few ways I feel not fully/spiritually aligned with the LW team and the rationalist community: my alignment difficulty/p(doom)[1] is farther from Eliezer’s[2] than my perception of the median of the LW team’s[3] (though closer to Eliezer than most EAs), I haven’t felt sucked in by most of Eliezer’s writing, and I feel gut-level cynical about people’s ability to deliberatively improve their rationality (edit: with large effect size), though I haven’t spent a long time examining the evidence to decide whether I really believe this.
But still, LW has probably made a large positive difference in my life, and I’m very thankful. I’ve also enjoyed Lighthaven, though I have to admit I’m not very observant or opinionated about conference venues (or web design, which is why I focused on LW’s content).
Previously just said “AI forecasts”; edited to be more specific about the view I’m talking about.
Previously said MIRI; edited MIRI → Eliezer since MIRI has somewhat heterogeneous views.
Previously just said “LW team”; added “the median of” to better represent heterogeneity.
Hmm, my guess is we probably don’t disagree very much on timelines. My honest guess is that yours are shorter than mine, though mine are a bit in flux right now with inference compute scaling happening and the slope and reliability of that mattering a lot.
Yeah, I meant more on p(doom)/alignment difficulty than timelines; I’m not sure what you guys’ timelines are. I’m roughly in the 35–55% ballpark for a misaligned takeover, and my impression is that you all are closer to, but not necessarily all the way at, the >90% Eliezer view. If that’s also wrong I’ll edit to correct.
edit: oh, maybe my wording of “farther” in the original comment was confusing and made it sound like I was talking about timelines. I will edit to clarify.
Lightcone is also heterogeneous, but I think it’s accurate that the median view at Lightcone is >50% on misaligned takeover.
Thanks. I edited again to be more precise. Maybe I’m closer to the median than I thought.
(edit: unimportant clarification. I just realized “you all” may have made it sound like I thought every single person on the Lightcone team had a higher p(doom) than mine. I meant it as a generic y’all for the group, not a claim about the minimum p(doom) on the team.)
My impression matches your initial one, to be clear: my point estimate of the median is around 85%, but my confidence only extends to >50%.
Ah, yep, I am definitely more doomy than that. I tend to be around 85%-90% these days. I did indeed interpret you to be talking about timelines due to the “farther”.
Do we have any data on p(doom) in the LW/rationalist community? I would guess the median is lower than 35-55%.
It’s not exactly clear where to draw the line, but I would guess this to be the case for, say, the 10% most active LessWrong users.