When I read other people, I often feel like they’re operating in a ‘narrower segment of their model’, or not trying to fit the whole world at once, or something. They often seem to emit sentences that are ‘not absurd’, instead of ‘on their mainline’, because they’re mostly trying to generate sentences that pass some shallow checks instead of ‘coming from their complete mental universe.’
To me it seems like this is what you should expect other people to look like both when other people know less about a domain than you do, and also when you’re overconfident about your understanding of that domain. So I don’t think it helps distinguish those two cases.
(Also, to me it seems like a similar thing happens, but with the positions reversed, when Paul and Eliezer try to forecast concrete progress in ML over the next decade. Does that seem right to you?)
When Eliezer responded with:
But there’s a really really basic lesson here about the different style of “sentences found in political history books” rather than “sentences produced by people imagining ways future politics could handle an issue successfully”.
the subject got changed.
I believe this was discussed further at some point—I argued that Eliezer-style political history books also exclude statements like “and then we survived the cold war” or “most countries still don’t have nuclear energy”.
Also, to me it seems like a similar thing happens, but with the positions reversed, when Paul and Eliezer try to forecast concrete progress in ML over the next decade. Does that seem right to you?
It feels similar but clearly distinct? Like, in that situation Eliezer often seems to say things that I parse as “I don’t have any special knowledge here”, which seems like a different thing than “I can’t easily sample from my distribution over how things go right”, and I also have the sense of Paul being willing to ‘go specific’ and Eliezer not being willing to ‘go specific’.
I believe this was discussed further at some point—I argued that Eliezer-style political history books also exclude statements like “and then we survived the cold war” or “most countries still don’t have nuclear energy”.
I think I’m a little cautious about this line of discussion, because my model doesn’t strongly constrain the ways that different groups respond to increasing developments in AI. The main thing I’m confident about is that there will be much clearer responses available to us once we have a better picture of AI development.
You’re thinking of this bit of the conversation, starting with:
[Ngo][18:19, moved two down in log]
(As a side note, I think that if Eliezer had been around in the 1930s, and you described to him what actually happened with nukes over the next 80 years, he would have called that “insanely optimistic”.)
(Or maybe a bit earlier and later, but that was my best guess for where to start the context.)
The main quotes from the middle that seem relevant:
[Yudkowsky][18:21]
Mmmmmmaybe. Do note that I tend to be more optimistic than the average human about, say, global warming, or everything in transhumanism outside of AGI.
Nukes have going for them that, in fact, nobody has an incentive to start a global thermonuclear war. Eliezer is not in fact pessimistic about everything and views his AGI pessimism as generalizing to very few other things, which are not, in fact, as bad as AGI.
[Yudkowsky][18:22]
But yeah, compared to pre-1946 history, nukes actually kind of did go really surprisingly well!
Like, this planet used to be a huge warring snakepit of Great Powers and Little Powers and then nukes came along and people actually got serious and decided to stop having the largest wars they could fuel.
and ending with:
[Yudkowsky][18:38]
And Eliezer is capable of being less concerned about things when they are intrinsically less concerning, which is why my history does not, unlike some others in this field, involve me running around also being Terribly Concerned about nuclear war, global warming, biotech, and killer drones.
Rereading that section, my sense is that it reads like a sort of mirror of the Eliezer->Paul “I don’t know how to operate your view” section; like, Eliezer can say “I think nukes are less worrying for reasons ABC, also you can observe me being not worried about other things-people-are-concerned-by XYZ”, but I wouldn’t have expected you (or the reader who hasn’t picked up Eliezer-thinking from elsewhere) to have been able to come away from that with why you, trying to be Eliezer from the 1930s, would have thought ‘and then it turned out okay’ would have been a political-history-book-sentence, or the relative magnitudes of the surprise. [Like, I think my 1930s-Eliezer puts like 3-30% on “and then it turned out okay” for nukes, and my 2020s-Eliezer puts like 0.03-3% on that for AGI? But it’d be nice to hear if Eliezer thinks AGI turning out as well as nukes is like 10x the surprise of nukes turning out this well conditioned on pre-1930s knowledge, or more like 1000x the surprise.]
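For concreteness, here is a minimal sketch of how the bracketed numbers could be compared. It rests on my own assumptions (geometric midpoints of the stated ranges, and “N times the surprise” read either as a probability ratio or as a ratio of surprisal in bits), not on anything the dialogue pins down:

```python
# A minimal sketch, not anything from the dialogue itself: it cashes out the
# bracketed ranges above under two assumed readings of "N times the surprise":
# (a) a ratio of probabilities, (b) a ratio of surprisal in bits. Both readings,
# and the use of geometric midpoints, are my own gloss.
import math

def geometric_midpoint(lo, hi):
    """Geometric mean of a probability range, e.g. 3%-30% -> ~9.5%."""
    return math.sqrt(lo * hi)

p_nukes = geometric_midpoint(0.03, 0.30)    # 1930s-Eliezer on "and then it turned out okay" for nukes
p_agi   = geometric_midpoint(0.0003, 0.03)  # 2020s-Eliezer on the same for AGI

prob_ratio = p_nukes / p_agi                # reading (a): factor in raw probability, ~32x
bits_nukes = -math.log2(p_nukes)            # surprisal of the nukes outcome, ~3.4 bits
bits_agi   = -math.log2(p_agi)              # surprisal of the AGI outcome, ~8.4 bits

print(f"probability ratio: ~{prob_ratio:.0f}x")
print(f"surprisal: {bits_nukes:.1f} bits vs {bits_agi:.1f} bits "
      f"(~{bits_agi / bits_nukes:.1f}x the bits)")
```

Under those assumptions the two ranges differ by roughly a factor of 30 in probability but only about 2.5x in bits, which is part of why it matters which reading of “the surprise” Eliezer intends.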