It’s frustrating how bad dath ilanis (as portrayed by Eliezer) are at understanding other civilisations. They seem to have all dramatically overfit to dath ilan.
To be clear, it’s the type of error which is perfectly sensible for an individual to make, but strange for their whole civilisation to be making (by teaching individuals false beliefs about how tightly constraining their coordination principles are).
The in-universe explanation seems to be that they’ve lost this knowledge as a result of screening off the past. But that seems like a really predictable failure mode which gives them false beliefs about very important topics, so I have trouble imagining it being consistent with the rest of Eliezer’s characterisation of dath ilan.
(FWIW I’ll also note that this is the same type of mistake that I think Eliezer is making when reasoning about AI.)
Tho, to be fair, losing points in universes you don’t expect to happen in order to win points in universes you expect to happen seems like good decision theory.
[I do have a standing wonder about how much of dath ilan is supposed to be ‘the obvious equilibrium’ vs. ‘aesthetic preferences’; I would be pretty surprised if Eliezer thought there was only one fixed point of the relevant coordination functions, and so some of it must be ‘aesthetics’.]
I don’t think dath ilan would try to win points in likely universes by teaching children untrue things, which I claim is what they’re doing.
Also, it’s not clear to me that this would even win them points, because when thinking about designing civilisation (or AGIs) you need to have accurate beliefs about this type of thing. (E.g. imagine dath ilani alignment researchers being like “here are all our principles for understanding intelligence” and then continually being surprised, like Keltham is, about how messy and fractally unprincipled some plausible outcomes are.)