Changing one’s mind on P(doom) can be useful for people comparing across cause areas (e.g. Open Phil), but it’s not all that important for me and was not one of my goals.
Generally when people have big disagreements about some high-level question like P(doom), it means that they have very different underlying models that drive their reasoning within that domain. The main goal (for me) is to acquire underlying models that I can then use in the future.
Acquiring a new underlying model that I actually believe would probably be more important than the rest of my work in a full year combined. It would typically have significant implications for what sorts of proposals can and cannot work, and would influence what research I do for years to come. In the case of Eliezer's model specifically, it would completely change what research I do, since that model predicts (I think) that the research I do is useless.
I didn’t particularly expect to actually acquire a new model that I believed from these conversations, but there was some probability of that, and I did expect that I would learn at least a few new things I hadn’t previously considered. I’m unfortunately quite bad at noticing my own “updates”, so I can’t easily point to examples. That being said, I’m confident that I would now do significantly better at Eliezer’s Ideological Turing Test (ITT) than before the conversations.