I think this claim:

> I actually suspect that this is a more general disagreement—I think that, in complicated domains, the approach of "figure out what things work locally, do those things, and iterate" outperforms the approach of "look at the problem, work really hard on coming up with an explicit model of the reward landscape, and then do the optimal thing according to your model".
is, to first order, probably the most general crux in whether you view LW as a useful thing, perhaps even the most important one, or as essentially worthless.
It strikes at the core of the LessWrong worldview, so it’s natural that such deep differences result in different predictions.
To be clear, I think you can sensibly disagree with people on LW about AI risk being high, or being real at all, as well as about other issues I haven't examined, even under a worldview that endorses "look at the problem, work really hard on coming up with an explicit model of the reward landscape, and then do the optimal thing according to your model". But I think a lot of topics on LW make much more sense if you fundamentally buy the worldview in which model-building matters more than iteration.