“the sort of argument that would move a rational agent” = “serves as evidence for truth”.
I think these are not at all the same, and that using the word “acceptable” makes you more likely to make this particular bucket error. In short: serves as evidence for truth to whom?
In practice what assuming these are the same looks like, in the LW epistemic game, is 1) pretending that we’re all rational agents and 2) therefore we should only individually accept arguments that make sense to all of us. But in fact the sorts of arguments that would move me are not the sorts of arguments that would move you (and whether or not we’re rational agents we still need to decide which arguments move us), which is why I can feel very confident that a decision I’m making is a good idea even though I may not be able to phrase my internal arguments for doing so in a way that would be satisfying to you. Optimizing for satisfying all of LW is straight up Goodharting on social approval.
However, it seems not useful to point it out without saying precisely which error is being made.
There’s at least one error I can point to easily because someone else already did the hard work of pinning it down: I think the error that is being committed in the Hero Licensing dialogue is a default outcome of the LW epistemic game.
When you start from something highly optimized (as I do believe our community standards to be, relatively speaking) and make a random change, you are far more likely to do harm than good.
The changes I want to make are not random, and I don’t believe that the LW epistemic game is highly optimized for the right thing.
The opposite is also not clear.
So, let’s back up. The reason we started talking about this science vs. religion thing is because you objected to my description of the LW epistemic game. I think we got a little lost and meandered too far from this objection. The point I understood you to try to be making was that the LW epistemic game is not just a game, it’s also supposed to help us be truth-seeking. And the point I was trying to make in response is that to the extent that it’s not perfectly truth-seeking (which it is certainly not), this fact is worth pointing out. Are we on the same page about all that?
I agree that you can make arguments that appeal to people who have a particular intuition and don’t appeal to people who don’t have this intuition. Although it is also possible to point out explicitly that you are relying on this intuition and that convincing the rest would require digging deeper, so to speak. I’m not sure whether the essence of your claim is that people on LW react badly to that kind of argument?
I admit that I haven’t read the entire “hero licensing” essay, but my impression was that it is hammering home the same thesis that already appears in “inadequate equilibria”, namely that “epistemic modesty” as often practiced is a product of status games rather than a principle of rationality. But I don’t really understand why you think it’s “a default outcome of the LW epistemic game”. Can you expand?
Yes, the “LW epistemic game” is not perfectly truth-seeking. Nothing that humans do is perfectly truth-seeking. Since (I think) virtually nobody believes that it is perfectly truth-seeking, it’s mostly worth pointing out only inasmuch as you also explain how it is not truth-seeking and in what direction it would have to change in order to become more so.