I hope even in the LW epistemic frame we can appreciate how dangerous it is to conflate “acceptable,” which is fundamentally a social notion, and “serve[s] as evidence for truth.” The point of being able to Look at the LW epistemic game, from within the point of view of the LW epistemic game, is precisely to see the ways in which playing it well is Goodharting on truth-seeking.
Your comment sounds like someone who is comparing science and religion and saying that both are just “patterns in a social web” that deem different sorts of arguments acceptable.
Yes, that’s a sort of thing I might say.
However, they are not really symmetric. One of them is more correct than the other.
I get the feeling that you think you’ve said a simple thing, but actually the thing you’ve said is very complicated and deserves to be unpacked in much greater detail. In short: more correct about what? And why does being correct about those things matter?
The scientific frame is not just a collection of methods for finding truths, it’s also a collection of implicit values about what sorts of truths are worth finding out. Science clearly beats religion at finding out truths about things like how to predict the weather or make computers. But it’s much less clear that it beats religion at finding out truths about things like how to live a good life or make good communities of humans that actually work.
I hope even in the LW epistemic frame we can appreciate how dangerous it is to conflate “acceptable,” which is fundamentally a social notion, and “serve[s] as evidence for truth.”
I am not conflating them; I am using the word “acceptable” to mean “the sort of argument that would move a rational agent” = “serves as evidence for truth”. Of course our community standards are likely to fall far short of the hypothetical standards of ideal rational agents. However, it seems unhelpful to point that out without saying precisely which error is being made. When you start from something highly optimized (as I do believe our community standards to be, relatively speaking) and make a random change, you are far more likely to do harm than good.
I get the feeling that you think you’ve said a simple thing, but actually the thing you’ve said is very complicated and deserves to be unpacked in much greater detail.
I don’t know why you feel that I think that I said a simple thing (we’re breaking Yudkowsky’s Law of Ultrafinite Recursion here). I do think I said something that is relatively uncontroversial in this community. But alright, let’s start unpacking.
Science clearly beats religion at finding out truths about things like how to predict the weather or make computers. But it’s much less clear that it beats religion at finding out truths about things like how to live a good life or make good communities of humans that actually work.
The opposite is also not clear. Religion is responsible for many, many atrocities, both on a grand scale (witch hunts, crusades, jihads, pogroms, etc.) and on a more moderate scale (e.g. persecution of sexual minorities, upholding patriarchal social orders, and justifying authoritarian regimes). These atrocities were ameliorated in the modern age to a large extent thanks to the disillusionment with religion brought about by the advent of science. Moreover, religion doesn’t really consciously set out to find truths about making good communities. That seems to be more a side effect of religion, since a group of people with common beliefs and traditions is naturally more cohesive than a group of people without them. I think that if we do set out to find these truths, we would be well advised to use the methods of science (e.g. empiricism and mathematical models) rather than the methods of religion (i.e. dogma).
“the sort of argument that would move a rational agent” = “serves as evidence for truth”.
I think these are not at all the same, and that using the word “acceptable” makes you more likely to make this particular bucket error. In short: serves as evidence for truth to whom?
In practice what assuming these are the same looks like, in the LW epistemic game, is 1) pretending that we’re all rational agents and 2) therefore we should only individually accept arguments that make sense to all of us. But in fact the sorts of arguments that would move me are not the sorts of arguments that would move you (and whether or not we’re rational agents we still need to decide which arguments move us), which is why I can feel very confident that a decision I’m making is a good idea even though I may not be able to phrase my internal arguments for doing so in a way that would be satisfying to you. Optimizing for satisfying all of LW is straight up Goodharting on social approval.
However, it seems not useful to point it out without saying precisely which error is being made.
There’s at least one error I can point to easily because someone else already did the hard work of pinning it down: I think the error that is being committed in the Hero Licensing dialogue is a default outcome of the LW epistemic game.
When you start from something highly optimized (as I do believe our community standards to be, relatively speaking) and make a random change, you are far more likely to do harm than good.
The changes I want to make are not random, and I don’t believe that the LW epistemic game is highly optimized for the right thing.
The opposite is also not clear.
So, let’s back up. The reason we started talking about this science vs. religion thing is that you objected to my description of the LW epistemic game. I think we got a little lost and meandered too far from that objection. The point I understood you to be making was that the LW epistemic game is not just a game; it’s also supposed to help us be truth-seeking. And the point I was trying to make in response is that to the extent that it’s not perfectly truth-seeking (which it certainly is not), this fact is worth pointing out. Are we on the same page about all that?
I agree that you can make arguments that appeal to people who have a particular intuition and don’t appeal to people who don’t have it. Although it is also possible to point out explicitly that you are relying on this intuition and that convincing the rest would require digging deeper, so to speak. I’m not sure whether the essence of your claim is that people on LW take ill to that kind of argument?
I admit that I haven’t read the entire “Hero Licensing” essay, but my impression was that it is hammering home the same thesis that already appears in “Inadequate Equilibria”, namely that “epistemic modesty” as often practiced is a product of status games rather than a principle of rationality. But I don’t really understand why you think it’s “a default outcome of the LW epistemic game”. Can you expand?
Yes, the “LW epistemic game” is not perfectly truth-seeking. Nothing that humans do is perfectly truth-seeking. Since (I think) virtually nobody thinks that it is perfectly truth-seeking, it’s mostly worth pointing out only inasmuch as you also explain how it is not truth-seeking and in what direction it would have to change in order to become more so.