How is this a bad/political example? I read this as a useful and interesting thought experiment on the implications of epiphenomena with respect to ecosystems.
Would you mind saying in non-metaphorical terms what you thought the point was? I think this would help produce a better picture of how hard it would have been to make the same point in a less inflammatory way.
Ecosystems, and organisms in them, generally don’t care about stuff that can’t be turned into power-within-the-ecosystem. Box two exists, but unless the members of box one can utilize box two for e.g. information/computation/communication, it doesn’t matter to anyone in box one.
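Here's a minimal toy sketch of that asymmetry (in Python; the trait names, population size, and mutation scale are all invented for illustration, not taken from the OP): selection keeps optimizing any trait that feeds back into reproductive success, while a trait with no such feedback just drifts, however real it is.

```python
import random

random.seed(0)  # reproducible toy run

POP_SIZE = 1000
GENERATIONS = 200

# Each individual has a "box one" trait that affects reproductive
# success, and a "box two" trait that is epiphenomenal: real,
# heritable, mutating, but with no feedback into selection.
population = [
    {"box_one": random.random(), "box_two": random.random()}
    for _ in range(POP_SIZE)
]

for _ in range(GENERATIONS):
    # Fitness depends only on the box-one trait.
    weights = [ind["box_one"] for ind in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    # Offspring inherit both traits with small mutations, clipped to [0, 1].
    population = [
        {trait: min(1.0, max(0.0, p[trait] + random.gauss(0, 0.01)))
         for trait in ("box_one", "box_two")}
        for p in parents
    ]

def mean(trait):
    return sum(ind[trait] for ind in population) / POP_SIZE

print(f"mean box_one: {mean('box_one'):.2f}")  # pushed toward 1.0 by selection
print(f"mean box_two: {mean('box_two'):.2f}")  # drifts near 0.5; selection never sees it
```

Note that the box-two trait isn't selected against either; the process simply never sees it, which is the sense in which it doesn't matter to anyone in box one.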
Other places where this applies:
Highly competitive industries won’t care about externalities or the long-term future. Externalities and the future are in box two. They might not even be modeled.
Young people have a personal interest in making their lives better for when they're older, but under sufficient competitive pressure (e.g. in competitive workplaces, or in status-based social groups), they won't act on it. Nursing homes are box two.
People playing power games at a high level (e.g. in politics) will have a hard time caring about anything not directly relevant to the power game. Most of the actual effects are, from the perspective of the power game, in box two; the effects that are directly relevant get modeled as part of the power game itself, i.e. box one. Signing a bill is not about the policy effects; it's about signalling, because the policy effects only affect the power game on a pretty long timescale (and likely won't even be modeled from within the power game), while signalling affects it immediately.
(These examples are somewhat worse for making the point, because the case is much clearer for evolution; humans are sometimes rational agents that act non-ecologically.)
The examples are really clear and make the OP much more interesting to me, thanks. I retract my criticism.
I feel like the core example here has a long history of being argued for with increasingly strong anti-epistemologies, so it feels like an especially strong example of a thing not worth spending time trying to steelman. We should expect arguments for it to be reliably good at making us confused without there being a useful insight behind our confusion.
If the argument is just being used as an example to make an interesting point about, as you say, epiphenomena and selection processes, then I think there is probably a large swathe of examples that aren’t this particular example.
The point is really analytically simple; it doesn't require steelmanning to understand, you can just read the post. You don't need to use the outside view for arguments like this; you can just spend a small amount of effort trying to understand it (argument screens off authority). It isn't even arguing positively for the existence of the afterlife; at most it's arguing against one particularly weak argument against it.
Contrast this with, say, the ontological argument, which is not analytically simple, has obvious problems/counterexamples, and might be worth understanding more deeply, but likely isn’t worth the effort since (from an atheistic perspective) it’s likely on priors to be made in bad faith based on a motivated confusion.
In general, if “politics is the mindkiller” is preventing you from considering any analytical argument that you interpret as being on a particular side of a political issue, then “politics is the mindkiller” is likely mindkilling you more than politics itself. (My impression was that religion wasn’t even a significant political issue around here, since opinion is so near-unanimously against its literal truth...)
I don’t see a different example that makes the point as strongly and clearly, do you see one?
You may be right. It certainly seems likely to me that the author was just picking a narratively good example.
I did recently encounter some arguments surprisingly similar to the one in the OP (things similar to this) that were clearly designed to be deeply confusing, and I was incredibly surprised to find the environment I was in (not LW, but some thoughtful people) taking them seriously and being confused by them. That lowered my threshold for calling out this type of cognitive route as bad and not worth exploring. I haven’t had time to think up as clear an example as the OP’s; as I say, it seems plausible that this one just is the most narratively simple. There are often religion metaphors in abstract problems (e.g. decision theory) that are clearly natural to use.
You say you found the OP to be a useful thought experiment, and that already causes me to think I might be mistaken; I’m pretty sure the part of me that thought the example was bad would also have predicted you wouldn’t find it very useful.
I think the OP is more about evolution giving us irrational drives that override our intellect. For example, if someone believes that bungee jumping is safe but is still afraid to jump, their belief is right but their fear is wrong, so the fear shouldn’t be taken as a strong argument against the belief.