Sure, but doesn’t that apply mostly uniformly to all decisions, making the tradeoffs mostly the same and just giving everything a larger magnitude? I don’t see why this is the kind of decision that should be more influenced by that line of reasoning than, e.g., my heuristics for choosing what to do with my career, how I choose romantic partners, whether to get exercise, etc.
The magnitude depends on the sizes of the reference classes, which differ dramatically. So some personal decisions suddenly become much more important than others simply because more people make them, and you should allocate more resources to getting those particular decisions right. An exercise regimen seems like a high acausal-impact decision. Another difference is that the goal the personal decisions pursue shifts from what you want to happen to yourself to what you want to happen to other people, and this effect grows with population. (Edited the grandparent to express these points more clearly.)
Hmm, I think that’s too shallow. My decisions aren’t made in isolation; they are the result of applying general principles and broad heuristics. The biggest impact should come from other people adopting the same heuristics and principles, not just making the same object-level decisions.
That influences the sizes of the reference classes, but at some point the sizes cash out in morally relevant object-level decisions.
But if you spend more time thinking about exercise, that time cost is multiplied greatly. I think this kind of countereffect cancels out every practical argument of this type.
New information argues for a change on the margin, so the new equilibrium is different, though it may not be far away. The arguments are not “cancelled out”, but they do have only bounded impact. Compare with charity evaluation in effective altruism: if we take the impact of certain decisions to be sufficiently significant, that calls for their organized study, so that the decisions are no longer made based on first impressions. On the other hand, if there is already enough infrastructure for making good decisions of that type, then significant changes are unnecessary.
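To make the “cancels out” vs. “change on the margin” exchange concrete, here is a toy numerical sketch (my own illustration; the functional forms and numbers are made up, not taken from this thread). If the reference class multiplies both the value of getting the exercise decision right and the time spent deliberating about it, the class size N drops out of the optimal amount of deliberation; only if the benefit scaled while the cost did not would the optimum move far.

```python
import numpy as np

def optimal_deliberation(N, benefit_scale=10.0, cost_rate=1.0, shared_cost=True):
    """Hours of deliberation t* that maximize net value, found on a coarse grid.

    benefit(t) = N * benefit_scale * (1 - exp(-t))   # diminishing returns
    cost(t)    = N * cost_rate * t   if the whole reference class deliberates,
                 cost_rate * t       if only you pay the time cost.
    """
    t = np.linspace(0.0, 20.0, 20_001)
    benefit = N * benefit_scale * (1.0 - np.exp(-t))
    cost = (N if shared_cost else 1) * cost_rate * t
    return t[np.argmax(benefit - cost)]

# When the deliberation cost is multiplied along with the benefit,
# the optimal amount of deliberation does not depend on N at all:
print(optimal_deliberation(N=1), optimal_deliberation(N=10**6))   # both ~2.30
# If only the benefit were multiplied, the optimum would move out a lot:
print(optimal_deliberation(N=10**6, shared_cost=False))           # ~16.1
```

Here `benefit_scale`, `cost_rate`, and the exponential form are arbitrary placeholders; the only point is which quantities the reference-class size N multiplies.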
In the case of acausal impact, large reference classes imply that at least that many people are already affected, so if organized evaluation of such decisions is feasible to set up, it’s probably already in place without any need for the acausal impact argument. So actual changes are probably in how you pay attention to info that’s already available, not in creating infrastructure for generating better info. On the other hand, a source of info about sizes of reference classes may be useful.