No. If there’s a coinflip that determines whether an identical copy of me is created tomorrow, my ability to perfectly coordinate the actions of all copies (logical counterfactuals) doesn’t help me at all with figuring out whether I should value the well-being of these copies according to SIA, SSA, or some other rule.
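For concreteness, here’s a minimal sketch of the standard SIA vs SSA credences for this kind of coinflip, just to make the two rules concrete (the numbers are the usual toy calculation; nothing about coordination between copies picks one assignment over the other):

```python
# Coinflip: heads -> an identical copy is created tomorrow (2 observers
# in that world), tails -> no copy (1 observer). The coin itself is fair.

prior = {"heads": 0.5, "tails": 0.5}
observers = {"heads": 2, "tails": 1}

# SIA: weight each world by its number of observers, then renormalize.
sia_unnormalized = {w: prior[w] * observers[w] for w in prior}
sia = {w: v / sum(sia_unnormalized.values()) for w, v in sia_unnormalized.items()}

# SSA: treat yourself as a random sample from the observers *within* a world;
# the probabilities of the worlds themselves stay at the prior.
ssa = dict(prior)

print(sia)  # heads ~ 2/3, tails ~ 1/3
print(ssa)  # heads = 1/2, tails = 1/2
```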
That sounds a lot like Stuart Armstrong’s view. I disagree with it, although perhaps our differences are merely definitional rather than substantive. I consider questions of morality or axiology separate from questions of decision theory. I believe the best way to model things is for agents to care only about their own overall utility function; however, this is a combination of direct utility (utility the agent experiences directly) and indirect utility (value the agent assigns to the overall world state excluding the agent, but including other agents). So from my perspective this falls outside the question of anthropics. (The only cases where this breaks down are Evil Genie-like problems, where there is no clear referent for “you”.)
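To make that decomposition concrete, here’s a minimal sketch, assuming purely for illustration that the combination is additive (the names are just illustrative):

```python
# The agent optimizes one overall utility, but that utility splits into a
# direct part (what the agent itself experiences) and an indirect part
# (how it values the rest of the world state, other agents and copies included).

def overall_utility(direct, world_state_excluding_agent, indirect_valuation):
    return direct + indirect_valuation(world_state_excluding_agent)

# One possible valuation of the copies' welfare; nothing here forces SIA or
# SSA -- on this view that choice lives in the indirect valuation, not in the
# decision theory wrapped around it.
value_copies = lambda world: 0.5 * sum(world["copy_welfare"])

print(overall_utility(10.0, {"copy_welfare": [4.0, 6.0]}, value_copies))  # 15.0
```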
I consider questions of morality or axiology separate from questions of decision theory.
The claim is essentially that the specification of which anthropic principles an agent follows belongs to axiology, not decision theory. That is, the orthogonality thesis applies to the distinction, so that different agents may follow different anthropic principles in the same way that different stuff-maximizers may maximize different kinds of stuff. Some things discussed under the umbrella of “anthropics” do seem relevant to decision theory, such as being able to function under most anthropic principles, but not, say, the choice between SIA and SSA.
(I somewhat disagree with the claim, as structuring values around instances of agents doesn’t seem natural; maps/worlds are more basic than agents. But that is a disagreement with emphasizing the whole concept of anthropics, perhaps even with emphasizing agents, not with where to put the concepts between axiology and decision theory.)
Hmm… interesting point. I’ve briefly skimmed Stuart Armstrong’s paper, and the claim that different moralities end up as different anthropic theories (assuming you care about all of your clones) seems to mistake a cool calculational trick for something with deeper meaning, which does not automatically follow without further justification.
On reflection, what I said above doesn’t perfectly capture my views. I don’t want to draw the boundary so that anything in axiology is automatically not a part of anthropics. Instead, I’m trying to abstract out questions about how desirable other people and states of the world are, so that we can focus on building a decision theory on top of this. On the other hand, I consider axiology relevant insofar as it relates directly to “you”.
For example, in Evil Genie-like situations, you might find out that if you had chosen A instead of B, it would have contradicted your existence, and the task of valuing this seems relevant to anthropics. And I still don’t know precisely where I stand on these problems, but I’m definitely open to the possibility that this is orthogonal to other questions of value. P.S. I’m not even sure at this stage whether Evil Genie problems most naturally fall into anthropics or a separate class of problems.
I also agree that structuring values around instances of agents seems unnatural, but I’d suggest discussing agent-instances rather than maps/worlds.
Yeah, looks like a definitional disagreement.
So do you think logical counterfactuals would solve anthropics given my definition of the scope?
I don’t know your definition.