I consider questions of morality or axiology separate from questions of decision theory.
The claim is essentially that the specification of the anthropic principles an agent follows belongs to axiology, not decision theory. That is, the orthogonality thesis applies to this distinction: different agents may follow different anthropic principles in the same way that different stuff-maximizers may maximize different kinds of stuff. Some things discussed under the umbrella of “anthropics” do seem relevant to decision theory, such as being able to function under most anthropic principles, but not, say, the choice between SIA and SSA.
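To keep the SIA/SSA contrast concrete, the standard toy case (as I understand the two principles) goes like this: a fair coin determines whether the world contains one observer ($H_1$) or two ($H_2$), and you update only on the evidence $E$ that you exist. SIA weights each hypothesis by how many observers it contains, while SSA (with all observers as the reference class) does not:

$$P_{\mathrm{SIA}}(H_2 \mid E) = \frac{2 \cdot \tfrac{1}{2}}{1 \cdot \tfrac{1}{2} + 2 \cdot \tfrac{1}{2}} = \tfrac{2}{3}, \qquad P_{\mathrm{SSA}}(H_2 \mid E) = \tfrac{1}{2}.$$

On the claim above, which of these updates an agent performs is part of its axiology, not its decision theory.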
(I somewhat disagree with the claim, as structuring values around instances of agents doesn’t seem natural; maps/worlds are more basic than agents. But that is a disagreement with emphasizing the whole concept of anthropics, perhaps even with emphasizing agents, not with where to put these concepts between axiology and decision theory.)
Hmm… interesting point. I’ve briefly skimmed Stuart Armstrong’s paper, and the claim that different moralities end up as different anthropic theories (assuming that you care about all of your clones) seems to mistake a cool calculation trick for something with deeper meaning, which does not automatically follow without further justification.
On reflection, what I said above doesn’t perfectly capture my views. I don’t want to draw the boundary so that anything in axiology is automatically not part of anthropics. Instead, I’m just trying to abstract out questions about how desirable other people and states of the world are, so that we can focus on building a decision theory on top of this. On the other hand, I consider axiology relevant insofar as it relates directly to “you”.
For example, in Evil Genie-like situations, you might find out that if you had chosen A instead of B, it would have contradicted your existence, and the task of trying to assign value to this seems relevant to anthropics. I still don’t know precisely where I stand on these problems, but I’m definitely open to the possibility that this is orthogonal to other questions of value. PS. I’m not even sure at this stage whether Evil Genie problems most naturally fall under anthropics or form a separate class of problems.
I also agree that structuring values around instances of agents seems unnatural, but I’d suggest discussing agent-instances instead of maps/worlds.