(Assuming you’ve read my other response to this comment):
I think it might help if I give a more general explanation of how my moral system can be used to determine what to do. This is mostly taken from the article, but it’s important enough that I think it should be restated.
Suppose you’re considering taking some action that would benefit our world or our future light-cone. You want to see what my ethical system recommends.
Well, for almost all of the possible circumstances an agent in this universe could end up in, I think your action would have effectively no causal or acausal effect on the agents in those circumstances. There’s nothing you can do about them, so don’t worry about them in your moral deliberation.
Instead, consider agents of the form, “some agent in an Earth-like world (or in the future light-cone of one) with someone just like <insert detailed description of yourself and circumstances>”. These are agents you can potentially (acausally) affect. If you take an action to make the world a better place, that means the other people in the universe who are very similar to you and in very similar circumstances would also take that action.
So if you take that action, you’d improve the world, and the expected life satisfaction of an agent in the above circumstances would be higher. Such circumstances are of finite complexity and not ruled out by evidence, so the probability of an agent ending up in such a situation, conditioning only on being in this universe, is non-zero. Thus, taking that action would increase the moral value of the universe, and my ethical system would be liable to recommend it.
To see it another way, moral deliberation with my ethical system works as follows:
I’m trying to make the universe a better place. Most agents are in situations in which I can’t do anything to affect them, whether causally or acausally. But there are some agents in situations that I can (acausally) affect. So I’m going to focus on making the universe as satisfying as possible for those agents, using some impartial weighting over those possible circumstances.
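To make this concrete, here’s a rough sketch of how the reasoning could be formalized, under the simplifying assumption that there are countably many (finite-complexity) circumstances; the notation here (P, S, V) is just for this comment and isn’t spelled out this way in the article:

```latex
% Sketch only: P is a prior over circumstances C, conditioned only on
% being in this universe; S_a(C) is the expected life satisfaction of an
% agent in C given that agents relevantly like you take action a; V(a)
% is the moral value of the universe if you take action a.
V(a) \;=\; \mathbb{E}_{C \sim P}\bigl[S_a(C)\bigr] \;=\; \sum_{C} P(C)\,S_a(C)

% Let C^* be the "Earth-like world (or its future light-cone) containing
% someone just like you" circumstance. It has a finite description and is
% not ruled out by evidence, so P(C^*) > 0. If action a raises expected
% satisfaction in C^* relative to a', and (for simplicity) leaves the
% other circumstances unchanged, then
V(a) - V(a') \;=\; P(C^*)\,\bigl(S_a(C^*) - S_{a'}(C^*)\bigr) \;>\; 0
```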
Your comments are focusing on (so to speak) the decision-theoretic portion of your theory, the bit that would be different if you were using CDT or EDT rather than something FDT-like. That isn’t the part I’m whingeing about :-). (There surely are difficulties in formalizing any sort of FDT, but they are not my concern; I don’t think they have much to do with infinite ethics as such.)
My whingeing is about the part of your theory that seems specifically relevant to questions of infinite ethics, the part where you attempt to average over all experience-subjects. I think that one way or another this part runs into the usual average-of-things-that-don’t-have-an-average sort of problem which afflicts other attempts at infinite ethics.
As I describe in another comment, the approach I think you’re taking can move where that problem arises but not (so far as I can currently see) make it actually go away.
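For concreteness, the standard sort of example I have in mind (a toy illustration, not anything specific to your system): take countably many experience-subjects whose satisfactions are +1, −1, +1, −1, …; the limiting average depends entirely on the order in which you enumerate them, so there is no order-independent average to take.

```latex
% Toy illustration: the same countable collection of satisfactions has
% different limiting averages under different enumerations, so no
% enumeration-independent average exists.
\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N} s_n \;=\; 0
\qquad \text{(ordering } +1,-1,+1,-1,\dots\text{)}

\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N} s'_n \;=\; \tfrac{1}{3}
\qquad \text{(reordering } +1,+1,-1,+1,+1,-1,\dots\text{)}
```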