The first time you said “objectively” you asked if I could objectively determine the boundary between happy life and torture, and now in this post you’re talking about objective/subjective permissibility.
In the first case, limits on how precisely you can tell the happy life/torture boundary come from uncertainty about the physical details of the possible future, and from vagueness in the definitions of “happy” and “torture.” It’s not that the question “When is that time?” contains a hidden reference to some feature of an external person (such as their utility function, or their taste in food). So I’m not sure what could be subjective about the first case.
As for whether it’s objectively permissible: A) I don’t believe in objective morality, because it runs afoul of Occam’s razor (it probably takes a lot of bits to specify what deserves to be a potential target of moral concern. A LOT of bits); and B) even if moral realism were correct, I wouldn’t give a damn (Felicifia doesn’t link to individual comments, so the best I can give is a link to the thread, but see my first comment).
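To make the bit-counting point concrete (this is my framing, not anything stated in the thread): under a simplicity prior, each extra bit a hypothesis needs halves its prior weight, so a moral theory requiring “a LOT of bits” to specify starts out with a vanishingly small prior.

```python
# Toy illustration of an Occam/simplicity prior (hypothetical framing):
# a hypothesis that takes n bits to specify gets prior weight 2**-n,
# so every additional required bit halves its plausibility up front.
def prior_weight(description_length_bits):
    """Unnormalized simplicity-prior weight for a hypothesis of the given length."""
    return 2.0 ** -description_length_bits

short_hypothesis = prior_weight(10)     # modest penalty
long_hypothesis = prior_weight(1000)    # astronomically small weight
```

Under these made-up lengths, a 1000-bit specification is penalized by a factor of 2^990 relative to a 10-bit one, which is the sense in which a very complicated objective morality “runs afoul of Occam’s razor.”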
If you can’t provide an upper bound on how long each pony will enjoy life before it becomes too entropic, then you can’t prove that it will become too entropic to enjoy for every pony in finite time.
The part where the scale factor goes to infinity is such an upper bound. Also, even if I didn’t have an upper bound, a probability distribution is all I need to make decisions. You can’t always have “proof.”
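A minimal sketch of the “a distribution is all I need” point (all numbers below are made up for illustration): you pick the action with the higher expected utility over the distribution of outcomes; no proof of an upper bound is required.

```python
# Hypothetical sketch: deciding from a probability distribution alone.
# Distribution over how many years life stays enjoyable (illustrative numbers).
outcomes = {10: 0.2, 100: 0.5, 1000: 0.3}  # years -> probability

utility_per_year = 1.0
eu_live = sum(p * years * utility_per_year for years, p in outcomes.items())
eu_die_now = 0.0  # baseline: forgo all future enjoyable years

decision = "live" if eu_live > eu_die_now else "die"
```

No certainty about which outcome obtains, and no provable upper bound, is needed; the comparison of expected utilities already determines the choice.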
If your probability distribution differs from mine, is it permissible for me to condemn you to death?
Only if I would choose death given your probability distribution (or the only reason I wouldn’t is something like time discounting or not really imagining how bad it would be), and your probability distribution is more correct than mine.
So, if you discount time differently from me in a specific manner, it becomes mandatory for me (given the chance) to either condemn you to death or condemn you to eternal torture, and vice versa?
If a very long life followed by eternal torture is good for me but bad for you, I must condemn you to the torture and you must condemn me to death, rather than letting each of us decide for ourselves with full knowledge?
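The time-discounting point above can be made concrete with a toy calculation (the discount factors and durations are hypothetical, not from the discussion): exponential discounting makes even an infinite torture tail finite in value, so agents with different discount factors can evaluate the same “long happy life, then eternal torture” prospect with opposite signs.

```python
# Toy illustration (hypothetical numbers): exponential time discounting can
# flip the sign of "a very long happy life followed by eternal torture."
def discounted_value(gamma, n_happy, horizon=100_000):
    # +1 utility per period while happy, -1 per period thereafter;
    # the geometric discount gamma**t makes the infinite tail summable,
    # so a long horizon approximates the infinite sum well.
    return sum((gamma ** t) * (1.0 if t < n_happy else -1.0)
               for t in range(horizon))

patient = discounted_value(0.999, 100)   # weighs the far-future torture heavily
impatient = discounted_value(0.99, 100)  # discounts the torture tail away
```

Under these made-up parameters the less patient agent values the prospect positively while the more patient one values it negatively, which is exactly the kind of disagreement the question is probing.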
That’s exactly the opposite of what should happen; I should provide for you the option that you prefer to have.