Given the expansion → torture effect, can you objectively determine when happy life stops and when endless torture begins, or is it possible that for any time C, there could be an alicorn who is not being tortured?
You say “objectively,” but unless you confirm that’s what you meant, I’ll assume you meant “with sufficient evidence to justify,” because I think that’s what you actually meant.
There is some uncertainty as to when happiness would end and pain would begin (we’re dealing with a fictional universe with unknown (and probably undecided) physical laws, for Adun’s sake), but the Wikipedia article on the Big Rip says that if it is indeed the fate of the universe, space will stretch infinitely much in a finite amount of time. If that’s the case, then it kind of makes no sense for an alicorn to survive; it’s like an unstoppable force meeting an immovable object. But if alicorns are alive in the moments leading up to that point, the stretching on them will grow (and quickly) to whatever level it needs to reach to be torturous. And if they are somehow alive after that point (I’m not sure what that would even mean), “the torture gets less then” sounds like the least sensible option.
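For concreteness, the standard phantom-energy treatment (which I believe is what that Wikipedia article summarizes) has the scale factor going roughly like (t_rip − t) raised to a negative power, so it diverges at a finite time t_rip. Here’s a minimal sketch of that divergence; the value of w and the units are made-up placeholders, not taken from canon or from anyone’s actual model of Equestria:

```python
# Rough sketch of the Big Rip scale factor for phantom dark energy (w < -1):
#   a(t) ~ (t_rip - t) ** (2 / (3 * (1 + w)))
# The exponent is negative, so a(t) blows up as t approaches t_rip.
# All numbers below are illustrative placeholders.

t_rip = 100.0                  # hypothetical rip time, arbitrary units
w = -1.5                       # hypothetical phantom equation-of-state parameter
exponent = 2 / (3 * (1 + w))   # = -4/3 here

for t in [50.0, 90.0, 99.0, 99.9, 99.999]:
    a = (t_rip - t) ** exponent
    print(f"t = {t:8.3f}   scale factor ~ {a:.3e}")

# The output grows without bound as t -> t_rip: "infinite stretching
# in a finite amount of time."
```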
So maybe you can’t say exactly when every alicorn is getting tortured, but given that the Big Rip happens, every possibility is bad. And even if a few alicorns weren’t tortured (while most were), unless back in Equestria billions of years earlier you could foresee which ones they were going to be and make only those ones immortal, it’s better to kill everyone (or everyone except Luna, Celestia, Cadence, and Twilight, who are fortunately all female and perhaps unable to continue ponykind (unless magic works like it does in a certain fanfic I read...)) than to let that happen.
I meant objectively, unless you are limiting your response to whether you personally would create alicorns; I asked if it should be permissible for alicorns to be made.
Suppose the physical laws are not well enough determined to be sure whether the Big Rip happens, or there are promising developments in magic that have some chance of preventing it, or there is some other reason why it is unclear whether the happy fun time will be finite or not. How large a chance of “finite good time followed by eternity of torture” is acceptable, if the remainder of the probability space is “infinite good time”? I think multiplication does no good here.
Multiplication does plenty of good. Pretend that instead of an infinite time, it’s a finite time X after everything else. Make the decision you would make in the limit as X → infinity.
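To make that concrete, here’s a minimal sketch of the “decide in the limit” move; every number in it (the probability of the rip, the utility rates, the horizon values) is a placeholder I made up, not anyone’s actual estimate:

```python
# Hypothetical sketch: cap the torture at X years, compare the options'
# expected utilities as X grows, and take the decision that wins for all
# sufficiently large X ("the limit as X -> infinity").
# All numbers are made-up placeholders.

def eu_create_alicorns(X, p_rip=0.5, happy_years=1e10,
                       happy_rate=1.0, torture_rate=-100.0):
    """Expected utility of creating alicorns if the torture lasts X years."""
    return happy_years * happy_rate + p_rip * X * torture_rate

def eu_dont_create(X):
    """Expected utility of not creating them (normalized to zero)."""
    return 0.0

for X in [1e6, 1e9, 1e12, 1e15]:
    choice = "create" if eu_create_alicorns(X) > eu_dont_create(X) else "don't create"
    print(f"X = {X:.0e} years of torture -> prefer: {choice}")

# Once one option wins for every sufficiently large X, that's the decision
# you'd make in the limit.
```

With these particular placeholders, the answer flips from “create” to “don’t create” somewhere between a million and a billion years of torture and never flips back, which is the whole point of taking the limit.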
As for tradeoff rates between torture and happiness, it depends to some extent on the individual. There are cases where I would let someone who wanted to risk torture go on living (because I thought that was the right thing to do, not because of intuitive deontological constraints), even though I myself would commit suicide in their place. I just wouldn’t let people risk torture because of time discounting, or because they refused to imagine how bad it would be.
As for my own tradeoff rate, it’s not something I can report without deep introspection, Fermi estimates of how bad torture could be and how good “happy fun time” could be, and binary searching with thought experiments to find which I would prefer, which I don’t have time for now.
There is an epsilon chance of “infinite bad time” and a ~100% chance of finite benefit if you make choice A. If you instead make choice B, there is an 80% chance of “infinite bad time” and a 20% chance of 10x the finite benefit.
Clearly, you should take course B, because it’s -infinity + 2 instead of -infinity + 1.
EDIT: I was originally referring to multiplying a finite probability by an infinite negative.
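For what it’s worth, if you literally run that multiplication with an infinite negative term, both choices collapse to the same -infinity and the arithmetic can’t distinguish them at all. A tiny sketch, using the A/B numbers from above (epsilon here is an arbitrary small placeholder):

```python
# Naive expected utility with an infinite negative term, using the A/B
# example above. The value of eps is an arbitrary small placeholder.
NEG_INF = float("-inf")
eps = 1e-9

eu_A = eps * NEG_INF + (1 - eps) * 1    # ~100% chance of finite benefit 1
eu_B = 0.8 * NEG_INF + 0.2 * 10         # 80% chance of -inf, 20% chance of 10

print(eu_A, eu_B)        # -inf -inf
print(eu_A == eu_B)      # True: the arithmetic ranks A and B as equal,
                         # even though A is obviously the saner choice.
```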
If this seems counterintuitive, it’s because you can’t really just go ahead and imagine something being 10x better. It’s not because of the infinities involved. Substitute Graham’s number of years for “infinite” and it’s basically the same.
I know there are weird paradoxes involving infinite value, but none of them seem to be showing up in this problem, and I’ve got to decide somehow, so I might as well multiply. It’s not like I have a better backup algorithm for when multiplication fails.
You can multiply by Graham’s number and get a meaningful result. Try finding the expected return of four possible distributions: one in which you have an epsilon chance of infinite negative utility and a 1-epsilon chance of doubleplusgood utility; one in which you have a 95% chance of -infinity and a 5% chance of plusgood; one in which you have an epsilon chance of Graham’s-number negative utility and a 1-epsilon chance of doubleplusgood utility; and one in which you have a 100% chance of doubleplusungood utility.
Consider the case where epsilon is 1/BB(Graham’s number).
The first has an expected utility of -infinity, and the second has the same value, but the third has an expected value of roughly doubleplusgood, despite having outcomes that are, for all practical purposes, indistinguishable from the first one’s.
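Here’s a minimal numeric sketch of those four distributions; the utility numbers are placeholder stand-ins (doubleplusgood = +2, plusgood = +1, doubleplusungood = -2), a large float stands in for Graham’s number, and a tiny float stands in for 1/BB(Graham’s number):

```python
# Sketch of the four distributions above. All concrete numbers are
# placeholders: GRAHAMISH stands in for Graham's number and eps stands in
# for 1/BB(Graham's number), i.e. an unimaginably small probability.
NEG_INF = float("-inf")
GRAHAMISH = 1e250        # huge but finite
eps = 1e-300             # tiny enough that eps * GRAHAMISH is negligible

def expected(outcomes):
    """Expected utility of a list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

dist1 = [(eps, NEG_INF), (1 - eps, 2)]      # epsilon chance of -infinity
dist2 = [(0.95, NEG_INF), (0.05, 1)]        # 95% chance of -infinity
dist3 = [(eps, -GRAHAMISH), (1 - eps, 2)]   # epsilon chance of -Graham-ish
dist4 = [(1.0, -2)]                         # certain doubleplusungood

for name, dist in [("dist1", dist1), ("dist2", dist2),
                   ("dist3", dist3), ("dist4", dist4)]:
    print(name, expected(dist))

# dist1 and dist2 both come out -inf; dist3 comes out ~2 ("roughly
# doubleplusgood") because eps * GRAHAMISH is only ~1e-50; dist4 comes out -2.
```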
There’s more in your post to respond to; I will later. For now I need to get some work done today.
The first time you said “objectively” you asked if I could objectively determine the boundary between happy life and torture, and now in this post you’re talking about objective/subjective permissibility.
In the first case, limits on how precisely you can tell the happy life/torture boundary come from uncertainty about the physical details of the possible future and from vagueness in the definitions of “happy” and “torture.” It’s not that the question “When is that time?” contains a hidden reference to some feature of an external person (such as their utility function, or their taste in food). So I’m not sure what could be subjective about the first case.
As for whether it’s objectively permissible: A: I don’t believe in objective morality, because it runs afoul of Occam’s razor (it probably takes a lot of bits to specify what deserves to be a potential target of moral concern. A LOT of bits). And B: even if moral realism were correct, I wouldn’t give a damn (Felicifia doesn’t link to individual comments, so the best I can give is a link to the thread, but see my first comment).
If you can’t provide an upper bound of how long each pony will enjoy life before it becomes too entropic, then you can’t prove that it will become too entropic to enjoy for every pony in finite time.
The point where the scale factor goes to infinity is such an upper bound. Also, even if I didn’t have an upper bound, a probability distribution is all I need to make decisions. You can’t always have “proof.”
If your probability distribution differs from mine, is it permissible for me to condemn you to death?
Only if I would choose death given your probability distribution (or the only reason I wouldn’t is something like time discounting or not really imagining how bad it would be), and your probability distribution is more correct than mine.
So, if you discount time differently from me in a specific manner, it becomes mandatory for me (given the chance) to either condemn you to death or condemn you to eternal torture, and vice versa?
If a very long life followed by eternal torture is good for me but bad for you, I must condemn you to the torture and you must condemn me to death, rather than letting each of us decide for ourselves with full knowledge?
That’s exactly the opposite of what should happen; I should provide for you the option that you prefer to have.