I feel like scope insensitivity is something to worry about here. I’d be really happy to learn that humanity will manage to take good care of our cosmic endowment, but my happiness wouldn’t scale properly with the amount of value at stake if I learned we took good care of a super-cosmic endowment. I think that’s the result of my inability to grasp the quantities involved rather than a true reflection of my extrapolated values, however.
My concern is more that reasoning about entities in simpler universes capable of conducting acausal trades with us will turn out to be totally intractable (as will the other proposed escape methods), but since I’m very uncertain about that, I think it’s definitely worth further investigation. I’m also not convinced Tegmark’s MUH is true in the first place, but this post is making me want to do more reading on the arguments for & against. It looks like there was a Rationally Speaking episode about it?
When you’re faced with numbers like 3^^^3, scope insensitivity is the correct response. A googolplex already exceeds the number of possible configurations of Life as we know it. “Hamlet, but with extra commas in these three places, performed by intelligent starfish” is in there somewhere, in over a googol different varieties. What, then, does 3^^^3 add except more copies of the same?
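For scale: 3^^^3 is Knuth up-arrow notation, and expanding it makes the gap concrete (these are just the standard identities, and the tower height below is exact):

$$
\begin{aligned}
3\uparrow 3 &= 3^3 = 27\\
3\uparrow\uparrow 3 &= 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987\\
3\uparrow\uparrow\uparrow 3 &= 3\uparrow\uparrow(3\uparrow\uparrow 3) = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}
\end{aligned}
$$

A googolplex is $10^{10^{100}}$, which is smaller than a power tower of 10s a mere four levels high; $3\uparrow\uparrow\uparrow 3$ is a tower of 3s over seven trillion levels high. So the gap is itself beyond visualizing, which is exactly why the question is whether all that room buys anything but more copies.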
Nothing, if your definition of a copy is sufficiently general :-)
Am I understanding you right that you believe in something like a computational theory of identity, and that you think there’s some bound on how complex anything we’d attribute moral patienthood or interestingness to can get? I agree with the former, but don’t see much reason to believe the latter.
I have no idea if there is such a bound. I will never have any idea if there is such a bound, and I suspect that neither will any entity in this universe. Given that fact, I’d rather make the assumption that doesn’t turn me stupid when Pascal’s Wager comes up.