To your first question: Yes. If something has one of two characteristics, but no information that we can (even theoretically) acquire allows us to determine which of those is true, then it is not meaningful to care about which one is true. Dropping to the object-level, it would be contradictory to have a simulation which accepted as input ONLY a set of initial conditions, but developed sentient life that was aware of you.
To your second question: “star systems that have become causally disconnected from our own” are distinguishable from our own. I’ll answer the question “Should we necessarily be indifferent to things which we cannot even theoretically interact with?” as a general case.
Utilitarian: Yes. (It has no effect on us)
Consequentialist: Yes. (We have no effect on them)
Social Contract: Only if we don’t have a deal with them.
Deist: Only if God says so.
Naive: Yes; I can’t know what they are, so I can’t change my decisions based on them.
What theory of ethics or decision has a non-trivial answer?
It seems like we could reasonably have a utility function that assigns more or less value to certain actions depending on things we can’t causally interact with. E.g. a small risk of wiping out all humanity within our future light cone would, I think, be less of a negative if I knew there was a human colony in a causally disconnected region of the universe.
How much less? What’s the asymptote (of the ratio) as the number of human colony ships that have exited the light cone approaches infinity?
ETA: Also, that scenario moved the goalposts again. The question was “Should we consider those hypothetical colonists’ opinions when deciding whether to risk destroying everything we can?”
I don’t have a ratio; it’s more that I attach an additional (fixed) premium to killing off the entire human race, on top of the ordinary level of disutility I assign to killing each individual human.
(nb I’m trying to phrase this in utilitarian terms but I don’t actually consider myself a utilitarian; my true position is more what seems to be described as deontological?)
So you attach some measure of utility to the statement ‘Humanity still exists’, and then attach a probability to humanity existing outside of your light cone based on the information available; if humanity is 99% likely to exist outside of the cone, then the additional disutility of wiping out the last human in your light cone is reduced by 99%?
And the disutility of genocide and mass slaughters short of extinction remains unchanged?
Yeah, that sounds like what I meant.
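As a toy calculation, here is a minimal sketch of that model in Python. The constants, function name, and numbers are purely illustrative assumptions of mine, not anything either of us has actually committed to:

```python
# Toy sketch of the utility model described above (illustrative numbers only).

PER_PERSON_DISUTILITY = 1.0        # disutility assigned to each individual death
EXTINCTION_PREMIUM = 1_000_000.0   # extra fixed disutility if *all* of humanity ends

def expected_disutility(deaths, kills_everyone_in_light_cone, p_humans_outside_cone):
    """Expected disutility of an action.

    deaths: number of individual humans killed
    kills_everyone_in_light_cone: True if no humans remain in our light cone afterward
    p_humans_outside_cone: credence that humans exist outside our light cone
    """
    base = deaths * PER_PERSON_DISUTILITY
    if kills_everyone_in_light_cone:
        # The extinction premium only applies with the probability that
        # nobody survives outside the light cone either.
        base += EXTINCTION_PREMIUM * (1 - p_humans_outside_cone)
    return base

# With 99% credence in an outside colony, the extinction premium shrinks by 99%,
# while the per-person disutility of the same deaths is unchanged.
print(expected_disutility(10**10, True, 0.99))  # premium mostly discounted
print(expected_disutility(10**10, True, 0.0))   # full extinction premium applies
```

On this sketch, mass slaughter short of extinction is scored identically regardless of what exists beyond the light cone; only the fixed extinction premium gets discounted by the credence that humanity survives elsewhere.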