This thought experiment really gets to a core disagreement I have with this form of ethics. I can’t really formulate a justification for my view, but I have a reaction that I’ll call “cosmic libertarianism”. It seems to me that the only logical way in which this HEC civilization can come to exist is as some sort of defense against an adversary, and that the civ is essentially turtling. (They might be defending against those who are quick to exterminate civilizations that don’t meet some standards of pleasure/pain balance.)
It also seems to me that if civilizations or the beings within them have any fundamental rights, they should have the right to go about their business. (The only exception would be a state of war.) If we were able to communicate with the HEC aliens, then we could get their consent to do… whatever. But otherwise they should be left alone.
i tend to be a fan of “cosmic libertarianism” (see my attempt at something like that). it’s just that, as i explain in an answer i’ve given to another comment, there’s a big difference between trading a lot of suffering for self-determination and trading arbitrarily much suffering for self-determination. i’m not willing to do the latter: there do seem to be potential amounts of suffering so bad that overriding self-determination is worth it.
while i hold this even for individuals, holding it for societies is way easier: a society that oppresses some of its people without their consent seems like a clear case for overriding the “society’s overall self-determination” for the sake of individual rights. this can be extended to overriding an individual’s self-determination over themself, for example by saying that they can’t commit their future selves to undergoing arbitrarily much suffering for arbitrarily long.
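one way to put the “a lot vs. arbitrarily much” point in symbols (a rough sketch in my own notation, nothing rigorous): write $S$ for the suffering at stake and $V$ for the value of a society’s (or person’s) self-determination.

$$\text{override self-determination} \iff S > V, \qquad \text{with } V \text{ large but finite}$$

treating self-determination as something that can never be overridden amounts to setting $V = \infty$, which is exactly the trade i’m not willing to make.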
Do you believe that the pleasure/pain balance is an invalid reason for violently intervening in an alien civilization’s affairs? Is this true in principle, or is it simply the case that such interventions will make the world worse off in the long run?
I would take it on a case-by-case basis. If we know for sure that an alien civilization is creating an enormous amount of suffering for no good reason (e.g., for sadistic pleasure), then intervening is warranted. But we should acknowledge that this is equivalent to declaring war on the civ, even if the state of war lasts only a short time (due to a massive power differential). We should not go to war if there is a possibility of negotiation.
Consider the following thought experiment. It’s the far future, and physics has settled on a consensus that black holes contain baby universes and that our universe is itself inside a black hole in a larger universe, which we’ll call the superverse. We also have the technology to destroy black holes. Some people argue that the black holes in our universe contain universes with massive amounts of suffering. We cannot know for sure what the pleasure/pain balance is in these baby universes, but we can guess, and many have come to the conclusion that a typical universe has massively more pain than pleasure. Therefore, the argument goes, we should destroy any and all black holes and their baby universes to prevent suffering. (To simplify the moral calculus, we’ll assume that destroying black holes doesn’t give us valuable matter and energy. The thought experiment gets a lot more interesting if we relax this assumption, but the principles remain the same.)
The problem is that this moral system leaves no room to live. It’s an argument for the extinction of all life (except for life that is provably net-positive). The aliens that live in the superverse could just as well kill us, since they have no way of knowing what the pleasure/pain balance is here in our universe. And I’m not just making an argument from acausal trade with the superverse: I think it is wrong in principle to destroy life on the unprovable assumption that most life is net-negative. I also don’t think that pleasure and pain alone should be the moral calculus. In my view, all life has a fundamental beauty, and that beauty should not be snuffed out in pursuit of more hedons.
My ethics are pragmatic: my view is shaped by the observation that utilitarianism seems obviously unworkable in the context of AI alignment. I don’t think alignment is solvable if we insist on building strong reinforcement-learning-style agents and then try to teach them utilitarianism. I think we need non-utilitarian agents that are corrigible and perhaps have a concept of fundamental rights. What this looks like in practice: the robot doesn’t kill the suffering human, because the suffering human states that she wants to live, and the robot is programmed to prioritize her right to life (euthanasia only with her consent) over some terminal goal of snuffing out pain. AI must be aligned to this set of values in order for humans to survive.
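To make that concrete, here is a minimal sketch of the difference between treating rights as hard constraints that dominate a terminal goal and folding everything into a single utility score. It’s Python with hypothetical names, an illustration of the decision structure rather than a claim about how a real system would be built:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_pain_removed: float  # score under the terminal goal
    violates_rights: bool         # e.g., kills someone who has not consented

def choose_action(actions: list[Action]) -> Action:
    # Rights act as hard constraints, filtered out *before* the terminal goal
    # is consulted, rather than as a large negative term traded off against hedons.
    permissible = [a for a in actions if not a.violates_rights]
    # Only among permissible actions do we optimize the terminal goal.
    return max(permissible, key=lambda a: a.expected_pain_removed)

# The euthanasia case above: the patient has said she wants to live, so
# euthanizing her violates her rights and is excluded outright, no matter how
# much pain it would remove.
options = [
    Action("euthanize_without_consent", 100.0, violates_rights=True),
    Action("provide_palliative_care", 30.0, violates_rights=False),
    Action("do_nothing", 0.0, violates_rights=False),
]
print(choose_action(options).name)  # -> provide_palliative_care
```

The point of that lexicographic structure is that no amount of expected pain reduction can buy a rights violation, which is the property the corrigible, rights-respecting agent described above would need.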
i could see myself biting the bullet that we should probly extinguish black holes whose contents we can’t otherwise ensure the ethicality of. not based on pain/pleasure alone, but based on whatever it is that my general high-level notions of “suffering” and “self-determination” and whatever else actually mean.
To be honest, I just think that it’s insane and dangerous to not have incredibly high standards here. We are talking about genocide.