Do you believe that the pleasure/pain balance is an invalid reason for violently intervening in an alien civilization’s affairs? Is this true in principle, or is it simply that such interventions will make the world worse off in the long run?
I would take it on a case-by-case basis. If we know for sure that an alien civilization is creating an enormous amount of suffering for no good reason (e.g., for sadistic pleasure), then intervening is warranted. But we should acknowledge that this is equivalent to declaring war on the civilization, even if the state of war lasts only a short time (due to a massive power differential). We should not go to war if there is any possibility of negotiation.
Consider the following thought experiment. It’s the far future and physics has settled on a consensus that black holes contain baby universes and that our universe is inside a black hole in a larger universe, which we’ll call the superverse. Also, we have the technology to destroy black holes. Some people argue that the black holes in our universe contain universes with massive amounts of suffering. We cannot know for sure what the pleasure/pain balance is in these baby universes, but we can guess, and many have come to the conclusion that a typical universe has massively more pain than pleasure. So we should destroy any and all black holes and their baby universes, to prevent suffering. (To simplify the moral calculus, we’ll assume that destroying black holes doesn’t give us valuable matter and energy. The thought experiment gets a lot more interesting if we relax this assumption, but the principles remain the same.)
The problem here is that there is no room to live in this moral system. It’s an argument for the extinction of all life (except for life that is provably net-positive). The aliens living in the superverse could just as well kill us, since they have no way of knowing what the pleasure/pain balance is here in our universe. And I’m not just making an argument from acausal trade with the superverse: I think it is wrong in principle to destroy life on the unprovable assumption that most life is net-negative. I also don’t think that pleasure and pain alone should make up the moral calculus. In my view, all life has a fundamental beauty, and that beauty should not be snuffed out in pursuit of more hedons.
My ethics are pragmatic: my view is shaped by the observation that utilitarianism seems obviously unworkable in the context of AI alignment. I don’t think alignment is solvable if we insist on building strong reinforcement-learning-style agents and then try to teach them utilitarianism. I think we need non-utilitarian agents that are corrigible and perhaps have a concept of fundamental rights. What this looks like is: the robot doesn’t kill the suffering human, because the suffering human states that she wants to live, and the robot is programmed to prioritize her right to life (or to consent to euthanasia) over some terminal goal of snuffing out pain. AI must be aligned to this set of values in order for humans to survive.
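To make that concrete, here is a minimal toy sketch (my own illustration in Python, not an actual alignment proposal): rights are treated as a hard, lexically prior filter on the action set, and only the actions that survive that filter get compared on a welfare estimate. The names and numbers are invented for the example.

```python
# Toy sketch: rights as hard constraints that filter actions *before* any
# welfare calculation, so no expected "hedon" total can justify overriding
# someone's stated preference.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_welfare: float   # naive pleasure/pain estimate
    violates_consent: bool    # does it override a person's stated choice?

def choose(actions: list[Action]) -> Action:
    # Step 1: rights act as a lexical filter -- consent violations are
    # never candidates, regardless of their welfare score.
    permissible = [a for a in actions if not a.violates_consent]
    if not permissible:
        raise RuntimeError("no permissible action; defer to human oversight")
    # Step 2: only among permissible actions does welfare break ties.
    return max(permissible, key=lambda a: a.expected_welfare)

# The suffering patient says she wants to live: euthanizing her without
# consent is filtered out even though its naive welfare score is higher.
options = [
    Action("euthanize without consent", expected_welfare=+10.0, violates_consent=True),
    Action("provide palliative care", expected_welfare=-2.0, violates_consent=False),
]
print(choose(options).name)  # -> "provide palliative care"
```

The point of the sketch is the ordering: the rights check runs before, not alongside, the welfare comparison, which is what keeps a terminal goal of snuffing out pain from ever overriding a stated preference to live.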
i could see myself biting the bullet that we should probably extinguish black holes whose contents we can’t otherwise ensure the ethicality of. not based on pain/pleasure alone, but based on whatever it is that my general high-level notions of “suffering” and “self-determination” and whatever else actually mean.
To be honest, I just think that it’s insane and dangerous to not have incredibly high standards here. We are talking about genocide.