i’m generally very anti-genocide as well, and i expect the situations where it is the least bad way to implement my values to be rare. nonetheless, there are some situations where it feels like every alternative is worse. for example, imagine an individual (or population of individuals) who strongly desires to be severely tortured, such that both letting them be tortured and letting them go without torture would be highly unethical — both would constitute a form of suffering above a threshold we’d be okay with — and of course, also imagine that that person strongly disvalues being modified to want other things, etc. in this situation, it seems like they simply cannot be instantiated in an ethical manner.
suffering is inefficient and would waste energy
that is true, and it is why i expect such situations to be relatively rare, but they’re not impossible. there are numerous historical instances of human societies running huge amounts of suffering even when it’s not efficient, because there are many nash equilibria stuck in local maxima; and it only takes inventing a superintelligent singleton to crystallize a set of values forever, even if they include suffering.
they may not take others’ soul-structure
there’s an issue here: what does “other” mean? can i sign up to be tortured for 1000 years without the ability to opt back out, or modify my future self such that i’d be unable to conceive of or desire opting out? i don’t think so, because i think that’s an unreasonable amount of control for me to have over my future selves. for shorter spans of time, it’s more reasonable — notably because my timeselves have enough mutual respect to implement each other’s values, to an extent. but a society’s consensus shouldn’t get to decide for all of its individuals (like the baby eaters’ children in https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8), and i don’t think an instant-individual should get to decide arbitrarily much for arbitrarily many of its future selves. there exists a threshold of suffering at which we ought to step in and stop it.
in a sense, your perspective seems to deny the possibility of S-risks — situations that are defined to be worse than death. you seem to think that no situation can occur in which death would be preferable, that continued life is always preferable even if it’s full of suffering. i’m not quite sure you actually believe this, but it seems to be what is entailed by the perspective you present.
any attempt to reduce suffering by ending a life when that life would have continued to try to survive
any? i don’t think so at all! again, i would strongly hope that if a future me is stuck constantly trying to pursue torture, a safe AI would come and terminate that future me rather than let me experience suffering forever just because my mind is stuck in a bad loop or something like that.
but you have no right to impose your hedonic utility function on another agent. claim: preference utilitarianism iterated through coprotection/mutual-aid-and-defence games is how we got morality in the first place.
to be clear, the reason i say “suffering” and not “pain” is to use a relatively high-level/abstracted notion of “things that are bad”. given that my utilitarian preferences are probly not lexicographic, even though my valuing of self-determination is very high, there could be situations where the suffering is bad enough that my wish to terminate suffering overrides my wish to ensure self-determination. ultimately, i’ll probly bite the bullet that i intend to do good, not just do good where i “have the right” to do so — and it happens that my way of doing good is trying to give as much self-determination to as many moral patients as i can (https://carado.moe/∀V.html), but sometimes that’s just not ethically viable.
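(to make “not lexicographic” concrete, here is a toy form, purely illustrative notation on my part rather than a worked-out theory: score a world as $U = a \cdot D - b \cdot S$, where $D$ is how much self-determination its moral patients get, $S$ is how much suffering it contains, and $a$ and $b$ are both finite. a lexicographic version would instead maximize $D$ and only use $S$ to break ties. with finite weights, for any achievable amount of $D$ there is some $S$ large enough that the $b \cdot S$ term dominates, and that is exactly the kind of situation where stopping the suffering overrides preserving self-determination.)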
claim: preference utilitarianism iterated through coprotection/mutual-aid-and-defence games is how we got morality in the first place.
hm. i’m not sure about this, but even if it were the case, i don’t think it would make much of a difference — i want what i want, not whatever historical process caused me to want what i want. but at least in the liberal west it seems that some form of preference utilitarianism is a fairly strong foundation, sure.
in comparison a civilization living as an HEC is, worst case, relatively trivial negentropy waste.
again, this seems to be a crux here. i can think of many shapes of societies whose existence would be horrible, way worse than just “negentropy waste”. just like there can be good worlds where we spend energy on nice things, there can be bad worlds where we spend energy on suffering.
sorry if i appear to repeat myself a bunch in this response; i want to try responding to many of the points you bring up so that we can better locate the core of our disagreement. i want to clarify that i’m not super solid on my ethical beliefs — i’m defending them not just because they’re what i believe to be right, but also because i want to see if they hold up and/or if there are better alternatives. it’s just that “let horrible hellworlds run” / “hellworlds just wouldn’t happen” (the strawman of what your position looks like to me) does not appear to me to be a better alternative.
I hold the belief that it is not possible to instantiate suffering worse than a star. this may be important for understanding how I think about what suffering is—I see it as fundamentally defined by wasted motion. it is not possible to exceed negentropy waste by creating suffering, because that’s already what suffering is. I strongly agree with almost all your points, from the sound of things—I’ve gotten into several discussions the last couple days about this same topic, preserving agency even when that agency wants to maximize control-failure.
in terms of self-consistency, again it sounds like we basically agree—there are situations where one is obligated to intervene to check that the agents in a system are all being respected appropriately by the other agents in that system.
my core claim is: if an agent-shard really values energy waste, well, that’s really foolish of them, but because the proportion of beings who want that can be trusted to be very low, all one need do is ensure the agency of all agent-shards is respected, and suffering-avoidance falls out of it automatically (because suffering is inherently the failure of agent-shards to reach their target, in absolutely all cases).
this seems like a very weird model to me. can you clarify what you mean by “suffering”? whether or not you call it “suffering”, there is way worse stuff than a star. for example, a star’s worth of energy spent running variations of the holocaust is way worse than a star just doing fusion. the holocaust has a lot of suffering; a simple star probly barely has any random moral patients arising and experiencing anything.
here are some examples from me: “suffering” contains things like undergoing depression or torture; “nice things” contains things like “enjoying hugging a friend” or “enjoying having an insight”. both “consume energy” that could’ve not been spent — but isn’t the whole point that we need to defeat moloch in order to have enough slack to have nice things, and also to be sure that we don’t spend our slack printing suffering?
ah crap (approving), you found a serious error in how I’ve been summarizing my thinking on this. Whoops, and thank you!
Hmm. I actually don’t know that I can update my english to a new integrated view that responds to this point without thinking about it for a few days, so I’m going to have to get back to you. I expect to argue that 1. yep, your counterexample holds—some information causes more suffering-for-an-agent if that information is lost into entropy than other information; 2. I still feel comfortable asserting that we are all subagents of the universe, and that stars cannot reasonably be claimed not to be suffering; suffering is, in my view, an amount-of-damage-induced-to-an-agent’s-intent, and stars are necessarily damage induced to agentic intent because they are waste.
again, I do feel comfortable asserting that suffering must be waste and waste must be suffering, but it seems I need to nail down the weighting math and justification a bit better if it is to be useful to others.
Indeed, my current hunch is that I’m looking for the amount of satisfying energy burn vs the amount of frustrating energy burn, and my assertion that stars are necessarily almost entirely frustrating energy burn still seems likely to be defensible after further thought.
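(as a purely illustrative sketch of the weighting I have in mind, not something I’ve nailed down yet: for each unit of energy $dE$ dissipated within an agent’s causal reach, call it satisfying if that dissipation moves the world toward the agent’s target and frustrating otherwise, then score suffering as the total frustrating burn $\int_{\text{frustrating}} dE$, or as the ratio of frustrating burn to total burn. on that form, calling a star almost entirely frustrating burn depends on charging its waste against some agent’s intent, which is part of the justification I still need to nail down.)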
Are you an illusionist about first person experience? Your concept of suffering doesn’t seem to have any experiential qualities to it at all.
no, I consider any group of particles that have any interaction with each other to contain the nonplanning preferences of the laws of physics, and agency can arise any time a group of particles can predict another group of particles and seek to spread their intent into the receiving particles. not quite panpsychist—inert matter does not contain agency. but I do view agency as a continuous value, not a discrete one.