Of course, there isn’t the aspect of “something directly values the pain of others”, but we are willing to hurt puppies if this helps human interests.
I mean, completely different thing though, right? Here with “puppies” I was mostly jokingly thinking of actual doggies (which in our culture we wouldn’t dream of eating, unlike calves, piglets and chicks, which are cute and all but apparently fine to eat for some reason, or rabbits, which are either pets or food depending on our eldritch whims).

But I’d say there’s nothing weird about our actions if you consider that we simply don’t treat animals as actual moral subjects. We do with them as it pleases us. Sometimes it pleases us to care for and protect them, because the gods operate in mysterious ways. But nothing we do, not even animal abuse laws and all that stuff, actually amounts to making them full moral subjects, because it’s all still inconsistent as hell and always comes second to human welfare.

I’ve always found some of the ethical rules about e.g. mouse treatment in medical research really quite stupid: pretty much, if anything may be causing them suffering, you’re supposed to euthanise them. But we haven’t exactly asked the mice whether they prefer death to a bit of hardship. It’s just our own self-righteous attitude, taken on their behalf, and nothing like what we would do with humans. If a potentially aggressive animal is close to a human and might endanger their life, it gets put down, no matter how tiny the actual risk. A fraction of a human life in expectation always outweighs one or more animal lives.

To make animals true moral subjects by extending the circle of concern to them would make our lives a whole lot harder, and thus, conveniently, we don’t actually do it. Not even the most extreme vegans fully commit to it, that I know of. If you’ve seen “The Good Place”, Doug Forcett is what a human who treats animals kind of like moral subjects looks like, and he holds a funeral, wracked by guilt, for having accidentally stomped on a snail.
Thus, anyone who creates entities with this property is probably actually optimizing for something else.
True, but what do we do with the now created utility monster? Euthanise it mercifully because it’s one against many? Doing so already implies that we measure utility by something other than the total sum.
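Just to make concrete what “something other than the total sum” could mean, here’s a toy sketch with made-up numbers (the particular alternative rule, a concave “prioritarian” weighting, is just one illustrative option, not anyone’s settled position):

```python
import math

# Toy numbers, purely for illustration: a "utility monster" converts
# resources into welfare far more efficiently than ordinary people do.

# Outcome A: give all resources to the monster.
outcome_a = {"monster": 1000.0, "alice": 0.0, "bob": 0.0, "carol": 0.0}
# Outcome B: spread the resources around instead.
outcome_b = {"monster": 10.0, "alice": 10.0, "bob": 10.0, "carol": 10.0}

def total_sum(utilities):
    # Classical total utilitarianism: only the sum matters.
    return sum(utilities.values())

def prioritarian(utilities):
    # One "something other than the total sum": a concave moral weighting,
    # so welfare concentrated in a single individual counts for less than
    # the same amount spread across many.
    return sum(math.log1p(u) for u in utilities.values())

print(total_sum(outcome_a), total_sum(outcome_b))          # 1000.0 vs 40.0  -> monster wins
print(prioritarian(outcome_a), prioritarian(outcome_b))    # ~6.91 vs ~9.59  -> spreading wins
```

Nothing here settles which aggregation is right; it just shows that the verdict on the monster flips with the rule.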
my impression is that people mostly accept this in everyday examples
I don’t know. In a way, the stereotypical “Karen” is a utility monster: someone who weaponizes their grievance, claiming disproportionate suffering over a relatively trivial inconvenience to guilt-trip others into moving mountains for their sake. And people don’t generally look too kindly on that. If anything, do it long enough and many people will try to piss you off on purpose, just out of spite.
I do tend to round things off to utilitarianism, it seems.
Your point about the distinct categories for puppies versus other animals is a good one. With that categorical distinction in place, our other actions aren’t really utilitarian trade-offs any more. But there are animals, like guinea pigs, which fall into multiple categories.
what do we do with the now created utility monster?
I have trouble seriously imagining a utility monster which is actually net-positive from a total-utility standpoint. In the hypothetical with the scientist, I would tend towards not letting the monster do harm, just to remove incentives for dangerous research.
For the more general case, I would search for some excuse for why I can be a good utilitarian while stopping the monster, and hope that I actually find a convincing argument.
Maybe I think that most good in the world needs strong cooperation, which is undermined by the existence of utility monsters.
In a way, the stereotypical “Karen” is a utility monster
One complication here is that I would expect the stereotypical Karen to be mostly role-playing, such that it would not actually be positive utility to follow her whims.
But then, there could still be a stereotypical Caren who actually has very strong emotions/qualia/the-thing-that-matters-for-utility. I have no idea how this would play out, or how people could even become convinced that she is Caren and not Karen.
For the more general case, I would search for some excuse for why I can be a good utilitarian while stopping the monster, and hope that I actually find a convincing argument. Maybe I think that most good in the world needs strong cooperation, which is undermined by the existence of utility monsters.
I mean, if it’s about looking for post-hoc rationalizations, what’s even the point of pretending there’s a consistent ethical system? Might as well go “fuck the utility monster” and blast it to hell with no further justification than sheer human chauvinism. In fact, I think we need a bit of that in the face of AI issues: some of the most extreme e/acc people do seem to think that an ASI would be such a utility monster and “deserves” to take over for… reasons, reasons I personally don’t give a toss about.
But then, there could still be a stereotypical Caren who actually has very strong emotions/qualia/the-thing-that-matters-for-utility.
Having no access to the internal experiences of anyone else, how do we even tell? With humans, we assume we can know because we assume they’re kinda like us. And we’re probably often wrong on that too! People seem to experience pain, for example, both physical and mental, on very different scales, both in terms of expression and of how it actually affects their functioning. Does this mean some people feel the same pain but can power through it, or does it mean they feel objectively less? Does the question even make sense? If you start involving non-human entities, we have essentially zero reference frame to judge. Outward behavior is all we have, and it’s not a lot to go by.
I mean, if it’s about looking for post-hoc rationalizations, what’s even the point of pretending there’s a consistent ethical system?
Hmm, I would not describe it as rationalization in the motivated reasoning sense.
My model of this process is that my ethical intuitions are mostly a black box and often contradictory, but still, in the end, contain a lot more information about what I deem good than any of the explicit reasoning I am capable of. If, however, I find an explicit model which manages to explain my intuitions sufficiently well, I am willing to update or override them.
I would in the end accept an argument that goes against some of my intuitions if it is strong enough. But I will also strive to find a theory which manages to combine all the intuitions into a functioning whole.
In this case, I have an intuition towards negative utilitarianism, which really dislikes utility monsters, but I also have noticed the tendency that I land closer to symmetric utilitarianism when I use explicit reasoning. Due to this, the likely options are that after further reflection I
1. would be convinced that utility monsters are fine, actually;
2. would come to believe that there are strong utilitarian arguments to have a policy against utility monsters, such that in practice they would almost always be bad;
3. would shift in some other direction;
and my intuition for negative utilitarianism would prefer cases 2 or 3.
So the above description is what was going on in my mind, and that, combined with the always-present possibility that I am bullshitting myself, led to the formulation I used :)
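And to make the negative-vs-symmetric difference concrete as it bears on utility monsters, a toy sketch with made-up numbers (the suffering weight of 100 is an arbitrary illustration, not a claim about the right exchange rate):

```python
# Made-up numbers, only to make the distinction concrete: feeding the
# monster gives it +1000 welfare while imposing -10 on each of three
# bystanders.

feed      = {"monster": 1000.0, "alice": -10.0, "bob": -10.0, "carol": -10.0}
dont_feed = {"monster": 0.0, "alice": 0.0, "bob": 0.0, "carol": 0.0}

def symmetric_total(utilities):
    # Symmetric (classical) utilitarianism: pleasure and suffering trade
    # off one-for-one; only the sum matters.
    return sum(utilities.values())

def suffering_focused(utilities, suffering_weight=100.0):
    # A negative-utilitarian-leaning rule: suffering counts far more than
    # positive welfare (the weight of 100 is an arbitrary choice here).
    return sum(u if u >= 0 else suffering_weight * u for u in utilities.values())

print(symmetric_total(feed), symmetric_total(dont_feed))      # 970.0 vs 0.0    -> feed it
print(suffering_focused(feed), suffering_focused(dont_feed))  # -2000.0 vs 0.0  -> don't
```

Which is roughly why the negative-leaning intuition and the symmetric explicit reasoning pull in different directions on the monster.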