Yes, I’m using “natural lifespan” here as a placeholder for “the typical lifespan assuming nothing is actively trying to kill you.” It’s not great language, but I don’t think it’s obviously tautological.
The shark’s “natural” lifespan requires that it eat other creatures. Those creatures’ “natural” lifespan requires that it does not.
Yes. My question is whether that’s a system that works for us.
We can say, “Evil sharks!” but I don’t feel any need either to exterminate all predators from the world or to modify them to graze on kelp. Yes, there’s a monumental amount of animal suffering in the ordinary course of things, even apart from humans. Maybe there wouldn’t be in a system designed by far-future humans from scratch. But radically changing the one we live in when we hardly know how it all works—witness the quoted results of overfishing sharks—strikes me as quixotic folly.
It strikes me as folly, too. But “Let’s go kill the sharks, then!” does not necessarily follow from “Predation is not anywhere close to optimal.” Nowhere have I (or anyone else here, unless I’m mistaken) argued that we should play with massive ecosystems now.
I’m very curious why you don’t feel any need to exterminate or modify predators, assuming it’s likely to be something we can do in the future with some degree of caution and precision.
I’m very curious why you don’t feel any need to exterminate or modify predators, assuming it’s likely to be something we can do in the future with some degree of caution and precision.
That sort of intervention is too far in the future for me to consider it worth thinking about. People of the future can take care of it then. That applies even if I’m one of those people of the far future (not that I expect to be). Future-me can deal with it; present-me doesn’t care, or need to care, what future-me decides.
In contrast, smallpox, tuberculosis, cholera, and the like are worth exterminating now, because (a) unlike the beautiful big fierce animals, they’re no loss in themselves, (b) it doesn’t appear that their loss will disrupt any ecosystems we want to keep, and (c) we actually can do it here and now.
There’s something about this sort of philosophy that I’ve wondered about for a while.
Do you think that deriving utility from the suffering of others (or, less directly, from activities that necessarily involve the suffering of others) is a valid value? Or is it intrinsically invalid?
That is, if we were in a position to reshape all of reality according to our whim, and decided to satisfy the values of all morally relevant beings, would we also want to satisfy the values of beings that derive pleasure/utility from the suffering of others, assuming we could do so without actually inflicting disutility/pain on any other beings?
And more concretely: in a “we are now omnipotent gods” scenario where we could, if we wanted to, create for sharks an environment where they could eat fish to their hearts’ content (and these would of course be artificial fish without any actual capacity for suffering, unbeknownst to the sharks) — would we do so?
Or would we judge the sharks’ pleasure from eating fish to be an invalid value, and simply modify them to not be predators?
The shark question is perhaps a bit esoteric; but if we substitute “psychopaths” or “serial killers” for “sharks”, it might well become relevant at some future date.
I’m not sure what you mean by “valid” here—could you clarify? I will say that I think a world where beings are deriving utility from the perception of causing suffering, without actually causing suffering, isn’t inferior to a world where beings are deriving the same amount of utility from some other activity that doesn’t affect other beings, all else held equal. However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don’t do anything that actually causes suffering.
I’m not sure what you mean by “valid” here—could you clarify?
Sure. By “valid” I mean something like “worth preserving”, or “to be endorsed as a part of the complex set of values that make up human-values-in-general”.
In other words, in the scenario where we’re effectively omnipotent (for this purpose, at least), and have decided that we’re going to go ahead and satisfy the values of all morally relevant beings — are we going to exclude some values? Or exclude some beings on the basis of their values? For example: should we, in such a scenario, say: “we’ll satisfy the values of all the humans, except the psychopaths/sharks/whoever; we don’t find their values to be worth satisfying, so they’re going to be excluded from this”?
I would guess, for instance, that few people here would say: yeah, along with satisfying the values of all humans, let’s also satisfy the values of all the paperclip maximizers. We don’t find paperclip maximization to be a valid value, in that sense.
So my question to you is where you stand on all of that. Are there invalid values? Would you, in fact, try to satisfy Clippy’s values as well as those of humans? If not, how about sharks? Psychopaths? Etc.?
I will say that I think a world where beings are deriving utility from the perception of causing suffering, without actually causing suffering, isn’t inferior to a world where beings are deriving the same amount of utility from some other activity that doesn’t affect other beings, all else held equal.
Ok. Actually, I could take that as an answer to at least some of my above questions, but if you want to expand a bit on what I ask in this post, that would be cool.
However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don’t do anything that actually causes suffering.
Well, sure. But let’s keep this in the least convenient possible world, where such non-fundamental issues are somehow dealt with.