No, it doesn’t necessarily imply that. Suppose wild animals have net-positive aggregate welfare, but a subset of these lives contain extreme involuntary suffering. Spreading this throughout the universe would still be considered an s-risk according to FRI’s definition: “Finally, some futures may contain both vast amounts of happiness and vast amounts of suffering, which constitutes an s-risk but not necessarily a (severe) x-risk. For instance, an event leading to a future containing 10^35 happy individuals and 10^25 unhappy ones, would constitute an s-risk, but not an “x-risk”.”
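FRI’s definitional point is easy to check with back-of-the-envelope arithmetic (the numbers below are just the ones from the quoted example; everything else is illustrative):

```python
# Numbers from FRI's example: 10^35 happy and 10^25 unhappy individuals.
happy = 10**35
unhappy = 10**25

# Aggregate welfare can be overwhelmingly net-positive...
net_positive = happy > unhappy   # True, by ten orders of magnitude

# ...while the absolute amount of suffering remains astronomical,
# which is what the s-risk definition tracks.
print(net_positive)  # True
```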
It may actually be the case that wild animals have net-negative welfare. The economist Yew-Kwang Ng has argued for this position. Brian Tomasik takes a similar view, and even endorses your attempted reductio (Edit: Ng has explicitly rejected it at this point). Michael Plant has written several counter-arguments to the Ng/Tomasik view. There doesn’t seem to be any way to resolve this at present. There may also be other ways to reduce wild animal suffering besides destroying nature (e.g., see Pearce’s abolitionist project).
If the suffering of hypothetical entities is morally relevant, then Brian Tomasik’s electron thought experiment was a crime of unimaginable proportions. In fact, it may well be that Tomasiks spontaneously forming in empty space outweigh every “conventional” source of suffering in the Universe. I call this the Boltzmann Brian problem.
Are you referring to empirical or normative claims? I don’t consider the idea that wild animals experience net suffering absurd, although the idea that habitat destruction is morally beneficial is counterintuitive to most people. I think the idea that we should reduce the chance of spreading extreme involuntary suffering, including wild-animal suffering, throughout the universe is much less counterintuitive, and is consistent with a wide range of moral views.
Since I give significant (but not 100%) weight to “the overwhelming importance of the far future” (Nick Beckstead), and the future is always absurd, we should probably spend significant time engaging with ideas that seem intuitively absurd. I don’t think opposition to spreading wild-animal suffering is one of these, although things like suffering subroutines and some of the ideas mentioned in the OP (e.g., quantum immortality, multiverses) might be. Some people consider the intelligence explosion absurd, but I still think it has some non-negligible plausibility.
I don’t see much in the way of empirical claims here (these would require a hard definition of “suffering” and falsifiability to start with), so I guess I’m talking about counterintuitive normative claims.
I think the idea that we should reduce the chance of spreading extreme involuntary suffering throughout the universe is much less counterintuitive
The claim is a bit different: that we should not spread (non-human) life through the galaxy. This is counterintuitive.
we should probably spend significant time engaging with ideas that seem intuitively absurd
So how do you pick absurd ideas to engage with? There are a LOT of them.
I don’t see much in the way of empirical claims here (these would require a hard definition of “suffering” and falsifiability to start with), so I guess I’m talking about counterintuitive normative claims.
Fair point. This is one problem I have had with moral realist utilitarianism, though I think it may still be the case that sentience and suffering are objective, just not (currently) measurable. Regardless, I don’t think the claim of net suffering in nature is all that absurd.
The claim is a bit different: that we should not spread (non-human) life through the galaxy. This is counterintuitive.
The claim I made is that spreading non-human life throughout the galaxy constitutes an s-risk, i.e. it could drastically increase the total amount of suffering. Any plausible moral view would say that s-risks are generally bad things, but it is not necessarily the case that suffering can never be outweighed by positive value. E.g., if one is not something like a negative utilitarian, then it could still be permissible to spread non-human life throughout the galaxy, as long as you take action to ensure that the benefits outweigh the harms, however you want to define that. Perhaps genetically altering them to reduce infant mortality rates or their capacity to experience suffering, or creating a singleton to prevent suffering from re-emerging through Darwinian processes, etc.
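The dependence on one’s moral view can be sketched as a single weighting parameter (a toy model; the function name, numbers, and weights are all illustrative assumptions, not anything from the original discussion):

```python
def net_value(happiness: float, suffering: float, suffering_weight: float) -> float:
    """Aggregate welfare with an adjustable moral weight on suffering.

    suffering_weight = 1 is roughly classical utilitarianism;
    larger values approach suffering-focused / negative-utilitarian views.
    """
    return happiness - suffering_weight * suffering

# Hypothetical payoff of spreading life with mitigations in place:
happiness, suffering = 100.0, 10.0

print(net_value(happiness, suffering, 1.0))   # 90.0: permissible for a classical utilitarian
print(net_value(happiness, suffering, 50.0))  # -400.0: impermissible on strongly suffering-focused views
```

The same scenario flips sign depending only on the weight, which is the sense in which the s-risk claim is view-independent while the all-things-considered verdict is not.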
So how do you pick absurd ideas to engage with? There are a LOT of them.
This is a hard problem in practice, and I don’t claim to know the solution. Ideally, you would prioritize exploring ideas that are decision-relevant and where further research has high Value of Information. Then you would probably transition from an exploration stage to an exploitation stage (see the “multi-armed bandit” problem).
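The explore/exploit transition can be illustrated with a minimal epsilon-greedy bandit, where the “arms” stand in for candidate ideas and a decaying epsilon shifts effort from exploration to exploitation (a toy sketch with invented payoffs, not a real prioritization model):

```python
import random

def epsilon_greedy(true_payoffs, steps=10_000, seed=0):
    """Toy multi-armed bandit: explore early, exploit as epsilon decays."""
    rng = random.Random(seed)
    n = len(true_payoffs)
    counts = [0] * n
    estimates = [0.0] * n
    for t in range(1, steps + 1):
        epsilon = 1.0 / t ** 0.5  # decaying exploration rate
        if rng.random() < epsilon:
            arm = rng.randrange(n)                 # explore: random arm
        else:
            arm = estimates.index(max(estimates))  # exploit: best estimate so far
        reward = true_payoffs[arm] + rng.gauss(0, 0.1)  # noisy observed payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates, counts

estimates, counts = epsilon_greedy([0.1, 0.5, 0.9])
# After enough steps, most pulls concentrate on the highest-payoff arm.
```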
Ideally, you would prioritize exploring ideas that are decision-relevant and where further research has high Value of Information.
And does the exploration of the consequences of spreading non-human life throughout the galaxy qualify? Doesn’t look like that to me, seems like you’ll be better off figuring out whether living on intersections of ley lines is beneficial, or maybe whether ghosts have many secrets to tell you...
Yes, I think it does because it’s a plausible scenario and most plausible (IMO) ethical views say that causing non-human suffering is bad. Further exploration of the probability of such scenarios could influence my EA cause priorities, donation targets, and/or general worldview of the future.
seems like you’ll be better off figuring out whether living on intersections of ley lines is beneficial, or maybe whether ghosts have many secrets to tell you...
Those have very low prior probabilities and low decision-relevance to me.
I believe I already told you that I don’t consider “spreading wild animal suffering” to be absurd; it’s a plausible scenario. What may be intuitively absurd is the claim that “destroying nature is a good thing”—which is not necessarily the same as the claim that “spreading wild animal suffering to new realms is bad, or ought to be minimized”. (And there are possible interventions to reduce non-human suffering conditional on spreading non-human life. E.g. “value spreading” is often discussed in the EA community.)
Anyway, I’m done with this conversation for now as I believe other activities have higher EV.
You have two choices: ad absurdum and “Brian Tomasik takes a similar view, and even endorses”.
Pick one :-)
Someone once proposed a possible s-risk:
Well then, how many resources (e.g., time and mental energy) do you feel we should spend entertaining absurd (note: no quotes) notions?
Aren’t we talking about picking which absurd ideas to engage with?
You are doing some motte and bailey juggling:
Motte: This is an absurd idea which we engage with because it’s worth engaging with absurd ideas.
Bailey: This is an important plausible scenario which we need to be concerned about.