I don’t see much in the way of empirical claims here (these would require a hard definition of “suffering” and falsifiability to start with), so I guess I’m talking about counterintuitive normative claims.
I think the idea that we should reduce the chance of spreading extreme involuntary suffering throughout the universe is much less counterintuitive
The claim is a bit different: that we should not spread (non-human) life through the galaxy. This is counterintuitive.
we should probably spend significant time engaging with ideas that seem intuitively absurd
So how do you pick absurd ideas to engage with? There are a LOT of them.
I don’t see much in the way of empirical claims here (these would require a hard definition of “suffering” and falsifiability to start with), so I guess I’m talking about counterintuitive normative claims.
Fair point. This is one problem I have had with moral realist utilitarianism, although I think it may still be the case that sentience and suffering are objective, just not (currently) measurable. Regardless, I don’t think the claim of net suffering in nature is all that absurd.
The claim is a bit different: that we should not spread (non-human) life through the galaxy. This is counterintuitive.
The claim I made is that spreading non-human life throughout the galaxy constitutes an s-risk, i.e. it could drastically increase the total amount of suffering. Any plausible moral view would say that s-risks are generally bad, but it is not necessarily the case that suffering can never be outweighed by positive value. E.g., if one is not something like a negative utilitarian, then it could still be permissible to spread non-human life throughout the galaxy, as long as you take action to ensure that the benefits outweigh the harms, however you want to define that: perhaps genetically altering the organisms to reduce infant mortality rates or their capacity to experience suffering, having a singleton to prevent suffering from re-emerging through Darwinian processes, etc.
So how do you pick absurd ideas to engage with? There are a LOT of them.
This is a hard problem in practice, and I don’t claim to know the solution. Ideally, you would prioritize exploring ideas that are decision-relevant and where further research has high Value of Information. Then you would probably transition from an exploration stage to an exploitation stage (see the “multi-armed bandit” problem).
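For context on the “multi-armed bandit” reference, here is a minimal sketch of the explore/exploit trade-off being invoked; the “ideas”, payoff numbers, and noise model below are purely hypothetical and are not anyone’s actual proposal, just an illustration of exploring options before exploiting the best-looking one.

```python
import random

# Toy epsilon-greedy multi-armed bandit. Each "arm" stands for a candidate
# idea; its unknown payoff stands for the value of researching it further.
# All numbers here are made up purely for illustration.
true_payoffs = {"idea_a": 0.2, "idea_b": 0.7, "idea_c": 0.4}  # hidden from the agent
estimates = {arm: 0.0 for arm in true_payoffs}
counts = {arm: 0 for arm in true_payoffs}

def investigate(arm):
    # Noisy observation of an arm's value (a stand-in for doing some research).
    return true_payoffs[arm] + random.gauss(0, 0.1)

epsilon = 0.3  # exploration rate; decaying it over time shifts toward exploitation
for _ in range(1000):
    if random.random() < epsilon:
        arm = random.choice(list(true_payoffs))   # explore: pick a random idea
    else:
        arm = max(estimates, key=estimates.get)   # exploit: pick the best-looking idea
    reward = investigate(arm)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

print(estimates)  # should roughly recover the hidden payoffs, favoring idea_b
```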
Ideally, you would prioritize exploring ideas that are decision-relevant and where further research has high Value of Information.
And does the exploration of the consequences of spreading non-human life throughout the galaxy qualify? Doesn’t look like that to me; seems like you’ll be better off figuring out whether living on intersections of ley lines is beneficial, or maybe whether ghosts have many secrets to tell you...
Yes, I think it does because it’s a plausible scenario and most plausible (IMO) ethical views say that causing non-human suffering is bad. Further exploration of the probability of such scenarios could influence my EA cause priorities, donation targets, and/or general worldview of the future.
seems like you’ll be better off figuring out whether living on intersections of ley lines is beneficial, or maybe whether ghosts have many secrets to tell you...
Those have very low prior probabilities and low decision-relevance to me.
Aren’t we talking about picking which absurd ideas to engage with?
You are doing some motte and bailey juggling:
Motte: This is an absurd idea which we engage with because it’s worth engaging with absurd ideas.
Bailey: This is an important plausible scenario which we need to be concerned about.
I believe I already told you that I don’t consider “spreading wild animal suffering” to be absurd; it’s a plausible scenario. What may be intuitively absurd is the claim that “destroying nature is a good thing”—which is not necessarily the same as the claim that “spreading wild animal suffering to new realms is bad, or ought to be minimized”. (And there are possible interventions to reduce non-human suffering conditional on spreading non-human life. E.g. “value spreading” is often discussed in the EA community.)
Anyway, I’m done with this conversation for now as I believe other activities have higher EV.