From Rethink Priorities’ welfare range report:

“We used Monte Carlo simulations to estimate, for various sentience models and across eighteen organisms, the distribution of plausible probabilities of sentience. We used a similar simulation procedure to estimate the distribution of welfare ranges for eleven of these eighteen organisms, taking into account uncertainty in model choice, the presence of proxies relevant to welfare capacity, and the organisms’ probabilities of sentience (equating this probability with the probability of moral patienthood).”
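A heavily simplified sketch of that kind of procedure (all model names, credences, and proxy distributions below are invented for illustration; they are not RP’s actual inputs):

```python
import random

random.seed(0)

# Invented inputs, purely for illustration -- not RP's numbers.
model_credence = {"theory_a": 0.6, "theory_b": 0.4}  # credence in each sentience model
p_sentience = {"theory_a": 0.3, "theory_b": 0.7}     # P(sentience | model), hypothetical

def draw_welfare_range() -> float:
    """One Monte Carlo draw: pick a model, then sample sentience and a proxy score."""
    model = random.choices(list(model_credence),
                           weights=list(model_credence.values()))[0]
    if random.random() > p_sentience[model]:
        return 0.0                       # organism not sentient in this draw
    return random.uniform(0.0, 1.0)      # welfare range given sentience (proxy-based)

samples = [draw_welfare_range() for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(mean)  # mean of the simulated welfare-range distribution
```

Repeating many such draws yields a distribution of welfare ranges that mixes uncertainty over models with uncertainty over sentience, which is the general shape of the procedure the quote describes.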
Now, with the disclaimer that I do think RP are doing good and important work and are one of the few organizations seriously thinking about animal welfare priorities...
Their epistemics led them to run a Monte Carlo simulation to determine whether organisms are capable of suffering (and if so, how much), arrive at a value of 5 shrimp = 1 human, and then not bat an eye at this number.
Neither a physicalist nor a functionalist theory of consciousness can reasonably justify a number like this. Shrimp have 5 orders of magnitude fewer neurons than humans, so whether suffering is the result of a physical process or an information-processing one, this implies that shrimp neurons do 4 orders of magnitude more of this process per second than human neurons. The authors get around this by refusing to stake themselves on any theory of consciousness.
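The arithmetic behind this claim can be made explicit, using the post’s own rounded figures:

```python
# The post's own figures (rounded orders of magnitude, not precise counts):
neuron_ratio = 1e5       # shrimp have ~5 orders of magnitude fewer neurons
welfare_ratio = 1 / 5    # "5 shrimp = 1 human" -> one shrimp counts as 1/5 human

# If total welfare ~ (number of neurons) x (per-neuron contribution), the
# implied relative contribution of one shrimp neuron vs. one human neuron is:
per_neuron_factor = neuron_ratio * welfare_ratio
print(per_neuron_factor)   # ~2e4, i.e. about 4 orders of magnitude
```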
The overall structure of the RP welfare range report does not cut to the truth. Instead, the core mental motion seems to be to engage with as many existing pieces of work as possible; credence is doled out to different schools of thought and pieces of evidence in a way that seems more like appeasement, lip service, or a “well, these guys have done some work, who are we to disrespect them by ignoring it” attitude. Removal of noise is one of the most important functions of meta-analysis, and it is largely absent here.
The result is an epistemology in which the accuracy of a piece of work is a monotonically increasing function of the number of sources, theories, and lines of argument. That is fine if your desired output is a very long Google Doc and a disclaimer to yourself (and, more cynically, your funders) that “no, no, we did everything right, we reviewed all the evidence and took it all into account,” but it’s pretty bad if you want to actually be correct.
I grow increasingly convinced that the epistemics of EA are not especially good, are worsening, and are already insufficient for the relatively low-stakes and easy issue of animal welfare (as compared to AI x-risk).
Epistemic status: disagreeing on the object-level topic, not on EA epistemics.
I disagree; functionalism, especially, can justify a number like this. Here’s an example of reasoning along these lines:
Suffering is the structure of some computation, and different levels of suffering correspond to different variants of that computation.
What matters is whether that computation is happening.
The structure of suffering is simple enough to be represented in the neurons of a shrimp.
Under that view, shrimp can absolutely suffer in the same range as humans, and the capacity for suffering depends on crossing some threshold number of neurons. One might argue that higher levels of suffering require computations of higher complexity, but intuitively I don’t buy this: more/purer suffering appears less complicated to me, on introspection (just as higher/purer pleasure appears less complicated as well).
I think I put a good deal of probability mass on a view like the one above.
(One might argue that it’s about the number of times the suffering computation is executed, not whether it’s present or not, but I find that view intuitively less plausible.)
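A toy version of this threshold view (the threshold value and the binary capacity are illustrative stand-ins, not a claim about real neuroscience):

```python
def suffering_capacity(neurons: int, threshold: int = 50_000) -> float:
    """Toy threshold model: the suffering computation either fits or it doesn't.

    Above the (made-up) threshold, capacity is flat: extra neurons buy other
    abilities, not more suffering.
    """
    return 1.0 if neurons >= threshold else 0.0

# Under this model a shrimp (~1e5 neurons) and a human (~8.6e10)
# land in the same welfare range:
print(suffering_capacity(100_000), suffering_capacity(86_000_000_000))  # 1.0 1.0
```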
You didn’t link the report, and I can’t identify it among all of Rethink Priorities’ moral weight research, so I can’t agree or disagree with the state of EA epistemics shown in it.
I have added a link to the report now.
As to your point: this is one of the better arguments I’ve heard that welfare ranges might be similar between animals. Still, I don’t think it squares well with the actual nature of the brain. Saying there’s a single suffering computation would make sense if the brain were like a CPU, where one core does the thinking, but in fact all of the neurons in the brain are firing at once and doing computations at the same time. So it makes much more sense to me to think that the more neurons are computing some sort of suffering, the greater the intensity of suffering.
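This intuition corresponds to a different toy scaling rule (the normalization to a human neuron count is my assumption, purely for illustration):

```python
def suffering_intensity(neurons: float, human_neurons: float = 8.6e10) -> float:
    """Toy parallel model: intensity scales with how many neurons are engaged
    in the suffering computation, normalized so a human scores 1.0."""
    return neurons / human_neurons

print(suffering_intensity(8.6e10))  # human: 1.0
print(suffering_intensity(1e5))     # shrimp: ~1.2e-6, vastly below a human
```

Under this rule, rather than the threshold rule, the 5-shrimp-to-1-human figure comes out wildly overstated.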
Can you elaborate how

“all of the neurons in the brain are firing at once and doing computations at the same time”

leads to

“the more neurons are computing some sort of suffering, the greater the intensity of suffering”

?
One intuition against this comes from an analogy to LLMs: the residual stream represents many features, and all neurons participate in the representation of each feature. But the difference between a larger and a smaller model is mostly that the larger model can represent more features, not that it represents features with greater magnitude.
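The analogy can be made concrete with random near-orthogonal “feature directions” (a toy superposition sketch; the dimensions and feature counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_directions(d_model: int, n_features: int) -> np.ndarray:
    """Random unit vectors standing in for feature directions in a residual stream."""
    v = rng.normal(size=(n_features, d_model))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def mean_interference(dirs: np.ndarray) -> float:
    """Mean |cosine similarity| between distinct feature directions."""
    g = np.abs(dirs @ dirs.T)
    np.fill_diagonal(g, 0.0)
    n = dirs.shape[0]
    return g.sum() / (n * (n - 1))

small = feature_directions(d_model=64, n_features=64)
large = feature_directions(d_model=1024, n_features=1024)

# Every feature is written with the same magnitude (unit norm) in both models;
# the larger model just has room for more directions with less interference.
print(mean_interference(small) > mean_interference(large))  # True
```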
In humans, it seems to be the case that consciousness is most strongly connected to processes in the brain stem rather than the neocortex. There is a great talk about the topic; the main points are (writing from memory, might not be entirely accurate):
Humans can lose consciousness, or have intense emotions (good and bad) induced, through interventions on a very small area of the brain stem. When other, much larger parts of the brain are damaged or missing, humans continue to behave in ways that would lead one to ascribe emotions to them in interaction; for example, they show affection.
Dopamine, serotonin, and other chemicals that alter consciousness act in the brain stem.
If we consider the question from an evolutionary angle, I’d also argue that emotions are more important when an organism has fewer alternatives (like a large brain that does fancy computations). Once better reasoning skills become available, it makes sense to reduce the impact emotions have on behavior and instead trust the abstract reasoning. In my own experience, the intensity with which I feel an emotion is strongly correlated with how action-guiding it is, and I think I felt emotions more intensely as a child than I do now, which also fits the hypothesis that a greater ability to think abstractly reduces the intensity of emotions.
I agree with you that the “structure of suffering” is likely to be represented in the neurons of shrimp. I think it’s clear that shrimp may “suffer” in the sense that they react to pain, move away from sources of pain, would prefer to be in a painless state rather than a painful one, etc.
But where I diverge from the conclusions drawn by Rethink Priorities is that I believe shrimp are less “conscious” (for lack of a better word) than humans, and so their suffering matters less. Though shrimp show outward signs of pain, I sincerely doubt that with just 100,000 neurons there’s much of a subjective experience going on. This is purely intuitive, and I’m not sure of the specific neuroscience of shrimp brains or of Rethink Priorities’ arguments against this. But it seems to me that the “level of consciousness” animals have sits on an axis roughly correlated with neuron count, with humans and elephants at the top and C. elegans at the bottom.
Another analogy I’ll throw out is that humans can react to pain unconsciously. If you put your hand on a hot stove, you will reflexively pull it away before the feeling of pain enters your conscious perception. I’d guess shrimp pain responses work in a similar way: largely unconscious processing, due to their very low neuron count.
Can you link to where RP says that?
Good point, edited a link to the Google Doc into the post.
Your disagreement, from what I understand, seems mostly to stem from the fact that shrimp have fewer neurons than humans.
Did you check RP’s piece on that topic, “Why Neuron Counts Shouldn’t Be Used as Proxies for Moral Weight?”
https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/why-neuron-counts-shouldn-t-be-used-as-proxies-for-moral
They say this:
“In regards to intelligence, we can question both the extent to which more neurons are correlated with intelligence and whether more intelligence in fact predicts greater moral weight;
Many ways of arguing that more neurons results in more valenced consciousness seem incompatible with our current understanding of how the brain is likely to work; and
There is no straightforward empirical evidence or compelling conceptual arguments indicating that relative differences in neuron counts within or between species reliably predicts welfare relevant functional capacities.
Overall, we suggest that neuron counts should not be used as a sole proxy for moral weight, but cannot be dismissed entirely”
This hardly seems an argument against the one in the shortform, namely:

“Neither a physicalist nor a functionalist theory of consciousness can reasonably justify a number like this. Shrimp have 5 orders of magnitude fewer neurons than humans, so whether suffering is the result of a physical process or an information-processing one, this implies that shrimp neurons do 4 orders of magnitude more of this process per second than human neurons. The authors get around this by refusing to stake themselves on any theory of consciousness.”
If the original authors never thought of this, that seems on them.