Their epistemics led them to run a Monte Carlo simulation to determine whether organisms are capable of suffering (and if so, how much), arrive at a value of 5 shrimp = 1 human, and then not bat an eye at this number.
Neither a physicalist nor a functionalist theory of consciousness can reasonably justify a number like this. Shrimp have 5 orders of magnitude fewer neurons than humans, so whether suffering is the result of a physical process or an information-processing one, this implies that shrimp neurons do 4 orders of magnitude more of this processing per second than human neurons.
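To make that arithmetic explicit, here is a minimal back-of-the-envelope sketch (the neuron-count ratio is the approximate figure from above, and the 5:1 weighting is the number being criticized):

```python
# Back-of-the-envelope check of the per-neuron implication (illustrative only;
# the neuron-count ratio is an order-of-magnitude approximation, not an exact figure).
shrimp_per_human = 5    # the contested moral-weight claim: 5 shrimp = 1 human
neuron_ratio = 1e-5     # shrimp have ~5 orders of magnitude fewer neurons than humans

# Suffering capacity per shrimp implied by the weighting: 1/5 of a human's.
implied_shrimp_capacity = 1 / shrimp_per_human

# Factor by which each shrimp neuron would have to outperform a human neuron,
# if suffering scaled with the amount of neural processing:
per_neuron_multiplier = implied_shrimp_capacity / neuron_ratio
print(f"{per_neuron_multiplier:.0e}")  # 2e+04, i.e. roughly 4 orders of magnitude
```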
epistemic status: Disagreeing on the object-level topic, not on the topic of EA epistemics.
I disagree; functionalism in particular can justify a number like this. Here's an example of how the reasoning might go:
Suffering is the structure of some computation, and different levels of suffering correspond to different variants of that computation.
What matters is whether that computation is happening.
The structure of suffering is simple enough to be represented in the neurons of a shrimp.
Under that view, shrimp can absolutely suffer in the same range as humans, and the amount of suffering depends on crossing some threshold in the number of neurons. One might argue that higher levels of suffering require computations of higher complexity, but intuitively I don't buy this: more/purer suffering appears less complicated to me, on introspection (just as higher/purer pleasure appears less complicated as well).
I think I put a bunch of probability mass on a view like the one above.
(One might argue that it’s about the number of times the suffering computation is executed, not whether it’s present or not, but I find that view intuitively less plausible.)
You didn't link the report, and I'm not able to pick it out from all of the Rethink Priorities moral weight research, so I can't agree or disagree on the state of EA epistemics shown there.
I have added a link to the report now.
As to your point: this is one of the better arguments I've heard that welfare ranges might be similar between animals. Still, I don't think it squares well with the actual nature of the brain. Saying there's a single suffering computation would make sense if the brain were like a CPU, where one core does the thinking, but in fact all of the neurons in the brain are firing at once and doing computations at the same time. So it makes much more sense to me to think that the more neurons are computing some sort of suffering, the greater the intensity of the suffering.
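To make the two pictures concrete, here is a purely illustrative toy sketch of the competing views (the threshold value and the linear scaling rule are invented assumptions, not figures from either comment or from Rethink Priorities):

```python
# Two toy models of how suffering intensity might relate to neuron count.
# Purely illustrative: the threshold and the scaling rule are made up for this sketch.

def threshold_model(neurons: float, threshold: float = 1e4) -> float:
    """Parent comment's view: once enough neurons exist to run the suffering
    computation at all, intensity is in the same range regardless of scale."""
    return 1.0 if neurons >= threshold else 0.0

def scaling_model(neurons: float, reference: float = 1e11) -> float:
    """This comment's view: the more neurons participate in the suffering
    computation, the more intense the suffering."""
    return neurons / reference

for label, n in [("shrimp", 1e5), ("human", 1e11)]:
    print(f"{label}: threshold={threshold_model(n):.1f}, scaling={scaling_model(n):.0e}")
```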
One intuition against this comes from an analogy to LLMs: the residual stream represents many features, and all neurons participate in the representation of a feature. But the difference between a larger and a smaller model is mostly that the larger model can represent more features, not that it represents features with greater magnitude.
In humans, it seems that consciousness is most strongly connected to processes in the brain stem rather than the neocortex. Here is a great talk about the topic; the main points are (writing from memory, might not be entirely accurate):
humans can lose consciousness or produce intense emotions (good and bad) through interventions on a very small area of the brain stem. When other, much larger parts of the brain are damaged or missing, humans continue to behave in a way that would lead one to ascribe emotions to them from interactions; for example, they show affection.
dopamine, serotonin, and other chemicals that alter consciousness work in the brain stem
If we consider the question from an evolutionary angle, I'd also argue that emotions are more important when an organism has fewer alternatives (like a large brain that does fancy computations). Once better reasoning skills become available, it makes sense to reduce the impact that emotions have on behavior and instead trust the abstract reasoning. In my own experience, the intensity with which I feel emotions is strongly correlated with how action-guiding they are, and I think as a child I felt emotions more intensely than I do now, which also fits the hypothesis that a greater ability to think abstractly reduces the intensity of emotions.
Can you elaborate how
leads to
?
I agree with you that the "structure of suffering" is likely to be represented in the neurons of shrimp. I think it's clear that shrimp may "suffer" in the sense that they react to pain, move away from sources of pain, would prefer to be in a painless state rather than a painful one, etc.
But where I diverge from the conclusions drawn by Rethink Priorities is that I believe shrimp are less "conscious" (for lack of a better word) than humans, and so their suffering matters less. Though shrimp show outward signs of pain, I sincerely doubt that with just 100,000 neurons there's much of a subjective experience going on there. This is purely intuitive, and I'm not sure of the specific neuroscience of shrimp brains or of Rethink Priorities' arguments against this. But it seems to me that the "level of consciousness" animals have sits on an axis that's roughly correlated with neuron count, with humans and elephants at the top and C. elegans at the bottom.
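For reference, a rough sketch of the neuron counts behind that axis (approximate, commonly cited figures; the shrimp number is the ~100,000 mentioned above):

```python
# Rough neuron counts for the animals mentioned in this thread
# (rounded figures; published estimates vary by source).
neuron_counts = {
    "C. elegans": 302,            # fully mapped nervous system
    "shrimp": 1e5,                # the ~100,000 figure used above
    "human": 8.6e10,              # ~86 billion
    "African elephant": 2.6e11,   # ~257 billion, most of them in the cerebellum
}

for species, n in neuron_counts.items():
    print(f"{species:>16}: {n:.3g} neurons ({n / neuron_counts['human']:.1e} of a human)")
```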
Another analogy I'll throw out is that humans can react to pain unconsciously. If you put your hand on a hot stove, you will reflexively pull your hand away before the feeling of pain enters your conscious perception. I'd guess shrimp pain response works in a similar way: largely unconscious processing, due to their very low neuron count.