1) The question is whether they can experience the subjective realisation of, “Because of this situation, I am experiencing negative emotions. I dislike this situation, but there is no escape,” and thus increase their suffering by adding negative internal stimuli (the appreciation and awareness that the external stimuli exist) to the already existing negative external stimuli. This is a stricter condition some may have for caring about other creatures to an inconvenient degree. For a fictional example, Methods!Harry refused to eat anything when he considered the possibility that all other life is sentient. To be charitable, assume he is aware that pinching a rabbit’s leg will trigger afferent nociceptive (pain) neurons, which carry a signal to the brain, leading to the experience of pain. Your cited research demonstrates this. It does not demonstrate, however, whether the subject has the awareness to reflect upon the factors that contribute to its suffering, such that this reflection can contribute to that suffering by adding further negative stimuli, stimuli generated only by the organism’s own reflection. Causing misery to a probably non-sentient creature did not give Methods!Harry pause, but causing misery to a probably sentient creature did; hopefully this helps elucidate the mindset of one subscribing to this stricter condition of care.
2) If a human considers that they themselves satisfy the above condition, then they will be more inclined to attribute greater worth to fellow humans than to other creatures of dubious status. That said, they will still recognise that misery is an unpleasant experience regardless of one’s capacity for self-reflection, and that it should be prevented and stopped where possible. One must thus argue to this person that it behoves them, morally, to exert effort towards mitigating or decreasing that misery, and that the exertion will not detract from their endeavours to reduce the misery of humans.
This person cares more about optimising the good they can achieve while living, which leads them to take pains to live longer; the longer they live, the more good they can achieve. One must convince this person either that non-human animals have the capacity for self-reflection to the degree specified above, or that caring about the misery of non-human animals and acting upon that care does not adversely affect their net ability to introduce good to the world; that is, in the latter case, acting upon that care must not adversely affect this person’s lifespan, quality of life, or capacity to help humans, or must only do so by a small enough margin to justify the sacrifice.
These are things I think a rational agent earning a comfortable salary should think about, assuming they desire to optimise the quantity of good they effect in the world. To someone whose objective is convincing the masses to do the most good they possibly can, this doesn’t matter, as arguing for both vegetarianism and giving substantial sums to the AMF presents a potential conflict only to the party seeking optimal quality of life and the greatest possible lifespan.
To be charitable, assume he is aware that pinching a rabbit’s leg will trigger afferent nociceptive (pain) neurons, which carry a signal to the brain, leading to the experience of pain. Your cited research demonstrates this. It does not demonstrate, however, whether the subject has the awareness to reflect upon the factors that contribute to its suffering, such that this reflection can contribute to that suffering by adding further negative stimuli, stimuli generated only by the organism’s own reflection.
To be fair, you can’t demonstrate this for any human either. That’s the problem with consciousness.
To be fair, you can’t demonstrate this for any human either. That’s the problem with consciousness.
Naturally; we’re working from the same fabric.