Some people in the EA community have already written a bit about this.
I think this is the kind of thing Mike Johnson (/user/johnsonmx) and Andres Gomez Emilsson (/user/algekalipso) of the Qualia Research Institute are interested in, though they probably take a different approach. See:
Effective Altruism, and building a better QALY
Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk
The Foundational Research Institute also takes an interest in the issue, but they tend to advocate an eliminativist, subjectivist view on which there is no way to objectively determine which beings are conscious, because consciousness itself is an essentially contested concept. (I don’t know whether everyone at FRI agrees with that, but at least a few, including Brian Tomasik, do.) FRI has also done some work on measuring happiness and suffering.
Animal Charity Evaluators announced in 2016 that they were starting a deep investigation of animal sentience. I don’t know if they have done anything since then.
Luke Muehlhauser (/u/lukeprog) wrote an extensive report on consciousness for the Open Philanthropy Project. He has also indicated an interest in further exploring the area of sentience and moral weight. Since phenomenal consciousness is necessary to experience either happiness or suffering, this may fall under the same umbrella as the above research. Lukeprog’s LW posts on affective neuroscience are relevant as well (as well as a couple by Yvain).
This is great info, but it’s about a different angle from what I’d like to see.
(I now realise it is totally impossible to infer my angle from my post, so here goes)
I want to describe the causes of happiness with the intentional stance. That is, I want to explain them in terms of beliefs, feelings and intentions.
For example, it seems very relevant that (allegedly) suffering is a result of attachment to outcomes, but I haven’t heard any rationalists talk about this.
How is physical torture (or chronic back pain) the result of attachment to outcomes?
This is a rather extreme case, but there are people who don’t suffer from physical damage because they don’t identify with their physical body.
Granted, it would take a good 20 years of meditation/brainwashing to get to that state and it’s probably not worth it for now.
Luckily, many forms of suffering are based on shallower beliefs.
That seems a bit like arguing for wireheading as a solution to your problems.
A novice, upon observing a brain scan, said: “Two neural pathways bring signals to the limbic system. One of them is right, and the other one is wrong.”
An old master listened to his words, and said: “What is the true nature of the neural pathway that brought you to this conclusion? Monkey, riding an elephant! Which one of them has more Buddha nature?”
The novice was not enlightened, but peer pressure made him pretend he was. The master then collected tuition fees from all bystanders.
Later in the evening, the master took a pain pill, opened Less Wrong on his smartphone, and wrote:
Chronic back pain tortures my body
I put an electrode in my brain
Sakura petals flowing in the breeze
Just to clarify: you don’t mean that they don’t get physical damage, you mean they don’t mind getting physical damage?
Do they, then, not bother doing anything to fix any physical damage they incur? That doesn’t seem like it’s obviously a good tradeoff.
It seems like what you actually want is, roughly, (1) not to feel pain, (2) to be aware of damage, (3) to prefer not to get damaged, and (4) for that preference not to lead to distress when damage occurs. It sounds as if the people you’re talking about have managed 2 and 4 but not 1, and on the face of it their way of dealing with 4 seems like it would (if it actually works) break 3.