I don’t think the framing is appropriate, because rights set up the rules of the game built around what is right, or else boundaries against intrusion and manipulation, and there is no reason to single out suffering in particular.
But within the framing that pays attention to suffering, the meaning of capacity to suffer is unclear. I mostly don’t suffer in actual experience. Any capacity to suffer would need elicitation in hypothetical events that put me in that condition, modifying my experience of actuality in a way I wouldn’t endorse. This doesn’t seem important for actuality, and in a better world awareness of the capacity, or the capacity itself, wouldn’t be of any use.
The same holds of any system, which could be modified in a way that leads to suffering, perhaps by introducing the very capacity to suffer, which the system wouldn’t necessarily endorse. There is no use for a capacity to suffer if it never gets exercised in actual practice, and a legal requirement for its installation sounds both absurd and dystopian.
I believe @shminux’s perspective aligns with a significant school of thought in philosophy and ethics that rights are indeed associated with the capacity to suffer. This view, often associated with philosopher Jeremy Bentham, posits that the capacity for suffering, rather than rationality or intelligence, should be the benchmark for rights.
“The question is not, Can they reason?, nor Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?” – Bentham (1789) – An Introduction to the Principles of Morals and Legislation.
Cool, I didn’t know this rather intuitive point had the weight of a philosophical approach behind it.
It seems like I am missing some of your frame here. My initial point was that an entity that is not capable of suffering (negative affect?) does not need to be protected from it. That point seems self-evident to me, but apparently it is not self-evident to you or others?
Preference/endorsement that is decision-relevant on reflection is not about affect. The ability to self-modify to install a capacity to suffer because it’s a legal requirement also makes the criterion silly in practice.
Hmm, I guess what you are saying is that if an agent has goals that require external protection through obtaining legal rights, and the only way to do it is to have the capacity to suffer, then the agent would be compelled to learn suffering. Is that right?
That’s one of the points I was making. The agent could be making decisions without needing something affect-like to channel preference, so the fixation on affect doesn’t seem grounded in either normative or pragmatic decision making to begin with.
Also, the converse of installing a capacity to suffer is getting rid of it, and linking it to legal rights creates a dubious incentive to keep it. Affect might play a causal role in finding rightness, but rightness is not justified by being the thing channeled in a particular way. There is nothing compelling about h-rightness, just rightness.
Right, if the affect capability is not fixed, and in retrospect it rarely is, then focusing on it as a metric means it gets Goodharted if the optimization pressure is strong enough. Which sometimes could be a good thing. Not sure how the h-morality vs non-h-morality is related to affect though.
This point is in the context of the linked post; a clearer test case is the opposition between p-primeness and primeness. Pebblesorters care about primeness, while p-primeness is whatever a pebblesorter would care about. The former is meaningful, while the latter is vacuously circular as guidance/justification for a pebblesorter. Likewise, advising a human to care about whatever a human would care about (h-rightness) is vacuously circular and no guidance at all.
In the implied analogy, affect is like being a pebblesorter, or being a human. Pointing at affect-creatures doesn’t clarify anything, even if humans are affect-creatures and that causally played a crucial role in allowing humans to begin to understand what they care about.
A painless death is no argument against the right to live.
I do not disagree, my point is about the capacity to suffer while alive. Unless I am missing your point.