Interesting ideas. I wonder how this might integrate with more complex models of utility like the valence/arousal model.
Also I’ve used slightly different terminology than you do, but I wonder if you have any thoughts on how you might have to adjust your model if you account for suffering as identification with pain rather than pain itself (cf. what I’ve written about this here with links to additional, related work by FRI).
Thanks for sharing those links.
It may indeed be appropriate to use more than two axes or more nuance in general. I think the key idea shared by my simplistic model and the valence/arousal model is the notion that “good feelings” aren’t just “nega-bad feelings”. If you want to construct a preference ordering over subjective states—which is the point of using utility in the first place—then it pays dividends to really introspect on what your preferences are, and not treat wellbeing as a scalar quantity by default.
For example, you might actually prefer “talking to an old friend while suffering through a headache” over “missing out on talking to an old friend, but having no headache at all”. Or maybe not. It will depend on context.
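To make the “not a scalar by default” point concrete, here is a minimal sketch in Python (my own illustration, not part of either of our models; the names State, scalar_wellbeing, and prefer are just hypothetical): the two states from the headache example get ranked one way by a default “positive minus negative” scalar, and the other way by an ordering that, in this particular context, weights the positive axis more heavily.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class State:
    """A subjective state described on two axes rather than one scalar."""
    positive: float  # intensity of good feelings (connection, flow, ...)
    negative: float  # intensity of bad feelings (pain, distress, ...)


def scalar_wellbeing(s: State) -> float:
    """The default move: collapse both axes into a single number."""
    return s.positive - s.negative


def prefer(a: State, b: State, positive_weight: float) -> bool:
    """A context-dependent ordering: positive_weight says how much the good axis matters right now."""
    def score(s: State) -> float:
        return positive_weight * s.positive - s.negative
    return score(a) > score(b)


friend_with_headache = State(positive=0.75, negative=1.0)
alone_no_headache = State(positive=0.25, negative=0.0)

# The default scalar ranks "no headache" higher...
print(scalar_wellbeing(friend_with_headache))  # -0.25
print(scalar_wellbeing(alone_no_headache))     #  0.25

# ...but a context where the positive axis is weighted more heavily flips the ordering.
print(prefer(friend_with_headache, alone_no_headache, positive_weight=3.0))  # True
print(prefer(friend_with_headache, alone_no_headache, positive_weight=1.0))  # False
```

The particular weights don’t matter; the point is just that the preference ordering is the primitive thing, and any single scalar is a lossy summary of it.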
Regarding suffering as identification with pain, I basically agree with your blog post. I do think that the human brain by default identifies with pain. I can “turn off” the suffering component of pain with intense concentration and directed, sustained attention. As soon as my attention wavers, the suffering returns. It’s exhausting and unsustainable. Perhaps extremely advanced meditators can permanently turn off suffering. I view this as a promising direction for investigation, but not necessarily in the short term.
I have wondered if animals aren’t actually perpetually walking around in something more like a blissful flow-state punctuated by extremely brief, suffering-free episodes of negative valence. Perhaps a cow, lacking reflectivity, can literally never suffer as much as I suffer by restraining myself from eating chocolate. I have very low confidence in these thoughts, though.
Yeah, non-human animals remain a tricky subject. For example, I’m pretty sure thermostats are minimally conscious in a technical sense, yet they probably don’t suffer in any meaningful way, because they have no way to experience pain as pain, even if we allow “pain” to include things like negative-valence feedback (and what would “negative valence” even mean for a thermostat?). Yet somewhere along the way we get things conscious enough that we can suspect them of suffering the way we do, or at least of suffering the way we do but to a lesser degree.
I like your thought that maybe humans, or more conscious processes generally (in the IIT sense that consciousness can be quantified), are capable of more suffering, as it lines up with my expectation that things that experience themselves more have more opportunity to experience suffering. This also has interesting implications for the potential suffering of AIs and other future things that may be more conscious than anything that presently exists.