Well I pre-theoretically care about happiness and suffering too. I hate suffering, and I hate inflicting suffering or knowing others are suffering. I like being happy, and like making others happy or knowing they’re happy. So it’s not really a process of teasing out, it’s a process of boiling down, by asking myself which things seem to matter intrinsically and which instrumentally. One way of doing this is to consider hypothetical situations, and selectively vary them and observe the difference each variation makes to my assessment of the situation. (edit: so that’s one place the ‘teasing out’ happens—I’ll work out what value set X implies about hypothetical scenarios a, b, and c, and see if I’m happy to endorse those implications. It’s probably roughly what Rawls meant by ‘reflective equilibrium’—induce principles, deduce their implications, repeat until you’re more or less satisfied.)
Basically, conscious states are the only things I have direct access to, and I ‘know’ (in a way that I couldn’t argue someone else into accepting, if they didn’t perceive it directly, but that is more obvious to me than just about anything else) that some of them are good and some of them are bad. Via emotional empathy and intellectual awareness of apparently relevant similarities, I deduce that other people and animals have a similar capacity for conscious experience, and that it’s good when they have pleasant experiences and bad when they have unpleasant ones. (edit: and these convictions are the ones I remain sure of, at the end of the boiling-down/reflective equilibrium process)
I think I’ll bow out of the discussion now—I think we’ve both done our best, but to be blunt, I feel like I’m having to repeatedly assure you that I do mean the things I’ve said and I have thought about them, and like you are still trying to cure me of ‘mistakes’ that are only mistakes according to premises that seem almost too obvious for you to state, but that I really truly don’t share.
>Well I pre-theoretically care about happiness and suffering too.
That you think this, and that it might be the case, for the record, wasn’t previously obvious to me, and makes a notch more sense out of the discussion.
For example, it makes me curious as to whether, when observing, say, a pre-civilization group of humans, I’d end up wanting to describe them as caring about happiness and suffering, beyond caring about various non-emotional things.
Ok, actually I can see a non-Goodharting reason to care about emotional states as such, though it’s still instrumental, so it isn’t what tslarm was talking about: emotional states are blunt-force brain events, and so in a context (e.g. modern life) where the locality of emotions doesn’t fit the locality of the demands of life, emotions are disruptive, especially suffering, or maybe more subtly any lack of happiness.
>I think I’ll bow out of the discussion now
Ok, thanks for engaging. Be well. Or I guess, be happy and unsufferful.
>I think we’ve both done our best, but to be blunt, I feel like I’m having to repeatedly assure you that I do mean the things I’ve said and I have thought about them, and like you are still trying to cure me of ‘mistakes’ that are only mistakes according to premises that seem almost too obvious for you to state, but that I really truly don’t share.
I don’t want to poke you more and risk making you engage when you don’t want to, but just as a signpost for future people, I’ll note that I don’t recognize this as describing what happened (except of course that you felt what you say you felt, and that’s evidence that I’m wrong about what happened).
Cheers. I won’t plug you into the experience machine if you don’t sign me up for cryonics :)
Deal! I’m glad we can realize gains from trade across metaphysical chasms.