> I don’t see how it implies that I shouldn’t consider happiness to be a fundamentally, intrinsically good thing
Because it’s replacing the thing with your reaction to the thing. Does this make sense, as stated?
What I’m saying is, when we ask “what should I consider to be a fundamentally good thing”, we have nothing else to appeal to other than (the learned generalizations of) those things which our happiness comes from. Like, we’re asking for clarification about what our good-thing-detectors are aimed at. So I’m pointing out that, on the face of it, your stated fundamental values—happiness, non-suffering—are actually very very different from the pre-theoretic fundamental values—i.e. the things your good-thing-detectors detect, such as having kids, living, nurturing, connecting with people, understanding things, exploring, playing, creating, expressing, etc. Happiness is a mental event; those things are things that happen in the world or in relation to the world. Does this make sense? This feels like a fundamental point to me, and I’m not sure we’ve gotten shared clarity about this.
>I don’t see anything necessarily unreasonable about wanting everyone, including me, to experience the feeling they get when their ‘world getting better’ module is firing. (And seeing that feeling, rather than whatever triggers it, as the really important thing.)
I mean, it’s not “necessarily unreasonable”, in the sense of the orthogonality thesis of values—one could imagine an agent that coherently wants certain mental states to exist. I’m saying a weaker claim: it’s just not what you actually value. (Yes this is in some sense a rude claim, but I’m not sure what else to do, given that it’s how the world seems to me and it’s relevant and it would be more rude to pretend that’s not my current position. I don’t necessarily think you ought to engage with this as an argument, exactly. More like a hypothesis, which you could come to understand, and by understanding it you could come to recognize it as true or false of yourself; if you want to reject it before understanding it (not saying you’re doing that, just hypothetically) then I don’t see much to be gained by discussing it, though maybe it would help other people.) A reason I think it’s not actually what you value is that I suspect you wouldn’t press a button that would make everyone you love be super happy, with no suffering, and none of their material aims would be achieved (other than happiness), i.e. they wouldn’t explore or have kids, they wouldn’t play games or tell stories or make things, etc., or in general Live in any normal sense of the word; and you wouldn’t press a button like that for yourself. Would you?
>Because it’s replacing the thing with your reaction to the thing. Does this make sense, as stated?
Not without an extra premise somewhere.
>we’re asking for clarification about what our good-thing-detectors are aimed at
I think this is something we disagree on. It seems to me that one of your premises is “what is good = what our good-thing detectors are aimed at”, and I don’t share that premise. Or, to the extent that I do, the good-thing detector I privilege is different from the one you privilege; I see no reason to care more about my pre-theoretic good-thing detector than the ‘good-thing detector’ that is my whole process of moral and evaluative reflection and reasoning.
>your stated fundamental values—happiness, non-suffering—are actually very very different from the pre-theoretic fundamental values—i.e. the things your good-thing-detectors detect, such as having kids, living, nurturing, connecting with people, understanding things, exploring, playing, creating, expressing, etc.
That’s the thing—I’m okay with that, and I still don’t see why I ought not to be.
>Happiness is a mental event; those things are things that happen in the world or in relation to the world. Does this make sense?
Of course—and the mental events are the things that I think ultimately matter.
>I’m saying a weaker claim: it’s just not what you actually value.
I think this is true for some definitions of value, so to some degree our disagreement here is semantic. But it also seems that we disagree about which senses of ‘value’ or ‘values’ are important. I have moral values that are not reducible to, or straightforwardly derivable from, the values you could infer from my behaviour. Like I said, I am imperfect by my own lights—my moral beliefs and judgments are one important input to my decision-making, but they’re not the only ones and they don’t always win. (In fact I’m not always even thinking on those terms; as I presume most people do, I spend a lot of my time more or less on autopilot. The autopilot was not programmed independently from my moral values, but nor is it simply an implementation (even an imperfect heuristic one) of them.)
>A reason I think it’s not actually what you value is that I suspect you wouldn’t press a button that would make everyone you love be super happy, with no suffering, and none of their material aims would be achieved (other than happiness), i.e. they wouldn’t explore or have kids, they wouldn’t play games or tell stories or make things, etc., or in general Live in any normal sense of the word; and you wouldn’t press a button like that for yourself. Would you?
I’ve often thought about this sort of question, and honestly it’s hard to know which versions of wireheading/experience-machining I would or wouldn’t do. One reason is that in all realistic scenarios, I would distrust the technology and be terrified of the ways it might backfire. But also, I am well aware that I might hold back from doing what I believed I ought to do—perhaps especially with respect to other people, because I have a (healthy, in the real world) instinctive aversion to overriding other people’s autonomy even for their own good. Again though, the way I use these words, there is definitely no contradiction between the propositions “I believe state of the world X would be better”, “I believe I ought to make the world better where possible”, and “in reality I might not bring about state X even if I could”.
edit: FWIW on the concrete question you asked, IF I somehow had complete faith in the experience machine reliably working as advertised, and IF all my loved ones were enthusiastically on board with the idea, I reckon I would happily plug us all in. In reality they probably wouldn’t be, so I would have to choose between upsetting them terribly by doing it alone, or plugging them in against their wishes, and I reckon in that case I would probably end up doing neither and sticking with the status quo.
edit again: That idea of “complete faith” in the machine having no unexpected downsides is hard to fully internalise; in all realistic cases I would have at least some doubt, and that would make it easy for all the other pro-status-quo considerations to win out. But if I was truly 100% convinced that I could give myself and everyone else the best possible life, as far as all our conscious experiences were concerned? It would be really hard to rationalise a decision to pass that up. I still can’t imagine doing it to other people if they were begging me not to, but I think I would desperately try to convince them and be very upset when I inevitably failed. And if/when there was nobody left to be seriously hurt by my plugging myself in, and the option was still available to me, I think I’d do that.
>to some degree our disagreement here is semantic
The merely-lexical ambiguity is irrelevant of course. You responded to the top level post giving your reasons for not taking action re/ cryonics. So we’re just talking about whatever actually affects your behavior. I’m taking sides in your conflict, trying to talk to the part of you that wants to affect the world, against the part of you that wants to prevent you from trying to affect the world (by tricking your good-world-detectors).
>I see no reason to care more about my pre-theoretic good-thing detector than the ‘good-thing detector’ that is my whole process of moral and evaluative reflection and reasoning.
Reflection and reasoning: we can agree these things are good. I’m not attacking reason; I’m trying to implement reason by asking about the reasoning that took you from your pre-theoretic good-thing-detector to your post-theoretic good-thing judgements. I’m pointing out that there seems, prima facie, to be a huge divergence between these two. Do you see the apparent huge divergence? There could be a huge divergence without there being a mistake; that’s sort of the point of reason, to reach conclusions you didn’t know already. It’s just that I don’t at all see the reasoning that led you there, and it still seems to have produced wrong conclusions. So my question is: what was the reasoning that brought you to the conclusion that, despite what your pre-theoretic good-thing-detectors are aimed at (play, life, etc.), actually what’s a good thing is happiness (contra life)? So far I don’t think you’ve described that reasoning, only stated that its result is that you value happiness. (Which is fine, I haven’t asked so explicitly, and maybe it’s hard to describe.)
The ‘reasoning’ is basically just teasing out implications, checking for contradictions, that sort of thing. The ‘reflection’ includes what could probably be described as a bunch of appeals to intuition. I don’t think I can explain or justify those in a particularly interesting or useful way; but I will restate that I can only assume you’re doing the same thing at some point.
How, in broad strokes, does one tease out the implication that one cares mainly about happiness and suffering, from the pre-theoretic caring about kids, life, play, etc.?
Well I pre-theoretically care about happiness and suffering too. I hate suffering, and I hate inflicting suffering or knowing others are suffering. I like being happy, and like making others happy or knowing they’re happy. So it’s not really a process of teasing out, it’s a process of boiling down, by asking myself which things seem to matter intrinsically and which instrumentally. One way of doing this is to consider hypothetical situations, and selectively vary them and observe the difference each variation makes to my assessment of the situation. (edit: so that’s one place the ‘teasing out’ happens—I’ll work out what value set X implies about hypothetical scenarios a, b, and c, and see if I’m happy to endorse those implications. It’s probably roughly what Rawls meant by ‘reflective equilibrium’—induce principles, deduce their implications, repeat until you’re more or less satisfied.)
Basically, conscious states are the only things I have direct access to, and I ‘know’ (in a way that I couldn’t argue someone else into accepting, if they didn’t perceive it directly, but that is more obvious to me than just about anything else) that some of them are good and some of them are bad. Via emotional empathy and intellectual awareness of apparently relevant similarities, I deduce that other people and animals have a similar capacity for conscious experience, and that it’s good when they have pleasant experiences and bad when they have unpleasant ones. (edit: and these convictions are the ones I remain sure of, at the end of the boiling-down/reflective equilibrium process)
I think I’ll bow out of the discussion now—I think we’ve both done our best, but to be blunt, I feel like I’m having to repeatedly assure you that I do mean the things I’ve said and I have thought about them, and like you are still trying to cure me of ‘mistakes’ that are only mistakes according to premises that seem almost too obvious for you to state, but that I really truly don’t share.
>Well I pre-theoretically care about happiness and suffering too.
That you think this, and that it might be the case, for the record, wasn’t previously obvious to me, and makes a notch more sense out of the discussion.
For example, it makes me curious as to whether, when observing, say, a pre-civilization group of humans, I’d end up wanting to describe them as caring about happiness and suffering, beyond caring about various non-emotional things.
Ok, actually I can see a non-Goodharting reason to care about emotional states as such, though it’s still instrumental, so it isn’t what tslarm was talking about: emotional states are blunt-force brain events, so in a context (e.g. modern life) where the locality of emotions doesn’t fit the locality of the demands of life, emotions are disruptive—especially suffering, or maybe more subtly any lack of happiness.
>I think I’ll bow out of the discussion now
Ok, thanks for engaging. Be well. Or I guess, be happy and unsufferful.
>I think we’ve both done our best, but to be blunt, I feel like I’m having to repeatedly assure you that I do mean the things I’ve said and I have thought about them, and like you are still trying to cure me of ‘mistakes’ that are only mistakes according to premises that seem almost too obvious for you to state, but that I really truly don’t share.
I don’t want to poke you more and risk making you engage when you don’t want to, but just as a signpost for future people, I’ll note that I don’t recognize this as describing what happened (except of course that you felt what you say you felt, and that’s evidence that I’m wrong about what happened).
Cheers. I won’t plug you into the experience machine if you don’t sign me up for cryonics :)
Deal! I’m glad we can realize gains from trade across metaphysical chasms.