You wrote that you have an “impartial observer” who shares “fundamental values” with you [...]
I feel like you’re reifying the impartial observer, and drawing some dubious conclusions from that. The impartial observer is just a metaphor—it’s me, trying to think about the world from a certain perspective. (I know you haven’t literally failed to realise that, but it’s hard for me to make sense of some of the things you’re saying, unless there’s some kind of confusion between us on that point.)
All of my varied and sometimes conflicting feelings, beliefs, instincts, desires etc. are equally real. Some of them I endorse on reflection, others I don’t; some of them I see as pointing at something fundamentally important, others I don’t.
the “impartial observer” pretends that your “instincts” are aimed merely at feelings
I don’t think I’ve ever suggested that my instincts are “aimed merely at feelings”—if they’re ‘aimed’ at anything other than their direct targets, it probably makes more sense to say they’re aimed at the propagation of my genes, which is presumably why they’re part of me in the first place. And on reflection, I don’t see the propagation of my genes as the supreme good to be aimed at above all else, so it’s not surprising that I’m sometimes going to disagree with my instincts.
as if the point of cryonics is [...]
“the point of cryonics” can be whatever someone signing up wants it to be! I get that for some people, death is the ultimate bad thing, and I have some sympathy with them (you?) on that. I don’t like death, I’m afraid of it, etc. I haven’t talked myself into thinking that I’m fine with it. But, on reflection, and like I said a few comments up, when I think about personal identity and what it actually means for a specific person to persist through time, I’m not convinced that it is fundamentally important whether an experience-moment belongs to one particular entity or another—or whether a set of experience-moments belongs to one entity or a group of others. (And that’s what’s fundamentally important to me—conscious experience. That’s what I think matters in the world; the quality of conscious experiences is what I think makes the world good or bad, better or worse.)
None of this means that death doesn’t suck. But to me, it primarily sucks because of all the pain it causes. If we all somehow really got used to it, to the point that we could meet it without fear or horror, and could farewell each other without overwhelming grief, I would see that as a great improvement. A hundred generations living a hundred years each doesn’t seem intrinsically worse to me than a single quasi-immortal generation living for 10,000 years. Right now I’d take the second option, because yeah, death sucks. But (setting aside the physical decay that in fact tends to precede and accompany it), in my opinion the degree to which it sucks is contingent on our psychology.
I perceive this as a self-destructive conflict, and I wanted to explore and make precise what you meant by “the values that my ‘impartial observer’ shares with me”, because that seems like part of the conflict.
[...]
What I’m saying is that happiness is what your brain does when its “is the world getting better” detector is returning “hell yeah!”. So what you’re saying is a vicious circle. (It’s fine though, because your “is the world getting better” detector should still be mostly intact. You just have to decide to listen to it, rather than pretending that you want to trick it.)
I appreciate your directness, but I don’t really appreciate the ratio of confident, prescriptive psychoanalysis to actual argument. You’re asserting a lot, but giving me few reasons to take your assertions seriously enough to gain anything from them. (I don’t mean this conversation should be about providing me with some gain—but I don’t get the sense you are open to having your own mind changed on any of the topics we’re discussing; your purpose seems to be to fix me in some way.) I genuinely disagree with you on the fundamental importance of happiness. I might be wrong, but I’m not simply confused—at least not in a way that you can dispel simply by asking questions and asserting your own conflicting beliefs.
Sorry if that comes across in an insulting way; I do appreciate your attempts to work through these issues with me. But this has felt like a fairly one-sided dialogue, in the sense that you seem to think exactly one of us has a lot to learn. Which isn’t necessarily a problem, and perhaps it’s the attitude most of us take into most such discussions—but if you want to teach me, I need you to do more to positively support your own convictions, rather than just confidently assert them and try to Socratic-dialogue your way to a diagnosis of what’s wrong with mine.
>the ratio of confident, prescriptive psychoanalysis to actual argument
I appreciate you engaging generally, and specifically mentioning these process points. The reason I’m stating things without caveats etc. is that it feels like there’s a huge gulf between us, and so it seems like the only way that would possibly get anywhere is to make clear conjectures and describe them as bluntly as possible, so that key points of disagreement can come to the front. I want to provide arguments for the propositions, but I want to direct efforts to do that towards where it matters most, so I’m hoping to home in on key points. I’m not hoping to dispel your confusions just by stating some position, I’m hoping to clarify your position in contrast to points I’m stating. My psychoanalyses are rude in some sense, and I want to hold them very lightly; I do at least put uncertainty-words on them (e.g. “I perceive this as....”) to hopefully indicate that I’m describing not something that I’m permanently confident of, but something that’s my current best guess given the data.
>You’re asserting a lot, but giving me few reasons to take your assertions seriously enough to gain anything from them.
> I genuinely disagree with you on the fundamental importance of happiness
I described a view of what happiness is, and the implication of that view that happiness isn’t a terminal value. I don’t think you responded to that, except to say that you disagreed with the implication, and that you have a different definition of value, which is “making the world better”. Maybe it would help if you expanded more on your disagreement? Did my argument make sense, or is there something to clarify?
The reason this seems important to me is that upthread you said:
>Not a literal third party, but I do try to think about ethical questions from the perspective of a hypothetical impartial observer. (With my fundamental values, though; so if it’s anyone, it’s basically me behind a veil of ignorance.)
Basically I think our disagreement is over whether the impartial judgements actually share your values. I’ve been trying to point out how it looks a lot more like the impartial judgements are using a different criterion for what constitutes a better world than the criterion implied by your desires. E.g. on the one hand you’re afraid of your loved ones dying, which I take to imply that the world is better if your loved ones don’t die. On the other hand some of your other statements sound like the only problem is the fear and unhappiness around death. So basically my question is, how do you know that the impartial conclusions are right, given that you still have fear of your loved ones dying?
Another point that might matter is that I don’t think it makes sense to talk about “moments of conscious experience” as isolated from the person who’s experiencing them. Which opens the door for death mattering—if we care about conscious experience, and conscious experience implies identity across time, we might care about those identities continuing. The reason I think it doesn’t make sense to talk of isolated experience is that experience seems like it always involves beliefs and significance, not mere valence or data.
Re your first paragraph—fair enough, and thanks for clarifying. Something about this approach has rubbed me the wrong way, but I am stressed IRL at the moment and that is probably making me pricklier than I would otherwise be. (By the way, so that I don’t waste your time, I should say that I might stop responding at some point before anything is resolved. If so, please don’t interpret that as an unfriendly or insulting response—it will just be the result of me realising that I’m finding the conversation stressful, and/or spending too much time on it, and should probably leave it alone.)
I described a view of what happiness is, and the implication of that view that happiness isn’t a terminal value.
I think you’re referring to the following lines—let me know if I’ve missed others.
Happiness is something that sometimes happens when you and the world are on the way towards good things.
Depending on exactly how you mean this, I think it might beg the question, or at least be missing a definition of ‘good things’ and a justification for why that excludes happiness. Or, if you mean ‘good things’ loosely enough that I might agree with the quoted sentence, I don’t think it bears on the question of whether happiness is/ought to be a terminal value.
The quality of conscious experience you’re talking about is a derivative aspect, like a component or a side effect, of a process of your mind learning to understand and affect the world to get what it wants.
I would quibble with this, if “your mind learning to understand and affect the world to get what it wants” is intended as an exhaustive description of how happiness arises—but more to the point, I don’t see how it implies that I shouldn’t consider happiness to be a fundamentally, intrinsically good thing.
happiness is what your brain does when its “is the world getting better” detector is returning “hell yeah!”
Again, even if this is true, I don’t think it bears on the fundamental point. I don’t see anything necessarily unreasonable about wanting everyone, including me, to experience the feeling they get when their ‘world getting better’ module is firing. (And seeing that feeling, rather than whatever triggers it, as the really important thing.)
I think you see a conflict between one (unconscious) part of my mind saying ‘the world is getting better [in some way that isn’t entirely about me or other people feeling happier or suffering less], have some happiness as a reward!’ and the part that writes and talks and (thinks that it) reasons saying ‘increasing happiness and reducing suffering is what it means for the world to get better!’. But I just don’t have a problem with that conflict, or at least I don’t see how it implies that the ‘happiness is good’ side is wrong. (Likewise for the conflict between my ‘wanting’ one thing in a moral sense and ‘wanting’ other, sometimes conflicting things in other senses.)
Basically I think our disagreement is over whether the impartial judgements actually share your values. I’ve been trying to point out how it looks a lot more like the impartial judgements are using a different criterion for what constitutes a better world than the criterion implied by your desires. E.g. on the one hand you’re afraid of your loved ones dying, which I take to imply that the world is better if your loved ones don’t die. On the other hand some of your other statements sound like the only problem is the fear and unhappiness around death. So basically my question is, how do you know that the impartial conclusions are right, given that you still have fear of your loved ones dying?
From a certain perspective I’m not confident that they’re right, but I don’t see any good reason for you to be confident that they’re wrong. I am confident that they’re right in the sense that my ground level, endorsed-upon-careful-reflection moral/evaluative convictions just seem like fundamental truths to me. I realise there’s absolutely no reason for anyone else to find that convincing—but I think everyone who has moral or axiological opinions is making the same leap of faith at some point, or else fudging their way around it by conflating the normative and the merely descriptive. When you examine your convictions and keep asking ‘why’, at some point you’re either going to hit bottom or find yourself using circular reasoning. (Or I guess there could be some kind of infinite regress, but I’m not sure what that would look like and I don’t think it would be an improvement over the other options.)
I know that’s probably not very satisfying, but that’s basically why I said above that I can’t see us changing each other’s mind at this fundamental level. I’ve got my ground-level convictions, you’ve got yours, we’ve both thought about them pretty hard, and unless one of us can either prove that the other is being inconsistent or come up with a novel and surprisingly powerful appeal to intuition, I’m not sure what we could say to each other to shift them.
Another point that might matter is that I don’t think it makes sense to talk about “moments of conscious experience” as isolated from the person who’s experiencing them. Which opens the door for death mattering—if we care about conscious experience, and conscious experience implies identity across time, we might care about those identities continuing. The reason I think it doesn’t make sense to talk of isolated experience is that experience seems like it always involves beliefs and significance, not mere valence or data.
I should have gone to bed a while ago and this is a big topic, so I won’t try to respond now, but I agree that this sort of disagreement is probably important. I do think I’m more likely to change my views on personal identity, moments of experience etc. than on most of what we’ve been discussing, so it could be fruitful to elaborate on your position if you feel like it.
(But I should make it clear that I see consciousness—in the ‘hard problem’, qualia, David Chalmers sense—as real and irreducible (and, as is probably obvious by now, supremely important). That doesn’t mean I think worrying about the hard problem is productive—as best I can tell there’s no possible argument or set of empirical data that would solve it—but I find every claim to have dissolved the problem, every attempt to define qualia out of existence, etc., excruciatingly unconvincing. So if your position on personal identity etc. conflicts with mine on those points, it would probably be a waste of time to elaborate on it with the intention of convincing me—though of course it could still serve to clarify a point of disagreement.)
>I think everyone who has moral or axiological opinions is making the same leap of faith at some point, or else fudging their way around it by conflating the normative and the merely descriptive
This may be right, but we can still notice differences, especially huge ones, and trace back their origins. It actually seems pretty surprising, and at least interesting, if you and I have wildly, metaphysically disparate values.
To this end I think it would help if you laid out your own ground-level values, and explained to whatever extent is possible why you hold them (and perhaps in what sense you think they are correct).
I mean, at risk of seeming flippant, I just want to say “basically all the values your ‘real person’ holds”?
Like, it’s just all that stuff we both think is good. Play, life, children, exploration; empowering others to get what they want, and freeing them from pointless suffering; understanding, creating, expressing, communicating, …
I’m just… not doing the last step where I abstract that into a mental state, and then replace it with that mental state. The “correctness” comes from Reason, it’s just that the Reason is applied to more greatly empower me to make the world better, to make tradeoffs and prioritizations, to clarify things, to propagate logical implications… For example, say I have an urge to harm someone. I generally decide to nevertheless not harm them, because I disagree with the intuition. Maybe it was put there by evolution fighting some game I don’t want to fight, maybe it was a traumatic reaction I had to something years ago; anyway, I currently believe the world will be better if I don’t do that. If I harm someone, they’ll be less empowered to get what they want; I’ll less live among people who are getting what they want, and sharing with me; etc.
> I don’t see how it implies that I shouldn’t consider happiness to be a fundamentally, intrinsically good thing
Because it’s replacing the thing with your reaction to the thing. Does this make sense, as stated?
What I’m saying is, when we ask “what should I consider to be a fundamentally good thing”, we have nothing else to appeal to other than (the learned generalizations of) those things which our happiness comes from. Like, we’re asking for clarification about what our good-thing-detectors are aimed at. So I’m pointing out that, on the face of it, your stated fundamental values—happiness, non-suffering—are actually very very different from the pre-theoretic fundamental values—i.e. the things your good-thing-detectors detect, such as having kids, living, nurturing, connecting with people, understanding things, exploring, playing, creating, expressing, etc. Happiness is a mental event, those things are things that happen in the world or in relation to the world. Does this make sense? This feels like a fundamental point to me, and I’m not sure we’ve gotten shared clarity about this.
>I don’t see anything necessarily unreasonable about wanting everyone, including me, to experience the feeling they get when their ‘world getting better’ module is firing. (And seeing that feeling, rather than whatever triggers it, as the really important thing.)
I mean, it’s not “necessarily unreasonable”, in the sense of the orthogonality thesis of values—one could imagine an agent that coherently wants certain mental states to exist. I’m saying a weaker claim: it’s just not what you actually value. (Yes this is in some sense a rude claim, but I’m not sure what else to do, given that it’s how the world seems to me and it’s relevant and it would be more rude to pretend that’s not my current position. I don’t necessarily think you ought to engage with this as an argument, exactly. More like a hypothesis, which you could come to understand, and by understanding it you could come to recognize it as true or false of yourself; if you want to reject it before understanding it (not saying you’re doing that, just hypothetically) then I don’t see much to be gained by discussing it, though maybe it would help other people.) A reason I think it’s not actually what you value is that I suspect you wouldn’t press a button that would make everyone you love be super happy, with no suffering, and none of their material aims would be achieved (other than happiness), i.e. they wouldn’t explore or have kids, they wouldn’t play games or tell stories or make things, etc., or in general Live in any normal sense of the word; and you wouldn’t press a button like that for yourself. Would you?
Because it’s replacing the thing with your reaction to the thing. Does this make sense, as stated?
Not without an extra premise somewhere.
we’re asking for clarification about what our good-thing-detectors are aimed at
I think this is something we disagree on. It seems to me that one of your premises is “what is good = what our good-thing detectors are aimed at”, and I don’t share that premise. Or, to the extent that I do, the good-thing detector I privilege is different from the one you privilege; I see no reason to care more about my pre-theoretic good-thing detector than the ‘good-thing detector’ that is my whole process of moral and evaluative reflection and reasoning.
your stated fundamental values—happiness, non-suffering—are actually very very different from the pre-theoretic fundamental values—i.e. the things your good-thing-detectors detect, such as having kids, living, nuturing, connecting with people, understanding things, exploring, playing, creating, expressing, etc.
That’s the thing—I’m okay with that, and I still don’t see why I ought not to be.
Happiness is a mental event, those things are things that happen in the world or in relation to the world. Does this make sense?
Of course—and the mental events are the things that I think ultimately matter.
I’m saying a weaker claim: it’s just not what you actually value.
I think this is true for some definitions of value, so to some degree our disagreement here is semantic. But it also seems that we disagree about which senses of ‘value’ or ‘values’ are important. I have moral values that are not reducible to, or straightforwardly derivable from, the values you could infer from my behaviour. Like I said, I am imperfect by my own lights—my moral beliefs and judgments are one important input to my decision-making, but they’re not the only ones and they don’t always win. (In fact I’m not always even thinking on those terms; as I presume most people do, I spend a lot of my time more or less on autopilot. The autopilot was not programmed independently from my moral values, but nor is it simply an implementation (even an imperfect heuristic one) of them.)
A reason I think it’s not actually what you value is that I suspect you wouldn’t press a button that would make everyone you love be super happy, with no suffering, and none of their material aims would be achieved (other than happiness), i.e. they wouldn’t explore or have kids, they wouldn’t play games or tell stories or make things, etc., or in general Live in any normal sense of the word; and you wouldn’t press a button like that for yourself. Would you?
I’ve often thought about this sort of question, and honestly it’s hard to know which versions of wireheading/experience-machining I would or wouldn’t do. One reason is that in all realistic scenarios, I would distrust the technology and be terrified of the ways it might backfire. But also, I am well aware that I might hold back from doing what I believed I ought to do—perhaps especially with respect to other people, because I have a (healthy, in the real world) instinctive aversion to overriding other people’s autonomy even for their own good. Again though, the way I use these words, there is definitely no contradiction between the propositions “I believe state of the world X would be better”, “I believe I ought to make the world better where possible”, and “in reality I might not bring about state X even if I could”.
edit: FWIW on the concrete question you asked, IF I somehow had complete faith in the experience machine reliably working as advertised, and IF all my loved ones were enthusiastically on board with the idea, I reckon I would happily plug us all in. In reality they probably wouldn’t be, so I would have to choose between upsetting them terribly by doing it alone, or plugging them in against their wishes, and I reckon in that case I would probably end up doing neither and sticking with the status quo.
edit again: That idea of “complete faith” in the machine having no unexpected downsides is hard to fully internalise; in all realistic cases I would have at least some doubt, and that would make it easy for all the other pro-status-quo considerations to win out. But if I was truly 100% convinced that I could give myself and everyone else the best possible life, as far as all our conscious experiences were concerned? It would be really hard to rationalise a decision to pass that up. I still can’t imagine doing it to other people if they were begging me not to, but I think I would desperately try to convince them and be very upset when I inevitably failed. And if/when there was nobody left to be seriously hurt by my plugging myself in, and the option was still available to me, I think I’d do that.
>to some degree our disagreement here is semantic
The merely-lexical ambiguity is irrelevant of course. You responded to the top level post giving your reasons for not taking action re/ cryonics. So we’re just talking about whatever actually affects your behavior. I’m taking sides in your conflict, trying to talk to the part of you that wants to affect the world, against the part of you that wants to prevent you from trying to affect the world (by tricking your good-world-detectors).
>I see no reason to care more about my pre-theoretic good-thing detector than the ‘good-thing detector’ that is my whole process of moral and evaluative reflection and reasoning.
Reflection and reasoning, we can agree these things are good. I’m not attacking reason, I’m trying to implement reason by asking about the reasoning that you took to go from your pre-theoretic good-thing-detector to your post-theoretic good-thing judgements. I’m pointing out that there seems, prima facie, to be a huge divergence between these two. Do you see the apparent huge divergence? There could be a huge divergence without there being a mistake, that’s sort of the point of reason, to reach conclusions you didn’t know already. It’s just that I don’t at all see the reasoning that led you there, and it still seems to have produced wrong conclusions. So my question is, what was the reasoning that brought you to the conclusion that, despite what your pre-theoretic good-thing-detectors are aimed at (play, life, etc.), actually what’s a good thing is happiness (contra life)? So far I don’t think you’ve described that reasoning, only stated that its result is that you value happiness. (Which is fine, I haven’t asked so explicitly, and maybe it’s hard to describe.)
The ‘reasoning’ is basically just teasing out implications, checking for contradictions, that sort of thing. The ‘reflection’ includes what could probably be described as a bunch of appeals to intuition. I don’t think I can explain or justify those in a particularly interesting or useful way; but I will restate that I can only assume you’re doing the same thing at some point.
How, in broad strokes, does one tease out the implication that one cares mainly about happiness and suffering, from the pre-theoretic caring about kids, life, play, etc.?
Well I pre-theoretically care about happiness and suffering too. I hate suffering, and I hate inflicting suffering or knowing others are suffering. I like being happy, and like making others happy or knowing they’re happy. So it’s not really a process of teasing out, it’s a process of boiling down, by asking myself which things seem to matter intrinsically and which instrumentally. One way of doing this is to consider hypothetical situations, and selectively vary them and observe the difference each variation makes to my assessment of the situation. (edit: so that’s one place the ‘teasing out’ happens—I’ll work out what value set X implies about hypothetical scenarios a, b, and c, and see if I’m happy to endorse those implications. It’s probably roughly what Rawls meant by ‘reflective equilibrium’—induce principles, deduce their implications, repeat until you’re more or less satisfied.)
Basically, conscious states are the only things I have direct access to, and I ‘know’ (in a way that I couldn’t argue someone else into accepting, if they didn’t perceive it directly, but that is more obvious to me than just about anything else) that some of them are good and some of them are bad. Via emotional empathy and intellectual awareness of apparently relevant similarities, I deduce that other people and animals have a similar capacity for conscious experience, and that it’s good when they have pleasant experiences and bad when they have unpleasant ones. (edit: and these convictions are the ones I remain sure of, at the end of the boiling-down/reflective equilibrium process)
I think I’ll bow out of the discussion now—I think we’ve both done our best, but to be blunt, I feel like I’m having to repeatedly assure you that I do mean the things I’ve said and I have thought about them, and like you are still trying to cure me of ‘mistakes’ that are only mistakes according to premises that seem almost too obvious for you to state, but that I really truly don’t share.
>Well I pre-theoretically care about happiness and suffering too.
For the record, that you think this, and that it might be the case, wasn’t previously obvious to me, and it makes a notch more sense out of the discussion.
For example, it makes me curious as to whether, when observing say a pre-civilization group of humans, I’d end up wanting to describe them as caring about happiness and suffering, beyond caring about various non-emotional things.
Ok, actually I can see a non-Goodharting reason to care about emotional states as such, though it’s still instrumental, so isn’t what tslarm was talking about: emotional states are blunt-force brain events, and so in a context (e.g. modern life) where the locality of emotions doesn’t fit into the locality of the demands of life, emotions are disruptive, especially suffering, or maybe more subtly any lack of happiness.
>I think I’ll bow out of the discussion now
Ok, thanks for engaging. Be well. Or I guess, be happy and unsufferful.
>I think we’ve both done our best, but to be blunt, I feel like I’m having to repeatedly assure you that I do mean the things I’ve said and I have thought about them, and like you are still trying to cure me of ‘mistakes’ that are only mistakes according to premises that seem almost too obvious for you to state, but that I really truly don’t share.
I don’t want to poke you more and risk making you engage when you don’t want to, but just as a signpost for future people, I’ll note that I don’t recognize this as describing what happened (except of course that you felt what you say you felt, and that’s evidence that I’m wrong about what happened).
Cheers. I won’t plug you into the experience machine if you don’t sign me up for cryonics :)
Deal! I’m glad we can realize gains from trade across metaphysical chasms.