When you said to suppose that “everything we want is [impossible]”, did you mean that literally? Because normally if what you want is impossible, you should start wanting a different thing (or do that super-saiyan effort thing if it’s that kind of impossible), but if everything is impossible, you couldn’t do that either. If there is no possible action that produces a favorable outcome, I can think of no reason to act at all.
(Of course, if I found myself in that situation, I would assume I had made a math error or something, and start trying to do the things I want anyway, on the assumption that I messed up somewhere when I decided they were impossible.)
If you didn’t mean -everything-, then why not just start pursuing the most valuable thing that is still possible to do?
Perhaps I misunderstood the question?
I didn’t mean it literally. I meant, everything on which we base our long-term plans.
For example:
You go to school, save up money, try to get a good job, try to advance in your career… in the belief that you will find the results rewarding. However, this is pretty easily dismantled if you’re not a life-extensionist and/or cryonicist (and don’t believe in an afterlife). All it takes is for you to have the realization that
1) If your memory of an experience is erased thoroughly enough (and you don’t have access to anything external that will have been altered by the experience), then the experience might as well have not happened. Or, insofar as it altered you in some way other than through your memories, it is interchangeable with any other experience that would have altered you in the same way.
2) In the absence of an afterlife, if you die, all your memories get permanently deleted shortly after, and you have no further access to anything influenced by your past experiences, including yourself. Therefore, death robs you of your past, present, and future making it as if you had never lived. Obviously other people will remember you for a while, but you will have no awareness of that because you will simply not exist.
Therefore, no matter what you do, it will get cancelled out completely. The way around it is to make a superhuman effort at doing the not-literally-prohibited-by-physics-as-far-as-we-know kind of impossible by working to make cryonics, anti-aging, uploading, or AI (which presumably will then do one of the preceding three for you) possible. But perhaps at an even deeper level our idea of what it is these courses of action are attempting to preserve is itself self-contradictory.
Does that necessarily discredit these courses of action?
If your memory of an experience is erased [...] then the experience might as well have not happened.
Why? If I have to choose between “happy for an hour, then memory-wiped” and “miserable for an hour, then memory-wiped” I unhesitatingly choose the former. Why should the fact that I won’t remember it mean that there’s no difference at all between the two? One of them involves someone being happy for an hour and the other someone being miserable for an hour.
death robs you of your past, present, and future making it as if you had never lived.
How so? Obviously my experience 100 years from now (i.e., no experience since I will most likely be very dead) will be the same as if I had never lived. But why on earth should what I care about now be determined by what I will be experiencing in 100 years?
I don’t understand this argument when I hear it from religious apologists (“Without our god everything is meaningless, because infinitely many years from now you will no longer exist! You need to derive all the meaning in your life from the whims of an alien superbeing!”) and I don’t understand it here either.
If you know you will be memory-wiped after an hour, it does not make sense to make long-term plans. For example, you can read a book you enjoy, if you value the feeling. But if you read a scientific book, I think the pleasure from learning would be somewhat spoiled by knowing that you are going to forget this all soon. The learning would mostly become a lost purpose, unless you can use the learned knowledge within the hour.
Knowing that you are unlikely to be alive after 100 years prevents you from making some plans which would be meaningful in a parallel universe where you are likely to live 1000 years. Some of those plans are good according to the values you have now, but are outside of your reach. Thus future death does not make life completely meaningless, but it ruins some value even now.
I do agree that there are things you might think you want that don’t really make sense given that in a few hundred years you’re likely to be long dead and your influence on the world is likely to be lost in the noise.
But that’s a long way from saying—as bokov seems to be—that this invalidates “everything on which we base our long-term plans”.
I wouldn’t spend the next hour reading a scientific book if I knew that at the end my brain would be reset to its prior state. But I will happily spend time reading a scientific book if, e.g., it will make my life more interesting for the next few years, or lead to higher income which I can use to retire earlier, buy nicer things, or give to charity, even if all those benefits take place only over (say) the next 20 years.
Perhaps I’m unusual, or perhaps I’m fooling myself, but it doesn’t seem to me as if my long-term plans, or anyone else’s, are predicated on living for ever or having influence that lasts for hundreds of years.
First of all, I’m really glad we’re having this conversation.
This question is the one philosophical issue that has been bugging me for several years. I read through your post and your comments and felt like someone was finally asking this question in a way that has a chance of being understood well enough to be resolved!
… then I began reading the replies, and it’s a strange thing, the inferential distance is so great in some places that I also begin to lose the meaning of your original question, even though I have the very same question.
Taking a step back—there is something fundamentally irrational about my personal concept of identity, existence and mortality.
I walk around with this subjective experience that I am so important, and my life is so important, and I want to live always. On the other hand, I know that my consciousness is not important objectively. There are two reasons for this. First, there is no objective morality—no ‘judger’ outside myself. This raises some issues for me, but since Less Wrong can address this to some extent, possibly more fully, let’s put this aside for the time being. Secondly, even by my own subjective standards, my own consciousness is not important. In the aspects that matter to me, my consciousness and identity are identical to those of another. My family and I could be replaced by others and I really don’t mind. (We could be replaced with sufficiently complex alien entities, and I don’t mind, or with computer simulations of entities I might not even recognize as persons, and I don’t mind, etc.)
So why does everything—in particular, my longevity and my happiness—matter so much to me?
Sometimes I try to explain it in the following way: although “cerebrally” I should not care, I do exist, as a biological organism that is the product of evolution, and so I do care. I want to feel comfortable and happy, and that is a biological fact.
But I’m not really satisfied with this its-just-a-fact-that-I-care explanation. It seems that if I were more fully rational, I would (1) be able to assimilate in a more complete way that I am going to not exist sometime (I notice I continually act and feel as though my existence is forever, and this is tied in with continuing to invest in my values even though they insist they want to be tied to something that is objectively real) and (2) more consistently realize in a cerebral rather than biological way that my values and my happiness are not important to cerebral-me … and allow this to affect my behavior.
I’ve had this question forever, but I used to frame it as a theist. My observation as a child was that you worry about these things until you’re in an existential frenzy, and then you go downstairs and eat a turkey sandwich. There’s no resolution, so you just let biology take over.
But it seems there ought to be a resolution, or at the very least a moniker for the problem that could be used to point to it whenever you want to bring it up.
Can you say more about why “it’s just a fact that I care” is not satisfying? Because from my perspective that’s the proper resolution… we value what we value, we don’t value what we don’t value, what more is there to say?
It is a fact that I care, we agree.
Perhaps the issue is that I believe I should not care—that if I were more rational, I would not care.
That my values are based on a misunderstanding of reality, just as the title of this post says.
In particular, my values seem to be pinned on ideas that are not true—that states of the universe matter, objectively rather than just subjectively, and that I exist forever/always.
This “pinning” doesn’t seem to be that critical—life goes on, and I eat a turkey sandwich when I get hungry. But it seems unfortunate that I should understand cerebrally (to the extent that I am capable) that my values are based on an illusion, but that my biology demands that I keep on as though my values were based on something real. To be very dramatic, it is like some concept of my ‘self’ is trapped in this nonsensical machine that keeps on eating and enjoying and caring like Sisyphus.
Put this way, it just sounds like a disconnect in the way our hardware and software evolved—my brain has evolved to think about how to satisfy certain goals supplied by biology, which often includes the meta-problem of prioritizing and evaluating these goals. The biology doesn’t care if the answer returned is ‘mu’ in the recursion, and furthermore doesn’t care if I’m at a step in this evolution where checking-out of the simulation-I’m-in seems just as reasonable an answer as any other course of action.
Fortunately, my organism just ignores those nihilistic opinions. (Perhaps this ignoring also evolved, socially or more fundamentally in the hardware.) I say fortunately, because I have other goals besides Tarski, or finding resolutions to these value conundrums.
In particular, my values seem to be pinned on ideas that are not true—that states of the universe matter, objectively rather than just subjectively, and that I exist forever/always.
Well, if they are, and if I understand what you mean by “pinned on,” then we should expect the strength of those values to weaken as you stop investing in those ideas.
I can’t tell from your discussion whether you don’t find this to be true (in which case I would question what makes you think the values are pinned on the ideas in the first place), or whether you’re unable to test because you haven’t been able to stop investing in those ideas in the first place.
If it’s the latter, though… what have you tried, and what failure modes have you encountered?
My values seem to be pinned on these ideas (the ones that are not true) because while I am in the process of caring about the things I care about, and especially when I am making a choice about something, I find that I am always making the assumption that these ideas are true—that the states of the universe matter and that I exist forever.
When it occurs to me to remember that these assumptions are not true, I feel a great deal of cognitive dissonance. However, the cognitive dissonance has no resolution. I think about it for a little while, go about my business, and discover some time later I forgot again.
I don’t know if a specific example will help or not. I am driving home, in traffic, and my brain is happily buzzing with thoughts. I am thinking about all the people in cars around me and how I’m part of a huge social network and whether the traffic is as efficient as it could be and civilization and how I am going to go home and what I am going to do. And then I remember about death, the snuffing out of my awareness, and something about that just doesn’t connect. It’s like I can empathize with my own non-existence (hopefully this example is something more than just a moment of psychological disorder) and I feel that my current existence is a mirage. Or rather, the moral weight that I’ve given it doesn’t make sense. That’s what the cognitive dissonance feels like.
I want to add that I don’t believe I am that unusual. I think this need for an objective morality (objective value system) is why some people are naturally theists.
I also think that people who think wire-heading is a failure mode must be in the same boat that I’m in.
we value what we value, we don’t value what we don’t value, what more is there to say?
I’m confused what you mean by this. If there wasn’t anything more to say, then nobody would/should ever change what they value? But people’s values change over time, and that’s a good thing. For example in medieval/ancient times people didn’t value animals’ lives and well-being (as much) as we do today. If a medieval person tells you “well we value what we value, I don’t value animals, what more is there to say?”, would you agree with him and let him go on burning cats for entertainment, or would you try to convince him that he should actually care about animals’ well-being?
You are of course using some of your values to instruct other values. But they need to be at least consistent and it’s not really clear which are the “more-terminal” ones. It seems to me byrnema is saying that privileging your own consciousness/identity above others is just not warranted, and if we could, we really should self-modify to not care more about one particular instance, but rather about how much well-being/eudaimonia (for example) there is in the world in general. It seems like this change would make your value system more consistent and less arbitrary and I’m sympathetic to this view.
But people’s values change over time, and that’s a good thing. For example in medieval/ancient times people didn’t value animals’ lives and well-being (as much) as we do today. If a medieval person tells you “well we value what we value, I don’t value animals, what more is there to say?”, would you agree with him and let him go on burning cats for entertainment, or would you try to convince him that he should actually care about animals’ well-being?
Is that an actual change in values? Or is it merely a change of facts—much greater availability of entertainment, much less death and cruelty in the world, and the knowledge that humans and animals are much more similar than it would have seemed to the medieval worldview?
The more I think about this question, the less certain I am that I know what an answer to it might even look like. What kinds of observations might be evidence one way or the other?
Do people who’ve changed their mind consider themselves to have different values from their past selves? Do we find that when someone has changed their mind, we can explain the relevant values in terms of some “more fundamental” value that’s just being applied to different observations (or different reasoning), or not?
Can we imagine a scenario where an entity with truly different values—the good ol’ paperclip maximizer—is persuaded to change them?
I guess that’s my real point—I wouldn’t even dream of trying to persuade a paperclip maximizer to start valuing human life (except insofar as live humans encourage the production of paperclips) - it values what it values, it doesn’t value what it doesn’t value, what more is there to say? To the extent that I would hope to persuade a medieval person to act more kindly towards animals, it would be because and in terms of the values that they already have, that would likely be mostly shared with mine.
So, if I start out treating animals badly, and then later start treating them kindly, that would be evidence of a pre-existing valuing of animals which was simply being masked by circumstances. Yes?
If I instead start out acting kindly to animals, and then later start treating them badly, is that similarly evidence of a pre-existing lack of valuing-animals which had previously been masked by circumstances? Or does it indicate that my existing, previously manifested, valuing of animals is now being masked by circumstances?
So, if I start out treating animals badly, and then later start treating them kindly, that would be evidence of a pre-existing valuing of animals which was simply being masked by circumstances. Yes?
Either that, or that your present kind-treating of animals is just a manifestation of circumstances, not a true value.
If I instead start out acting kindly to animals, and then later start treating them badly, is that similarly evidence of a pre-existing lack of valuing-animals which had previously been masked by circumstances? Or does it indicate that my existing, previously manifested, valuing of animals is now being masked by circumstances?
Could be either. To figure it out, we’d have to examine those surrounding circumstances and see what underlying values seemed consistent with your actions. Or we could assume that your values would likely be similar to those of other humans—so you probably value the welfare of entities that seem similar to yourself, or potential mates or offspring, and so value animals in proportion to how similar they seem given the circumstances and available information.
(nods) Fair enough. Thanks for the clarification.
Well, whether it’s a “real” change may be beside the point if you put it this way. Our situation and our knowledge are also changing, and maybe our behavior should also change. If personal identity and/or consciousness are not fundamental, how should we value those in a world where any mind-configurations can be created and copied at will?
So there’s a view that a rational entity should never change its values. If we accept that, then any entity with different values from present-me seems to be in some sense not a “natural successor” of present-me, even if it remembers being me and shares all my values. There seems to be a qualitative distinction between an entity like that and upload-me, even if there are several branching upload-mes that have undergone various experiences and would no doubt have different views on concrete issues than present-me.
But that’s just an intuition, and I don’t know whether it can be made rigorous.
Fair enough.
Agreed that if someone expresses (either through speech or action) values that are opposed to mine, I might try to get them to accept my values and reject their own. And, sure, having set out to do that, there’s a lot more to be relevantly said about the mechanics of how we hold values, and how we give them up, and how they can be altered.
And you’re right, if our values are inconsistent (which they often are), we can be in this kind of relationship with ourselves… that is, if I can factor my values along two opposed vectors A and B, I might well try to get myself to accept A and reject B (or vice-versa, or both at once). Of course, we’re not obligated to do this by any means, but internal consistency is a common thing that people value, so it’s not surprising that we want to do it. So, sure… if what’s going on here is that byrnema has inconsistent values which can be factored along a “privilege my own identity”/”don’t privilege my own identity” axis, and they net-value consistency, then it makes sense for them to attempt to self-modify so that one of those vectors is suppressed.
With respect to my statement being confusing… I think you understood it perfectly, you were just disagreeing—and, as I say, you might well be correct about byrnema. Speaking personally, I seem to value breadth of perspective and flexibility of viewpoint significantly more than internal consistency. “Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes.”
Of course, I do certainly have both values, and (unsurprisingly) the parts of my mind that align with the latter value seem to believe that I ought to be more consistent about this, while the parts of my mind that align with the former don’t seem to have a problem with it.
I find I prefer being the parts of my mind that align with the former; we get along better.
to value breadth of perspective and flexibility of viewpoint significantly more than internal consistency
As humans we can’t change/modify ourselves too much anyway, but what about if we’re able to in the future? If you can pick and choose your values? It seems to me that, for such an entity, not valuing consistency is like not valuing logic. And then there’s the argument that it leaves you open to Dutch booking / blackmail.
Yes, inconsistency leaves me open to Dutch booking, which perfect consistency would not. Eliminating that susceptibility is not high on my list of self-improvements to work on, but I agree that it’s a failing.
Also, perceived inconsistency runs the risk of making me seem unreliable, which has social costs. That said, being seen as reliable appears to be a fairly viable Schelling point among my various perspectives (as you say, the range is pretty small, globally speaking), so it’s not too much of a problem.
In a hypothetical future where the technology exists to radically alter my values relatively easily, I probably would not care nearly so much about flexibility of viewpoint as an intrinsic skill, much in the same way that electronic calculators made the ability to do logarithms in my head relatively valueless.
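As an aside on the Dutch-booking point above: the usual worry is that an agent with cyclically inconsistent preferences can be “money-pumped”. Here is a minimal Python sketch of that idea (the agent, items, and fee are purely hypothetical illustrations, not anything from this thread):

    # A minimal money-pump sketch: an agent with cyclic preferences A > B > C > A
    # will pay a small fee for each "upgrade", so a bookie can walk it around the
    # cycle until it holds exactly what it started with, only poorer.

    prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y): x is strictly preferred to y
    fee = 1.0

    def will_trade(offered, held):
        """The agent accepts a swap (and pays `fee`) iff it strictly prefers what is offered."""
        return (offered, held) in prefers

    holding, money = "A", 10.0
    for offered in ("C", "B", "A"):  # offer C for A, then B for C, then A for B
        if will_trade(offered, holding):
            holding, money = offered, money - fee

    print(holding, money)  # -> A 7.0: same item as at the start, three fees lighter

Whether that susceptibility is worth engineering away is exactly the judgment call being made in the comment above.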
My position would be that actions speak louder than thoughts. If you act as though you value your own happiness more than that of others… maybe you really do value your own happiness more than that of others? If you like doing certain things, maybe you value those things—I don’t see anything irrational in that.
(It’s perfectly normal to deceive ourselves into believing our values are more selfless than they actually are. I wouldn’t feel guilty about it—similarly, if your actions are good, it doesn’t really matter whether you’re doing them for the sake of other people or for your own satisfaction.)
The other resolution I can see would be to accept that you really are a set of not-entirely-aligned entities, a pattern running on untrusted hardware. At which point parts of you can try to change other parts of you. That seems rather perilous, though. FWIW I accept the meat and its sometimes-contradictory desires as part of me; it feels meaningless to draw lines inside my own brain.
Yes, this is where I’m at.
Does that necessarily discredit these courses of action?
Yes, under the assumption that you only value things that future-you will feel the effects of. If this is true, then all courses of action are equally rational and it doesn’t matter what you do—you’re at null.
If, on the other hand, you are a being that values at least one thing you will not directly experience, then the answer is no: these actions can still have worth. Most humans are like this, even if they don’t realize it.
Well...you’ll still die eventually.