we value what we value, we don’t value what we don’t value, what more is there to say?
I’m confused about what you mean by this. If there weren’t anything more to say, then nobody would/should ever change what they value? But people’s values change over time, and that’s a good thing. For example, in medieval/ancient times people didn’t value animals’ lives and well-being (as much) as we do today. If a medieval person tells you “well we value what we value, I don’t value animals, what more is there to say?”, would you agree with him and let him go on burning cats for entertainment, or would you try to convince him that he should actually care about animals’ well-being?
You are of course using some of your values to inform other values. But they need to be at least consistent, and it’s not really clear which are the “more-terminal” ones. It seems to me byrnema is saying that privileging your own consciousness/identity above others is just not warranted, and that if we could, we really should self-modify to not care more about one particular instance, but rather about how much well-being/eudaimonia (for example) there is in the world in general. It seems like this change would make your value system more consistent and less arbitrary, and I’m sympathetic to this view.
But people’s values change over time, and that’s a good thing. For example, in medieval/ancient times people didn’t value animals’ lives and well-being (as much) as we do today. If a medieval person tells you “well we value what we value, I don’t value animals, what more is there to say?”, would you agree with him and let him go on burning cats for entertainment, or would you try to convince him that he should actually care about animals’ well-being?
Is that an actual change in values? Or is it merely a change of facts—much greater availability of entertainment, much less death and cruelty in the world, and the knowledge that humans and animals are much more similar than it would have seemed to the medieval worldview?
The more I think about this question, the less certain I am that I know what an answer to it might even look like. What kinds of observations might be evidence one way or the other?
Do people who’ve changed their mind consider themselves to have different values from their past selves? Do we find that when someone has changed their mind, we can explain the relevant values in terms of some “more fundamental” value that’s just being applied to different observations (or different reasoning), or not?
Can we imagine a scenario where an entity with truly different values—the good ol’ paperclip maximizer—is persuaded to change them?
I guess that’s my real point. I wouldn’t even dream of trying to persuade a paperclip maximizer to start valuing human life (except insofar as live humans encourage the production of paperclips): it values what it values, it doesn’t value what it doesn’t value, what more is there to say? To the extent that I would hope to persuade a medieval person to act more kindly towards animals, it would be because of, and in terms of, the values they already have, which would likely be mostly shared with mine.
So, if I start out treating animals badly, and then later start treating them kindly, that would be evidence of a pre-existing valuing of animals which was simply being masked by circumstances. Yes?
If I instead start out acting kindly to animals, and then later start treating them badly, is that similarly evidence of a pre-existing lack of valuing-animals which had previously been masked by circumstances? Or does it indicate that my existing, previously manifested, valuing of animals is now being masked by circumstances?
So, if I start out treating animals badly, and then later start treating them kindly, that would be evidence of a pre-existing valuing of animals which was simply being masked by circumstances. Yes?
Either that, or that your present kind treatment of animals is just a manifestation of circumstances, not a true value.
If I instead start out acting kindly to animals, and then later start treating them badly, is that similarly evidence of a pre-existing lack of valuing-animals which had previously been masked by circumstances? Or does it indicate that my existing, previously manifested, valuing of animals is now being masked by circumstances?
Could be either. To figure it out, we’d have to examine those surrounding circumstances and see what underlying values seemed consistent with your actions. Or we could assume that your values would likely be similar to those of other humans—so you probably value the welfare of entities that seem similar to yourself, or potential mates or offspring, and so value animals in proportion to how similar they seem under the circumstances and available information.
(nods) Fair enough. Thanks for the clarification.
Well, whether it’s a “real” change may be beside the point if you put it this way. Our situation and our knowledge are also changing, and maybe our behavior should change accordingly. If personal identity and/or consciousness are not fundamental, how should we value them in a world where any mind-configuration can be created and copied at will?
So there’s a view that a rational entity should never change its values. If we accept that, then any entity with different values from present-me seems to be in some sense not a “natural successor” of present-me, even if it remembers being me and shares all my memories. There seems to be a qualitative distinction between an entity like that and upload-me, even if there are several branching upload-mes that have undergone various experiences and would no doubt have different views on concrete issues than present-me does.
But that’s just an intuition, and I don’t know whether it can be made rigorous.
Fair enough.
Agreed that if someone expresses (either through speech or action) values that are opposed to mine, I might try to get them to accept my values and reject their own. And, sure, having set out to do that, there’s a lot more that can relevantly be said about the mechanics of how we hold values, how we give them up, and how they can be altered.
And you’re right, if our values are inconsistent (which they often are), we can be in this kind of relationship with ourselves… that is, if I can factor my values along two opposed vectors A and B, I might well try to get myself to accept A and reject B (or vice-versa, or both at once). Of course, we’re not obligated to do this by any means, but internal consistency is a common thing that people value, so it’s not surprising that we want to do it. So, sure… if what’s going on here is that byrnema has inconsistent values which can be factored along a “privilege my own identity”/”don’t privilege my own identity” axis, and they net-value consistency, then it makes sense for them to attempt to self-modify so that one of those vectors is suppressed.
With respect to my statement being confusing… I think you understood it perfectly, you were just disagreeing—and, as I say, you might well be correct about byrnema. Speaking personally, I seem to value breadth of perspective and flexibility of viewpoint significantly more than internal consistency. “Do I contradict myself? Very well, then I contradict myself, I am large, I contain multitudes.”
Of course, I do certainly have both values, and (unsurprisingly) the parts of my mind that align with the latter value seem to believe that I ought to be more consistent about this, while the parts of my mind that align with the former don’t seem to have a problem with it.
I find I prefer being the parts of my mind that align with the former; we get along better.
to value breadth of perspective and flexibility of viewpoint significantly more than internal consistency
As humans we can’t change/modify ourselves too much anyway, but what if we’re able to in the future? What if you can pick and choose your values? It seems to me that, for such an entity, not valuing consistency is like not valuing logic. And then there’s the argument that it leaves you open to Dutch booking / blackmail.
Yes, inconsistency leaves me open to Dutch booking, which perfect consistency would not. Eliminating that susceptibility is not high on my list of self-improvements to work on, but I agree that it’s a failing.
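(As an illustrative aside, here is a minimal sketch of the money-pump form of that Dutch-book argument, in Python, with made-up items and fees: an agent whose preferences are cyclic will pay for each “upgrade” a bookie offers, and after a full cycle it holds exactly what it started with, minus the fees.)

```python
# Illustrative only: a bookie exploiting an agent with cyclic preferences
# A > B > C > A. Each trade looks like an improvement to the agent,
# yet a full cycle returns it to its starting holding, minus the fees paid.

def run_money_pump(cycles: int, fee: float = 1.0) -> float:
    """Return the total fees a cyclic-preference agent pays over `cycles` trades."""
    prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means the agent prefers x to y
    holding = "A"
    total_paid = 0.0
    for _ in range(cycles):
        # The bookie offers whichever item the agent prefers to its current holding.
        offer = next(x for x in "ABC" if (x, holding) in prefers)
        holding = offer          # the agent accepts the "upgrade"...
        total_paid += fee        # ...and pays a small fee for it
    return total_paid

print(run_money_pump(3))  # after 3 trades the agent holds A again, but has paid 3.0
```

With a transitive preference ordering there is no such cycle to exploit, which is the sense in which consistency blocks this particular trick.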
Also, perceived inconsistency runs the risk of making me seem unreliable, which has social costs. That said, being seen as reliable appears to be a fairly viable Schelling point among my various perspectives (as you say, the range is pretty small, globally speaking), so it’s not too much of a problem.
In a hypothetical future where the technology exists to radically alter my values relatively easily, I probably would not care nearly so much about flexibility of viewpoint as an intrinsic skill, much in the same way that electronic calculators made the ability to do logarithms in my head relatively valueless.