I agree. I have the sense that there is some depth to the cognitive machinery that leaves people so susceptible to this particular pattern of thinking, namely: "no, I am an X optimizer," for some X for which we have some (often weak) theoretical reason to claim we might be an X optimizer. Once someone decides to view themselves as an X optimizer, it can be very difficult to convince them to pay enough attention to their own direct experience to notice that they are not an X optimizer.
More disturbingly, it seems as if people can go some distance to actually making themselves into an X optimizer. For example, a lot of people start out in young adulthood incorrectly believing that what they ultimately value is money, and then, by the end of their life, they have shifted their behavior so it looks more and more like what they really value is, ultimately, money. Nobody goes even close to all the way there—not really—but a mistaken view that one is truly optimizing for X really can shift things in the direction of making it true.
It’s as if we have too many degrees of freedom in how to explain our externally visible behavior in terms of values, so for most any X, if we really want to explain our visible behavior as resulting from really truly valuing X then we can, and then we can make that true.
Individuals who shape the world are often those who have ended up being optimizers.
It sounds like you find that claim disturbing, but I don’t think it’s all bad.
I’m interested in more of a sense of what mistake you think people are making, because I think caring about something strong enough to change who you are around it can be a very positive force in the world.
Yeah, caring about something enough to change who you are is really one of the highest forms of virtue, as far as I’m concerned. It’s somewhat tragic that the very thing that makes us capable of this high form of virtue—our capacity to deliberately shift what we value—can also be used to take what was once an instrumental value and make it, more or less, into a terminal value. And generally, when we make an instrumental value into a terminal value (or go as far as we can in that direction), things go really badly, because we ourselves become optimizers for something that is harmless when pursued as an instrumental value (like paperclips), but is devastating when pursued as a terminal value (like paperclips).
So the upshot is: to the extent that we are allowing instrumental values to become more-or-less terminal values without really deliberately choosing that or having a good reason to allow it, I think that’s a mistake. To the extent that we are shifting our values in service of that which is truly worth protecting, I think that’s really virtuous.
The really interesting question, as far as I’m concerned, is: what is the thing that we rightly change our values in service of? In this community, we often take that thing to be representable as a utility function over physical world states. But it may not be representable that way. In Buddhism the thing is conceived of as the final end of suffering. In Western moral philosophy there are all kinds of different ways of conceiving of that thing, and I don’t think all that many of them can be represented as a utility function over physical world states. In this community we tend to side-step object-level ethical philosophy to some extent, and I think that may be our biggest mistake.
Individuals who shape the world are often those who have ended up being optimizers.
It might be worth fleshing this claim out, because it doesn’t seem clear to me (interpreting “often” so that the claim is non-trivial). Isn’t the world mostly shaped by ideas? Aren’t ideas mostly generated by people who are especially explore-rather-than-exploit? Isn’t explore-rather-than-exploit, at least on the surface and maybe more deeply, not an instance of “being an optimizer”? I mean, a true optimizer would certainly explore a lot. But it doesn’t seem so straightforward to interpret individual humans this way. Maybe the story could be that individual humans who bring out novel ideas are participating as part of some broader optimizer, but this would need fleshing out. And your statement connotes, to me, optimizers in the sense of, like, Napoleon or something, which is a plausible but different picture of what shapes the world. Yet another picture would be “low-level emergent social forces”.