It’s helpful to keep in mind the human hubris in thinking anyone knows what’s optimal for themselves, let alone for others. Add in actual individual divergence in goals and beliefs, and it’s kind of ludicrous to try to make many decisions for others, or to accept others’ decisions about your behaviors. Note that policy and rulemaking are always about enforcement of, or influence on, others.
I don’t believe it’s possible for normal humans to fully distinguish “what’s good for my personal indexical experiences” and “what’s good for the average or median human”. It’s _always_ a mix of cooperative and adversarial. I do believe it’s possible to acknowledge both motives and to be humble about what limits I’ll impose on others. When I talk about “freedom” in that context, this is what it means to me: very minimal human imposition of additional consequences for actions which don’t have obvious, immediate harm.
Choosing “optimal for my current beliefs and preferences” vs “what others will judge as optimal for what they think my beliefs and preferences should be” is very different, and I lean toward the former as my definition of “freedom”.
cf https://wiki.lesswrong.com/wiki/Other-optimizing
Yes. Even if what I actually want is “freedom to do the optimal thing”, it is strategically better to fight for “freedom to do the arbitrary thing”. The latter allows me to do the former. But if we only have the freedom to do the optimal thing, and the people with power disagree with me about what is optimal, I get neither.
But how do the two definitions in the last paragraph mix if I have (1) a preference for others to judge me well, (2) a belief that others will judge me well if they believe I am doing what they believe is optimal for what they think my beliefs and preferences should be, and (3) a belief that the cost of convincing them I am doing such a thing without actually doing it is so high that plans involving it almost never show up in my decision-making?
Put another way, it seems like the two definitions can collapse in a sufficiently low-privacy, conformist environment—which can be unified with the emotion of “freedom”—but at least in most Western contexts, that seems infrequent. The impression I get is that most people patch around this in the obvious way, by extrapolating “what a version of me completely removed from peer pressures would prefer” and using that as the preference baseline, but I both think and feel that that’s incoherent. (Further meta, I also get the impression that many people don’t feel that it’s incoherent even when they would agree cognitively that it is, and that that leads to a lot of worldmodel divergence down the line.)
(I realize this might be a bit off-track from its parent comment, but I think it’s relevant to the broader discussion.)