Downvoted, for being ridiculously one-sided and failing to engage with any objections or different modeling. Just for part of it:
We value triumphing over scarcity to achieve prosperity.
We value triumphing over disaster to achieve safety.
We value triumphing over stagnation to achieve vitality.
We value triumphing over conflict to achieve harmony.
Who’s this “we” you speak of? Why do you believe that this is universal, especially on the fine-grained tactical behavior level (it’s probably pretty common in far-mode theory)? Where do you put things like “we value being treated with dignity by others” and “we value social status over others”?
Most importantly, what do you do with people who do not have the same priorities as you? This post seems to be mostly on the mistake theory side of https://www.lesswrong.com/tag/conflict-vs-mistake, but doesn’t contain any evidence or recommendations for when that isn’t working.
tl;dr: if it’s so easy, where’s my utopia?
I’m OK with 3 out of 4, but I have serious issues with this:
We value triumphing over stagnation to achieve vitality.
I don’t think this is a universal value at all. This looks like valuing change as a fundamental good, and I certainly don’t do this; quite the reverse. All other things being equal, I’d much rather things stayed the same. Obviously I’d like bad things to change to good things, but that seems to be covered by the other three virtues. Stagnation, all other things being equal, is a good thing.
That’s a fair point. I should elaborate on the concept of stagnation, to avoid giving people the wrong impression about it.
Stagnation is the fundamental liability defined by predictable limitations on people’s motivations.
Like the other liabilities, stagnation is also an intrinsic aspect of conscious existence as we know it. Predictable motivations are what allow us to have identity, as individuals and as groups. Identity and stagnation are two sides of the same coin—stagnation is just what we call it when it interferes with what we otherwise want.
Our identities should not become prisons, not only because that prevents us from dealing with other liabilities but also because part of being conscious is not knowing everything about ourselves. Choice is another aspect of consciousness, the flip side of conflict, defined by what we don’t already know about our motivations. Part of our existence is not always being able to predict which goal will triumph over other goals, either within a person or between different people.
In short, it seems to me that we should make sure we never lose the ability to surprise ourselves. When we know everything about what we will want in the future, then we lose an important part of what makes us conscious beings. Does that make more sense?
I appreciate your questions and will do my best to clarify.
The values from the section you quoted pertain to civilization as a whole. You are correct that individual motivations/desires/ambitions require other concepts to describe them (see below). I apologize for not making that clear. The “universal values” are instrumental values in a sense, because they describe a civilization in which individuals are more able to pursue their own personal motivations (the terminal values, more or less) without getting stuck.
In other words, the “universal values of civilization” just mean the opposites of the fundamental liabilities. We could put a rationalist taboo on the word “values” and simply say, “all civilizations want scarcity, disaster, stagnation, and conflict to not obstruct people’s goals.” Civilizations just lose sight of that big-picture vision when they layer a bunch of lower-level instrumental values on top of it. (And to be fair, those layers of values are usually more concrete and immediately practical than “make the liabilities stop interfering with what we want”. It’s just that losing sight of the big picture prevents humanity from making serious efforts to solve big-picture problems.)
The concepts describing the individual motivations are enumerated in this comment, which for brevity’s sake I will link rather than copy: https://www.lesswrong.com/posts/BLddiDeE6e9ePJEEu/the-village-and-the-river-monsters-or-less-fighting-more?commentId=T7SF6wboFdKBeuoZz. (As a heads up, my use of the word “values” lumps different classes of concept together: motivations, opposites of liabilities, tradeoffs, and constructive principles. I apologize if that lumping makes things unclear; I can clarify if need be.)
Valuing being treated with dignity would typically go under the motivation of idealization, while valuing social status over others could fall under idealization, acquisition, or control. (It’s possible for different people to want the same thing for different reasons. Knowing their motivations helps us predict what other things they will probably also want.)
As for what we can do when people have different priorities, I attempted to explain that in the part describing ethics, and included an example (the neighbors and the trombone). Was there some aspect of that explanation that was unclear or otherwise unsatisfactory? It might be necessary for me to clarify that even though my example was on the level of individuals, the principles of ethics also pertain to conflict on the policy level. I chose an individual example because I wanted to illustrate pure ethics; most policy conflicts involve other liabilities, which I predicted would confuse people. Does that make more sense?
(Your utopia isn’t here because it’s only easy in hindsight.)