Not all human politics is low-hanging fruit, to be sure. I was thinking of issues like the economy, healthcare, education, and the environment. It seems like there are some obvious win-win improvements we can make in those contexts if we just shift the discussion in a constructive direction. We can show people there are more ideas for solutions than just the ones they’ve been arguing about.
It is true that the process shown in this story is not sufficient to dismantle religion. Such an undertaking requires a constructive meta-culture with which to replace religion. As it happens, I’ve got a basis for one of those now, but humans will have to fill in the specifics to suit their own needs and styles. (A constructive meta-culture must address the fundamental liabilities of scarcity, disaster, stagnation, and conflict using the four constructive principles of investment, preparation, transcension, and ethics. How it does that depends on the physical and social context and on the choices of the society.)
The trick to effective communication is to start out by identifying what people care about. This step is easy enough with a toolbox of basic concepts for describing value. The next step is to find the early adopters, the first ones who are willing to listen. They can influence the people ideologically adjacent to them, who can influence the people adjacent to them, et cetera.
By contrast, if we don’t reach out to people and try to communicate with them, there are limitations on how much society can improve, especially if you are averse to conquest.
For this reason, I conclude that facilitating communication about values and solutions seems to be the single best use of my time. Whatever low-hanging fruit exists in other fields, it will all run into a limiting factor based on human stagnation and conflict. I don’t know if you have an extraordinary effort, but this one is mine. I make it so that the effort doesn’t have to be nearly so extraordinary for other people.
So far as I can tell, the tools I’ve accumulated for this endeavor appear to be helping the people around me a great deal. The more I learn about connecting with people across different paradigms, the easier it gets. It starts with expressing as simply as possible what matters most. It turns out there is a finite number of concepts that describe what people care about.
There’s a lot more I’ve been up to than what you see here; I just haven’t spent much time posting on LessWrong because most people here don’t seem to consider it important or feasible to introduce other people to new paradigms for solving problems.
Is there another approach to making the world a better place without changing how humans think, that I’m unaware of?
I was thinking of issues like the economy, healthcare, education, and the environment.
I disagree; I would call any national or global political issue high-hanging fruit. I believe there is low-hanging fruit at the local level, but coordination problems involving a million or more people are hard.
They can influence the people ideologically adjacent to them, who can influence the people adjacent to them, et cetera.
In my experience, it’s not clear that there is really much “proper adjacency.” Sufficiently high-dimensional spaces make any sort of clustering ambiguous and messy, if it is possible at all. More specifically, I haven’t seen many ideas in politics that spread quickly that weren’t also coordinated from (near) the top, which suggests to me that information cascades in this domain are impractical.
I think that’s largely what is meant by hierarchical structures. Small/low elements have potentially rich, complicated inner lives, but very little signal they can send upwards/outwards. High/large structures have an action space that is potentially bureaucratically or legally constrained, but their actions have wide and potentially large influence.
So far as I can tell, the tools I’ve accumulated for this endeavor appear to be helping the people around me a great deal.
It starts with expressing as simply as possible what matters most. It turns out there is a finite number of concepts that describe what people care about.
Say there are 100 fundamental desires, and all desires stem from these 100 fundamental desires. Each can still take on any number from −1 to 1, allowing a person to care about each of these things in different proportions. Even if we restrict the values to 0 to 1, you still get conflict because what is most important to one person is not what’s most important to another, causing real value divergences.
Is there another approach to making the world a better place without changing how humans think, that I’m unaware of?
I can think of some that you didn’t explicitly mention.
You can make the world just a slightly better place by normal means, trying to be kind, etc.
You can have kids and teach them a better way to think while they’re still especially pliable, rather than trying to teach old dogs new tricks.
Maximize your inclusive genetic fitness, live a long life, and make sure your ideas are good enough that your kids will teach them to their kids, eventually outliving and out-competing inferior ideas.
You can change how humans think, but do it outside the domain of politics.
For what it’s worth, I also largely agree with the things you said and your original post. At the point where the Wanderer contributed, I guessed both how the story would end and the worse compromise the Wanderer mentioned. I guess I especially agree with your target. It’s not clear to me that I agree with your methods, having spent a fair deal of time on this sort of problem myself. That said, it’s extremely likely that you have real skill advantages in this domain over me. Even so, I think any premise that begins with “the economy, healthcare, education, and the environment are low-hanging fruit in politics” is one where you get burned and waste time.
As you say, the ability to coordinate large-scale action by decree requires a high place in a hierarchy. With the internet, though, it doesn’t take authority just to spread an idea, as long as it’s one that people find valuable or otherwise really like. I’m not sure why adjacency has to be “proper”; I’m just talking about social networks, where people can be part of multiple groups and transmit ideas and opinions between them.
Regarding value divergence: Yes, there is conflict because of how people prioritize desires and values differently. However, it would be a huge step forward to get people to see that it is merely their priorities that are different, rather than their fundamental desires and values. It would be a further huge step forward for them to realize that if they work together and let go of some highly specific expectations of how those desires and values are to be fulfilled (which they will at least sometimes be willing to do), they can accomplish enormous mutual benefit. This approach is not going to be perfect, but it will be much better than what we have now because it will keep things moving forward instead of getting stuck.
Your suggestions are indeed ways to make the world a better place. They’re just not quite fast enough or high-impact enough for my standards. Being unimpressed with human philosophy, I figured that there could easily be some good answers that humans hadn’t found because they were too wrapped up in the ones they already had. Therefore, I decided to seek something faster and more effective, and over the years I’ve found some very useful approaches.
When I say a field is “low-hanging fruit”, it’s because I think that there are clear principles that humans can apply to make large improvements in that field, and that the only reason they haven’t done so is they are too confused and distracted (for various reasons) to see the simplicity of those principles underneath all the miscellaneous gimmicks and complex literature.
The approach I took was to construct a vocabulary of foundational building-block concepts, so that people can keep a focus on the critical aspects of a problem and, to borrow from Einstein, make everything as simple as possible, but no simpler.
There’s tremendous untapped potential in human society as a whole, and the reason it is untapped is because humans don’t know how to communicate with each other about what matters. All they need is a vocabulary for describing goals, the problems they face in reaching those goals, and the skills they need to overcome those problems. I’m not knowledgeable enough or skilled enough to solve all of humanity’s problems—but humanity is, once individual humans can work together effectively. My plan is simply to enable them to do that.
I understand that most people assume it’s not possible because they’ve never seen it done and are used to writing off humans (individually and collectively) as hopeless. Perhaps I should dig through the World Optimization topics to see if there’s anyone in this community who recognizes the potential of facilitating communication.
In any case, I appreciate your engagement on this topic, and I’m glad you enjoyed the story enough to comment. If you do decide to explore new options for communication, I’ll be around.
I agree with your comments mostly so far. There is low-hanging fruit even in complex areas, regardless of the prevailing cynicism.
I understand that most people assume it’s not possible because they’ve never seen it done and are used to writing off humans (individually and collectively) as hopeless.
There does seem to be a lot of folks who match that description.
But there are also folks who understand that the world can get better yet nonetheless act like crabs in a bucket due to their desires. The latter group, when they exist in numbers past a certain threshold, likely increase the height of the fruit.
I don’t think most people are consciously aware, but I do think most people are unconsciously aware, that “it is merely their priorities that are different, rather than their fundamental desires and values.” Furthermore, our society largely looks structured such that only the priorities are different, but those priorities differ significantly because of the human-sparseness of value-space.
I’m not sure why adjacency has to be “proper”; I’m just talking about social networks, where people can be part of multiple groups and transmit ideas and opinions between them.
I approximately mean something as follows:
Take the vector-value model I described previously. Consider some distance metric (such as the L2 norm), D(a, b) where a and b are humans/points in value-space (or mind-space, where a mind can “reject” an idea by having it be insufficiently compatible). Let k be some threshold for communicability of a particular idea. Assume once an idea is communicated, it is communicated in full-fidelity (you can replace this with a probabilistic or imperfect communication model, but it’s not necessary to illustrate my point). If you create the graph amongst all humans in value-space, where an edge exists between a and b iff D(a,b) < k, it’s not clear to me that this graph is connected, or even has many edges at all. If this is true for a particular idea/k pair, then the idea is unlikely to undergo information cascade, because additional effort is needed in many locations to cross the inferential gap.
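This model can be made concrete with a small sketch. The code below builds the threshold graph described above over a handful of hypothetical points in a 2-D “value-space” (the points, the Euclidean metric, and the thresholds are illustrative assumptions, not part of the comment) and counts connected components with union-find; when k is small the graph splits into islands, which is exactly the condition under which a cascade stalls.

```python
import math

def components(points, k):
    """Count connected components of the graph where an edge joins
    points a and b iff Euclidean distance D(a, b) < k (the
    communicability threshold for a particular idea)."""
    parent = list(range(len(points)))

    def find(i):
        # Find the root of i's set, with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, a in enumerate(points):
        for j in range(i + 1, len(points)):
            if math.dist(a, points[j]) < k:
                parent[find(i)] = find(j)  # union the two sets

    return len({find(i) for i in range(len(points))})

# Two tight clusters plus an isolated point: with a small k the graph
# has three islands, so no single cascade can reach everyone.
pts = [(0, 0), (0.1, 0), (5, 5), (5.1, 5), (10, 0)]
print(components(pts, k=1.0))   # 3 components: cascade stalls
print(components(pts, k=20.0))  # 1 component: cascade can spread
```

The same check scales to the full-fidelity model in the comment: if the component containing the idea’s originator is small, the idea cannot cascade without extra effort to cross the remaining inferential gaps.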
As you say, the ability to coordinate large-scale action by decree requires a high place in a hierarchy. With the internet, though, it doesn’t take authority just to spread an idea, as long it’s one that people find valuable or otherwise really like.
Somewhat related, somewhat tangential: I think the internet itself is organized hierarchically as nested “echo chambers” or something similar, where the smallest ones are what we currently call echo chambers. This means you can locate any idea/concept somewhere on the hierarchy of internet communities, and only ideas high on the hierarchy can effectively spread messages/information cascades widely.
Can you concretely point to anything in my model(s) that you would disagree with?
if there’s anyone in this community who recognizes the potential of facilitating communication.
I agree this is (potentially) high leverage. My strategy has generally been that expressing ideas with greater precision aids communication. An arbitrary conversation is unlikely to transmit the full precision of your idea, but it becomes less likely that you transmit something you don’t mean, and that makes a huge difference. The domain of politics seems mostly littered with extremely low-precision communication and, in particular, often deceptively precise communication, wherein wording is chosen between two concepts so that any error correction on behalf of a listener favors the communicator. Is there any reason why you want to specifically target politics instead of generally trying to make the human race more sane, as Yudkowsky did with the Sequences?
Say there are 100 fundamental desires, and all desires stem from these 100 fundamental desires. Each can still take on any number from −1 to 1, allowing a person to care about each of these things in different proportions. Even if we restrict the values to 0 to 1, you still get conflict because what is most important to one person is not what’s most important to another, causing real value divergences.
There are almost certainly fewer than 100 fundamental desires; in fact, almost certainly fewer than 10.
If there are 10, and there are 10 recognizable gradations for each desire, that’s only 10^10, or 10 billion, permutations.
More likely there are only 3 or 4, but with more gradations, say 50, giving 50^3 to 50^4 permutations. That is not a lot; it all but guarantees that more than 1000 people on Earth share a nearly identical set of fundamental desires for any possible combination.
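The arithmetic above can be checked directly. This sketch assumes a world population of roughly 8 billion and an even spread across combinations (both my assumptions, not claims from the comment):

```python
# State counts for each of the estimates above.
people = 8_000_000_000  # assumed world population

print(10 ** 10)  # 10 desires, 10 gradations: 10,000,000,000 states
print(50 ** 3)   # 3 desires, 50 gradations: 125,000 states
print(50 ** 4)   # 4 desires, 50 gradations: 6,250,000 states

# If people were spread evenly over the 50^4 states, the average
# occupancy per combination would comfortably exceed 1000:
print(people // 50 ** 4)  # 1280 people per combination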
I count eight fundamental desires, but they can take countless forms based on context. For example, celebration might lead one person to seek out a certain type of food, while leading another person to regularly go jogging. It’s the same motivation, manifesting in response to two different stimuli.
Here are the eight fundamental desires:
Celebration, the desire to bring more of something into one’s experience
Acquisition, the desire to bring more of something into one’s influence
Insulation, the desire to push something out of one’s experience
Relaxation, the desire to push something out of one’s influence
Curiosity, the desire for unpredictable experience
Boldness, the desire for unpredictable influence
Idealization, the desire for more predictable experience
Control, the desire for more predictable influence
The four fundamental liabilities can impede us from fulfilling our desires, so people often respond by developing instrumental values, which make it easier to fulfill desires. Some of these values are tradeoffs, but others are more constructive. Values inform a society’s public policy.
For the liability of scarcity, the tradeoffs are wastefulness and austerity, and the constructive value is investment.
For the liability of disaster, the tradeoffs are negligence and susceptibility, and the constructive value is preparation.
For the liability of stagnation, the tradeoffs are decadence and dogma, and the constructive value is transcension.
For the liability of conflict, the tradeoffs are turmoil and corruption, and the constructive value is ethics.
Identical desires would not automatically lead to harmony, since people may want the same thing and start fighting over it. Identical values might help, if it means people support the same policies for society.
Using ethics to reconcile conflict is not a trivial goal, but it makes it much more possible for people to establish mutual trust and cooperation even if they can’t all get everything they want. By working together, they will likely find they can get something just as satisfactory as what they originally had in mind. That’s a society that people can feel good about living in.
That’s a valid way to look at it. I used to use three axes for them: increase versus decrease, experience versus influence, and average versus variance (or “quantity versus quality”).
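The three-axis view implies the eight motivations are exactly the cells of a 2×2×2 grid. The sketch below makes that explicit; the tuple labels for the axes are my own shorthand (with “quantity” for average and “variance” for predictability), not the author’s terms:

```python
from itertools import product

# Each named motivation occupies one cell of the 2x2x2 grid formed by
# the three axes: increase vs decrease, experience vs influence, and
# quantity (average) vs variance (predictability).
MOTIVATIONS = {
    ("increase", "experience", "quantity"): "Celebration",
    ("increase", "influence",  "quantity"): "Acquisition",
    ("decrease", "experience", "quantity"): "Insulation",
    ("decrease", "influence",  "quantity"): "Relaxation",
    ("increase", "experience", "variance"): "Curiosity",
    ("increase", "influence",  "variance"): "Boldness",
    ("decrease", "experience", "variance"): "Idealization",
    ("decrease", "influence",  "variance"): "Control",
}

# Every combination of the three binary axes is covered exactly once.
axes = product(("increase", "decrease"),
               ("experience", "influence"),
               ("quantity", "variance"))
assert all(combo in MOTIVATIONS for combo in axes)
print(len(MOTIVATIONS))  # 8
```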
I typically just go with the eight desires described above, which I call “motivations”. It’s partially for thematic reasons, but also to emphasize that they are not mutually exclusive, even within the same context.
It is perfectly possible to be both boldness-responsive and control-responsive: seeking to accomplish unprecedented things and expecting to achieve them without interference or difficulty. That’s simultaneously breaking and imposing limits through one’s influence.
Likewise, it’s possible to be both acquisition-responsive and relaxation-responsive: seeking power over a larger dominion without wanting to constantly work to maintain that power.
They’re not scalars, either—curiosity about one topic does not always carry over to other topics. There’s a lot of nuance in motivation, but having concepts that form a basis for motivation-space helps.
These motivations are not goals in and of themselves, but they help us describe what sorts of goals people are likely to adopt. You could call them meta-goals. It’s a vocabulary for talking about what people care about and what they want out of life. I suppose it’s part of the basis for my understanding of Fun Theory.
It is perfectly possible to be both boldness-responsive and control-responsive: seeking to accomplish unprecedented things and expecting to achieve them without interference or difficulty. That’s simultaneously breaking and imposing limits through one’s influence.
Likewise, it’s possible to be both acquisition-responsive and relaxation-responsive: seeking power over a larger dominion without wanting to constantly work to maintain that power.
It’s certainly possible for people to have these conflicting desires in their mind. Though I don’t see how that translates to observed desires?
Reality must obey physical principles. (Though purely internal desires are of course relevant to the person experiencing them, desires must be demonstrable and observable for anyone else to take them into consideration; otherwise the presumption will be that they’re made up.)
For a real world example, no amount of effort or desire can make a river go uphill and downhill simultaneously.
Someone may ‘seek to accomplish the unprecedented’ of making the river do so and ‘expect to achieve this without interference or difficulty’ but it would be so unusual an activity that a prank would be the likely first guess.
Even if they spent real resources on the river, it will just look like how you would expect it flowing downhill, or flowing uphill with a pumping system if they’re really motivated, or stagnant if perfectly level.
They could rapidly change the flow direction back and forth to try to demonstrate their desires, and simultaneously verbally claim it’s effortless, easy-as-pie, etc., and that the river’s really going both ways at once.
But this would just look like a convoluted prank to a random observer.
I’m not even sure how such a conflicting desire could be credibly demonstrated.
Maybe if they are willing to take bets that the river will in fact go uphill and downhill simultaneously, and, since it’s so effortless, they’re willing to bet their life savings, home, first-born, and so on? (Though that would practically reduce them to penury, since there’s a 100% chance of losing the bet.)
For a physically possible but very unlikely and completely impractical desire, maybe someone has the desire to build a triple-decker train wagon since they’re a train enthusiast.
How could they credibly demonstrate ‘seeking to accomplish the unprecedented triple-decker wagon and expecting to achieve the built wagon without interference or difficulty’?
I am skeptical of psychology research in general, but my cursory exploration has suggested to me that it is potentially reasonable to think there are 16. My best estimate is probably that there are literally 100 or more, but that most of those dimensions largely don’t have big variance/recognizable gradations/are lost in noise. I think humans are reasonably good at detecting 1 part in 20, and that the 16 estimate above is a reasonable ballpark, meaning I believe that 20^16 ≈ 6.5E20 is a good approximation of the number of states in the discretized value space. With fewer than 1E10 humans, this would predict very few exact collisions.
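The “very few exact collisions” claim follows from a birthday-problem estimate. This sketch assumes people are drawn uniformly and independently from the discretized states (a strong simplification, since real values are surely correlated), under which the expected number of colliding pairs is n(n−1)/2N:

```python
# Expected number of exact value-profile collisions among n people
# drawn uniformly from N discretized states.
n = 10 ** 10   # generous upper bound on the number of living humans
N = 20 ** 16   # 16 dimensions, 20 recognizable gradations each

print(f"{N:.3g}")  # roughly 6.55e+20 states

# Birthday-problem estimate: expected colliding pairs = n(n-1) / (2N).
expected_pairs = n * (n - 1) / (2 * N)
print(f"{expected_pairs:.3f}")  # ≈ 0.076, i.e. likely zero collisions
```

Under correlated (clumpy) value distributions the collision count would be higher, but the uniform case shows the state space is large enough that exact matches are not forced by pigeonhole alone.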
I would be really dubious of any model that suggests there are fewer than 5. Do you have any candidates for systems of 3 or 4 fundamental desires?
That covers all known activities directly, or with only one layer of abstraction in the case of ceremonies, fights, etc., for hunter-gatherers up until the invention of agriculture.
I see. I feel like honor/idealism/order/control/independence don’t cleanly decompose to these four even with a layer of abstraction, but your list was more plausible than I was expecting.
That said, I think an arbitrary inter-person interaction with respect to these desires is pretty much guaranteed to be zero or negative sum, as they all depend on limited resources. So I’m not sure what aligning on the values would mean in terms of helping cooperation.
Avoiding death and exploration are usually considered positive sum, at least intra-tribe.
Social standing relative to other tribe members is of course always zero sum by definition.
Reproduction is a mix usually, if babies are presumed to be literally born equal then it’s zero sum when the population is at the maximum limit of the local environment’s carrying capacity. Otherwise it can be positive or negative.
If I discover something first, our current culture doesn’t assign much value to the second person finding it, which is why I mentioned exploration as not-positive-sum. Avoiding death literally requires free energy, a limited resource, but I realize that’s an oversimplification at the scale we’re talking about.
Great. Keep on doing it, then.
My message for everyone else.
Say there are 100 fundamental desires, and all desires stem from these 100 fundamental desires. Each can still take on any number from −1 to 1, allowing a person to care about each of these things in different proportions. Even if we restrict the values to 0 to 1, you still get conflict because what is most important to one person is not what’s most important to another, causing real value divergences.
I can think of some that you didn’t explicitly mention.
You can make the world just a slightly better place by normal means, trying to be kind, etc.
You can have kids, and teach them a better way to think while they’re still especially pliable, and ignore trying to teach old dogs new tricks
Maximize your inclusive genetic fitness, live a long life, make sure your ideas are such good ones that your kids will teach their kids and eventually outlive and out-compete inferior ideas
You can change how humans think, but you can do it in not the domain of politics
For what it’s worth, I also largely agree with things you said and your original post. At the point where the Wanderer contributed, I guessed both how the story would end, and the worse compromise the Wanderer mentioned. I guess I especially agree with your target. It’s not clear to me that I agree with your methods after having spent a fair deal of time on this sort of problem myself. That said, it’s extremely likely that you have real skill advantages in this domain over me. That said, I think any premise that begins with “the economy, healthcare, education, and the environment are low-hanging fruit in politics” is one where you get burned and waste time.
As you say, the ability to coordinate large-scale action by decree requires a high place in a hierarchy. With the internet, though, it doesn’t take authority just to spread an idea, as long it’s one that people find valuable or otherwise really like. I’m not sure why adjacency has to be “proper”; I’m just talking about social networks, where people can be part of multiple groups and transmit ideas and opinions between them.
Regarding value divergence: Yes, there is conflict because of how people prioritize desires and values differently. However, it would be a huge step forward to get people to see that it is merely their priorities that are different, rather than their fundamental desires and values. It would be a further huge step forward for them to realize that if they work together and let go of some highly specific expectations of how those desires and values are to be fulfilled (which they will at least sometimes be willing to do), they can accomplish enormous mutual benefit. This approach is not going to be perfect, but it will be much better than what we have now because it will keep things moving forward instead of getting stuck.
Your suggestions are indeed ways to make the world a better place. They’re just not quite fast enough or high-impact enough for my standards. Being unimpressed with human philosophy, I figured that there could easily be some good answers that humans hadn’t found because they were too wrapped up in the ones they already had. Therefore, I decided to seek something faster and more effective, and over the years I’ve found some very useful approaches.
When I say a field is “low-hanging fruit”, it’s because I think that there are clear principles that humans can apply to make large improvements in that field, and that the only reason they haven’t done so is they are too confused and distracted (for various reasons) to see the simplicity of those principles underneath all the miscellaneous gimmicks and complex literature.
The approach I took was to construct a vocabulary of foundational building-block concepts, so that people can keep a focus on the critical aspects of a problem and, to borrow from Einstein, make everything as simple as possible, but no simpler.
There’s tremendous untapped potential in human society as a whole, and the reason it is untapped is because humans don’t know how to communicate with each other about what matters. All they need is a vocabulary for describing goals, the problems they face in reaching those goals, and the skills they need to overcome those problems. I’m not knowledgeable enough or skilled enough to solve all of humanity’s problems—but humanity is, once individual humans can work together effectively. My plan is simply to enable them to do that.
I understand that most people assume it’s not possible because they’ve never seen it done and are used to writing off humans (individually and collectively) as hopeless. Perhaps I should dig through the World Optimization topics to see if there’s anyone in this community who recognizes the potential of facilitating communication.
In any case, I appreciate your engagement on this topic, and I’m glad you enjoyed the story enough to comment. If you do decide to explore new options for communication, I’ll be around.
I agree with your comments mostly so far. There is low-hanging fruit even in complex areas, regardless of the prevailing cynicism.
There does seem to be a lot of folks who match that description.
But there are also folks who understand that the world can get better yet nonetheless act like crabs in a bucket due to their desires. The latter group, when they exist in numbers past a certain threshold, likely increase the height of the fruit.
I don’t think most people are consciously aware, but I think most people are unconsciously aware that “it is merely their priorities that are different, rather than their fundamental desires and values” and furthermore our society largely looks structured such that only the priorities are different, but that the priorities differ significantly enough because of the human-sparseness of value-space.
I approximately mean something as follows:
Take the vector-value model I described previously. Consider some distance metric (such as the L2 norm), D(a, b) where a and b are humans/points in value-space (or mind-space, where a mind can “reject” an idea by having it be insufficiently compatible). Let k be some threshold for communicability of a particular idea. Assume once an idea is communicated, it is communicated in full-fidelity (you can replace this with a probabilistic or imperfect communication model, but it’s not necessary to illustrate my point). If you create the graph amongst all humans in value-space, where an edge exists between a and b iff D(a,b) < k, it’s not clear to me that this graph is connected, or even has many edges at all. If this is true for a particular idea/k pair, then the idea is unlikely to undergo information cascade, because additional effort is needed in many locations to cross the inferential gap.
Somewhat related, somewhat tangential: I think the internet itself is organized hierarchically as nested echo chambers, where the smallest ones are what we currently call echo chambers. This means any idea/concept can be located somewhere on the hierarchy of internet communities, and only ideas high on the hierarchy can spread widely as messages/information cascades.
Is there anywhere you can concretely point to in my model(s) you would disagree with?
I agree this is (potentially) high leverage. My strategy has generally been that expressing ideas with greater precision substantially aids communication. An arbitrary conversation is unlikely to transmit the full precision of your idea, but it becomes less likely that you transmit something you don’t mean, and that makes a huge difference. The domain of politics seems mostly littered with extremely low-precision communication and, in particular, often deceptively precise communication, wherein wording is chosen between two concepts so that any error correction on behalf of a listener works in favor of the communicator. Is there any reason why you want to specifically target politics instead of generally trying to make the human race more sane, as Yudkowsky did with the sequences?
There are almost certainly fewer than 100 fundamental desires, in fact almost certainly fewer than 10.
If there are 10, and 10 recognizable gradations for each desire, that’s only 10^10, or 10 billion, permutations.
More likely there are only 3 or 4, but with more gradations, say 50, giving 50^3 to 50^4 permutations. That is not a lot; it almost guarantees that, for any possible combination, more than 1,000 people on Earth have a nearly identical set of fundamental desires.
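The counting argument above is easy to sanity-check. This snippet assumes a uniform spread of roughly 8 billion people over the cells of the discretized value-space, which is an illustrative simplification rather than a claim about the real distribution:

```python
# Average occupancy per cell of value-space: with d fundamental desires
# and g recognizable gradations each, there are g**d cells.
POPULATION = 8_000_000_000  # rough world population

for d, g in [(10, 10), (3, 50), (4, 50)]:
    cells = g ** d
    print(f"{d} desires x {g} gradations: {cells:,} cells, "
          f"~{POPULATION / cells:,.0f} people per cell on average")
```

Under that uniform-spread assumption, the 50^4 case gives 6,250,000 cells and about 1,280 people per cell, consistent with the “more than 1,000 people” claim, while the 10^10 case gives fewer than one person per cell on average.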
I count eight fundamental desires, but they can take countless forms based on context. For example, celebration might lead one person to seek out a certain type of food, while leading another person to regularly go jogging. It’s the same motivation, manifesting in response to two different stimuli.
Here are the eight fundamental desires:
Celebration, the desire to bring more of something into one’s experience
Acquisition, the desire to bring more of something into one’s influence
Insulation, the desire to push something out of one’s experience
Relaxation, the desire to push something out of one’s influence
Curiosity, the desire for unpredictable experience
Boldness, the desire for unpredictable influence
Idealization, the desire for more predictable experience
Control, the desire for more predictable influence
The four fundamental liabilities can impede us from fulfilling our desires, so people often respond by developing instrumental values, which make it easier to fulfill desires. Some of these values are tradeoffs, but others are more constructive. Values inform a society’s public policy.
For the liability of scarcity, the tradeoffs are wastefulness and austerity, and the constructive value is investment.
For the liability of disaster, the tradeoffs are negligence and susceptibility, and the constructive value is preparation.
For the liability of stagnation, the tradeoffs are decadence and dogma, and the constructive value is transcension.
For the liability of conflict, the tradeoffs are turmoil and corruption, and the constructive value is ethics.
Identical desires would not automatically lead to harmony if people want the same thing and start fighting over it. Identical values might help, if it means people support the same policies for society.
Using ethics to reconcile conflict is not a trivial goal, but it makes it much more possible for people to establish mutual trust and cooperation even if they can’t all get everything they want. By working together, they will likely find they can get something just as satisfactory as what they originally had in mind. That’s a society that people can feel good about living in.
Does that all make sense?
Seems like your eight desires are 4 fundamental desires with the possibility of increase or decrease.
If there were 50 gradations, then 0 to −25 would signify desires for less, and 0 to +25 would signify desires for more.
That’s a valid way to look at it. I used to use three axes for them: increase versus decrease, experience versus influence, and average versus variance (or “quantity versus quality”).
I typically just go with the eight desires described above, which I call “motivations”. It’s partially for thematic reasons, but also to emphasize that they are not mutually exclusive, even within the same context.
It is perfectly possible to be both boldness-responsive and control-responsive: seeking to accomplish unprecedented things and expecting to achieve them without interference or difficulty. That’s simultaneously breaking and imposing limits through one’s influence.
Likewise, it’s possible to be both acquisition-responsive and relaxation-responsive: seeking power over a larger dominion without wanting to constantly work to maintain that power.
They’re not scalars, either—curiosity about one topic does not always carry over to other topics. There’s a lot of nuance in motivation, but having concepts that form a basis for motivation-space helps.
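The three-axis decomposition mentioned above can be made explicit: each of the eight motivations is one corner of a cube whose axes are increase vs. decrease, experience vs. influence, and average vs. variance. The coordinate assignments below are my own reading of the definitions given earlier in the thread, so treat them as an illustrative assumption rather than the author’s canonical mapping:

```python
# Each motivation as a corner of the three-axis cube:
# (increase/decrease, experience/influence, average/variance).
from itertools import product

MOTIVATIONS = {
    ("increase", "experience", "average"):  "celebration",   # more experience
    ("increase", "influence",  "average"):  "acquisition",   # more influence
    ("decrease", "experience", "average"):  "insulation",    # less experience
    ("decrease", "influence",  "average"):  "relaxation",    # less influence
    ("increase", "experience", "variance"): "curiosity",     # unpredictable experience
    ("increase", "influence",  "variance"): "boldness",      # unpredictable influence
    ("decrease", "experience", "variance"): "idealization",  # predictable experience
    ("decrease", "influence",  "variance"): "control",       # predictable influence
}

# Every combination of the three binary axes yields exactly one motivation.
corners = product(("increase", "decrease"),
                  ("experience", "influence"),
                  ("average", "variance"))
for corner in corners:
    print(corner, "->", MOTIVATIONS[corner])
```

This also shows why the eight names form a basis for motivation-space: the 2 × 2 × 2 corners exhaust the combinations, and non-exclusivity (being both boldness-responsive and control-responsive) just means a person occupies more than one corner at once in different contexts.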
These motivations are not goals in and of themselves, but they help us describe what sorts of goals people are likely to adopt. You could call them meta-goals. It’s a vocabulary for talking about what people care about and what they want out of life. I suppose it’s part of the basis for my understanding of Fun Theory.
What do you think?
It’s certainly possible for people to have these conflicting desires in their mind. Though I don’t see how that translates to observed desires?
Since reality must obey physical principles. (Though purely internal desires are of course relevant to the person experiencing them, the desires must be demonstrable and observable for anyone else to take them into consideration; otherwise the presumption will be that they’re made up.)
For a real world example, no amount of effort or desire can make a river go uphill and downhill simultaneously.
Someone may ‘seek to accomplish the unprecedented’ of making the river do so and ‘expect to achieve this without interference or difficulty’ but it would be so unusual an activity that a prank would be the likely first guess.
Even if they spent real resources on the river, it will just look like how you would expect it flowing downhill, or flowing uphill with a pumping system if they’re really motivated, or stagnant if perfectly level.
They could rapidly change the flow direction back and forth to try to demonstrate their desires, and simultaneously verbally claim it’s effortless, easy-as-pie, etc., and that the river’s really going both ways at once.
But this would just look like a convoluted prank to a random observer.
I’m not even sure how such a conflicting desire could be credibly demonstrated.
Maybe if they are willing to take bets that the river will in fact go uphill and downhill simultaneously, and since it’s so effortless they’re willing to bet their life savings, home, first born, and so on? (Though that would practically reduce them to penury, since there’s a 100% chance of losing the bet.)
For a physically possible but very unlikely and completely impractical desire, maybe someone has the desire to build a triple decker train wagon since they’re a train enthusiast.
How could they credibly demonstrate ‘seeking to accomplish the unprecedented triple-decker wagon and expecting to achieve the built wagon without interference or difficulty’?
I am skeptical of psychology research in general, but my cursory exploration has suggested to me that it is potentially reasonable to think there are 16. My best estimate is that there are literally 100 or more, but that most of those dimensions largely don’t have big variance/recognizable gradations, or are lost in noise. I think humans are reasonably good at detecting 1 part in 20, and that the 16 estimate above is a reasonable ballpark, meaning I believe 20^16 ≈ 6.5E20 is a good approximation of the number of states in the discretized value space. With fewer than 1E10 humans, this predicts very few exact collisions.
I would be really dubious of any model that suggests there are fewer than 5. Do you have any candidates for systems of 3 or 4 fundamental desires?
Avoiding death
Reproduction
Social standing
Exploration
That covers all known activities directly, or with only one layer of abstraction in the case of ceremonies, fights, etc., for hunter-gatherers up until the invention of agriculture.
I see. I feel like honor/idealism/order/control/independence don’t cleanly decompose to these four even with a layer of abstraction, but your list was more plausible than I was expecting.
That said, I think an arbitrary inter-person interaction with respect to these desires is pretty much guaranteed to be zero or negative sum, as they all depend on limited resources. So I’m not sure what aligning on the values would mean in terms of helping cooperation.
Avoiding death and exploration are usually considered positive sum, at least intra-tribe.
Social standing relative to other tribe members is of course always zero sum by definition.
Reproduction is a mix usually, if babies are presumed to be literally born equal then it’s zero sum when the population is at the maximum limit of the local environment’s carrying capacity. Otherwise it can be positive or negative.
If I discover something first, our current culture doesn’t assign much value to the second person finding it, which is why I mentioned exploration as not positive-sum. Avoiding death literally requires free energy, a limited resource, but I realize that’s an oversimplification at the scale we’re talking about.
Less != zero or negative.