Note that I think we’re talking about similar things, but have slightly different framing. For instance, you say:
I’ve had similar thoughts but formulated them a bit differently. It seems to me that most people have the same bedrock values, like “pain is bad”. Some moral disagreements are based on conflicts of interest, but most are importance disagreements instead. Basically people argue like “X! - No, Y!” when X and Y are both true, but they disagree on which is more important, all the while imagining that they’re arguing about facts. You can see it over and over on the internet.
I think “Value Importance” disagreements definitely do happen, and Ruby talks about them in “The Rock and the Hard Place”.
However, I’m also trying to point at “Fact Importance” as a thing that people often assume away when trying to model each other. I’d even go as far as to say that what seem like intractable “value importance” debates are often “hidden-assumption fact importance” debates.
For instance, we might both have the belief that signalling affects people’s behaviors, and the belief that people are trying to achieve happiness, and we might both assign moderately high probability to each of these factors. However, unless I understand, in your world model, how MUCH you think signalling affects behaviors in comparison to seeking happiness, I’ve probably just unknowingly imported my own importance weights onto those items.
Any time you’re using heuristics (which most good thinkers are), it’s important to go up a level and model the meta-heuristics that determine how much weight a given heuristic gets in a given situation.
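A toy sketch of what I mean, in Python (the factor names, numbers, and function names are all invented for illustration, not anyone’s actual model): we can share the same factors and similar credences, yet predict different things because we weight the factors differently, and a meta-heuristic is just whatever rule sets those weights for a given situation.

```python
def predict_behavior(weights):
    """Combine shared beliefs about the factors with person-specific importance weights."""
    factors = {"signalling": 0.7, "seeking_happiness": 0.8}  # credences we both hold
    return sum(weights[name] * value for name, value in factors.items())

my_weights = {"signalling": 0.2, "seeking_happiness": 0.8}
your_weights = {"signalling": 0.9, "seeking_happiness": 0.1}

print(predict_behavior(my_weights))    # 0.78 -- what I get if I model you with MY weights
print(predict_behavior(your_weights))  # 0.71 -- what your model actually outputs

def meta_heuristic(base_weights, situation_is_public):
    """Meta-heuristic: decide how much the signalling factor should count in this situation."""
    adjusted = dict(base_weights)
    if situation_is_public:
        adjusted["signalling"] *= 2  # arbitrary illustrative boost
    return adjusted
```

The failure mode in the paragraph above is calling predict_behavior with my_weights while believing I’m simulating your model.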
Yeah, I guess I wasn’t separating these things. A belief like “capitalists take X% of the value created by workers” can feel important both for its moral urgency and for its explanatory power; in politics that’s pretty typical.

Depends on the value of X.
Just wanted to quickly assert strongly that I wouldn’t characterize my post cited above as being only about value disagreements (value disagreements might even be a minority of applicable cases).
Consider Alice and Bob who are aligned on the value of not dying. They are arguing heatedly over whether to stay where they are vs run into the forest.
Alice: “If we stay here the axe murderer will catch us!”
Bob: “If we go into the forest the wolves will eat us!!”
Alice: “But don’t you see, the axe murderer is nearly here!!!”
Same value, still a rock and hard place situation.
Similarly, we might both agree on the meta-heuristics in a specific situation, but I have models that apply a given heuristic to 50x the situations that you do. So even though you agree that the heuristic is true, we’ll disagree on how important it is, because you don’t have the models to apply it to all the situations that I can.
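As a toy calculation (invented numbers), the disagreement here isn’t about the heuristic’s per-case validity but about how many cases each person’s models can even see it applying to:

```python
# Toy numbers only -- the point is the multiplier, not the specific values.
value_per_applicable_situation = 1.0  # we agree the heuristic is right wherever it applies
situations_my_models_cover = 50       # my models recognize it applies in all of these
situations_your_models_cover = 1      # your models only surface one such case

my_overall_importance = value_per_applicable_situation * situations_my_models_cover      # 50.0
your_overall_importance = value_per_applicable_situation * situations_your_models_cover  # 1.0
```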