I think political differences come down to values more than to beliefs about facts.
Sometimes it is difficult to tell whether two people have different values, or essentially the same value combined with different models of the world.
For example, two people can both hold the value “it would be bad to destroy humanity”, but one of them has a model in which humanity will likely destroy itself if capitalism continues, while the other has a model in which humanity will likely be destroyed by some totalitarian movement like communism.
But instead of openly discussing their models and finding where they differ, each will accuse the other of not caring about human suffering. Or they will focus on different applause lights, just to emphasise how different they are.
I probably underestimate the differences in values. Some people are psychopaths, and they might not be the only group that differs. But it seems to me that a lot of political mindkilling comes from overestimating those differences, instead of admitting that our own values, combined with a different model of the world, would lead to different decisions. (Because our values are good, the different decisions are evil, and good cannot be evil, right?)
Just imagine that you had certain proof (from observing parallel universes, or from simulations run by a superhuman AI) that, e.g., tolerance of homosexuality inevitably leads to the destruction of civilization, or that every civilization that invents nanotechnology inevitably destroys itself in nanotechnological wars unless the whole planet is united under the rule of the communist party. If you had good reason to believe these models, what would your values make you do?
(And more generally: if you meet a person with strange political opinions, try to imagine a least convenient world in which your values would lead to the same opinions. Even if that is a wrong model of our world, it may still be the model the other person believes to be correct.)
Perfect information scenarios are useful for clarifying some cases, I suppose (and let’s go with the non-humanity-destroying option every time), but I don’t find that they map very closely onto actual situations.
I’m not sure I can aptly articulate my intuition here. By differences in values, I don’t really mean that people’s terminal values would differ much should they each make a list of everything they would want in a perfect world (barring outliers). But the relative weights that people place on those values, while differing only slightly, may end up suggesting quite different policy proposals, especially in a world of imperfect information, even if everyone involved is interested in using reason.
But I’ll concede that some ideologies are much more comfortable with utilitarian analysis, versus more rigid imperatives that are more likely to yield consistent results.
I agree, though I’ll add that which facts people find plausible is itself shaped by their values.