It’s in reply to Quinton saying that there should be no masculine and feminine types of rationality. In other words, whether you are a man or a woman should not determine what the correct/rational answer is to a particular question (barring obvious exceptions). This is in stark contrast to asking whether or not political affiliation should be determined by how rational you are, which is another question entirely.
In other words: Just because correct answers to factual questions should not be determined by gender does not mean that political affiliation should not be determined by correct answers to factual questions.
I think political differences come down to values more than to beliefs about facts. Rationalism doesn’t dictate terminal values.
Sometimes it is difficult to tell whether two people have genuinely different values, or essentially the same value combined with different models of the world.
For example, two people can share the value “it would be bad to destroy humanity”, but one of them has a model that humanity will likely destroy itself under continued capitalism, while the other has a model that humanity would likely be destroyed by some totalitarian movement like communism.
But instead of openly discussing their models and finding the difference, the former will accuse the latter of not caring about human suffering, and the latter will accuse the former of not caring about human suffering. Or they will focus on different applause lights, just to emphasise how different they are.
I probably underestimate the differences in values. Some people are psychopaths, and they might not be the only group whose values differ. But it seems to me that a lot of political mindkilling comes from overestimating the difference, instead of admitting that our own values, combined with a different model of the world, would lead to different decisions. (Because our values are good, the different decisions are evil, and good cannot be evil, right?)
Just imagine that you had certain proof (from observing parallel universes, or from simulations run by a superhuman AI) that, say, tolerance of homosexuality inevitably leads to the destruction of civilization, or that every civilization that invents nanotechnology inevitably destroys itself in nanotechnological wars unless the whole planet is united under the rule of the communist party. If you had good reason to believe these models, what would your values make you do?
(And more generally: If you meet a person with strange political opinions, try to imagine a least convenient world in which your values would lead to the same opinions. Even if that is a wrong model of our world, it may still be the model the other person believes to be correct.)
I agree, though I’ll add that what facts people find plausible are shaped by their values.
Perfect-information scenarios are useful for clarifying some cases, I suppose (and let’s go with the non-humanity-destroying option every time), but I don’t find that they map too closely onto actual situations.
I’m not sure I can aptly articulate my intuition here. By differences in values, I don’t really think people differ so much that they would have very different terminal values if they each made a list of everything they would want in a perfect world (barring outliers). But the relative weights that people place on those values, while differing only slightly, may end up suggesting quite different policy proposals, especially in a world of imperfect information, even if each person is interested in using reason.
But I’ll concede that some ideologies are much more comfortable with utilitarian analysis, while others rely on more rigid imperatives that are more likely to yield consistent results.
I’m always a little suspicious of this line of thinking. Partly because the terminal/instrumental value division isn’t very clean in humans—since more deeply ingrained values are harder to break regardless of their centrality, and we don’t have very good introspective access to value relationships, it’s remarkably difficult to unambiguously nail down any terminal values in real people. Never mind figuring out where they differ. But more importantly, it’s just too convenient: if you and your political enemies have different fundamental values, you’ve just managed to absolve yourself of any responsibility for argument. That’s not connotationally the same as saying the people you disagree with are all evil mutants or hapless dupes, but it’s functionally pretty damn close.
That doesn’t prove it wrong, of course, but I do think it’s grounds for caution.
How about different factions (landowners, truck drivers, soldiers, immigrants, etc.) all advocating their own interests? Doesn’t that count as “different values”?
Or, more simply, I value myself and my family, you value yourself and your family, so we have different values. Ideologies are just a more general and complicated form.
Well, it depends on what you mean by values. I was mainly discussing Randy_M’s comment that rationalism doesn’t dictate terminal values; while different perspectives probably lead to the evolution of different value systems even given identical hardwiring, that doesn’t necessarily reflect different terminal values. Terminal values don’t reflect preferences so much as the algorithm by which preferences evolve; and self-interest is one module of that algorithm, not seven billion separate ones.
No, I think people can be persuaded on terminal values, although that modifies my response above to an extent; rationality will tell you that certain values are more likely to conflict, and noticing internal contradictions (pitting two values against each other) is one way to convince someone to alter their terminal values, or at least adjust their relative worth. Due to the complexity of social reality, I don’t think you are going to find many people with beliefs that are perfectly consistent; that is, any mainstream political affiliation is unlikely to be a shining paragon of coherence and logical progression built upon core principles relative to its competitors. But demonstrate with examples if I’m wrong.
If you can persuade someone to alter (not merely ignore) a value they believe to have been terminal, that’s good evidence that it wasn’t a terminal value.
This is only true if you think humans actually hold coherent values that are internally designated as “terminal” or “instrumental”. Humans only ever even designate statements as terminal values once you introduce them to the concept.
I don’t think we disagree.
To clarify, I suspect most neurotypical humans may possess features of ethical development which map reasonably well to the notion of terminal values, although we don’t know their details (if we did, we’d be most of the way to solving ethics) or the extent to which they’re shared. I also believe that almost everyone who professes some particular terminal (fundamental, immutable) value is wrong, as evidenced by the fact that these not infrequently change.
If terminal values are definitionally immutable, then I used the wrong term.