Oh... so basically the whole Dark Enlightenment school of thought?

I’ve only started reading this strand of thought recently, and haven’t yet made the connection to authoritarianism. I get that they reject modern liberalism, democracy, and the idea that everyone has equal potential, but do they also reject the idea of meritocracy and the notion that everyone ought to have equal opportunity? Do they also believe that an elite group should have large amounts of power over the majority? And do they also believe that different people have (non-minor) differences in intrinsic value as well as ability?
EDIT: thoughts after reading the sources you linked:
Perhaps an anti-egalitarian can be thought of as one who does not value equality as an intrinsic moral good? Even if everyone is valued equally, the solution that produces the most total satisfaction does not necessarily involve everyone being satisfied in roughly equal measure.
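(To make that concrete, here is a toy sketch in Python with entirely made-up numbers: two people are weighted exactly equally, there is a fixed pool of 10 resource units, and one of them simply happens to convert resources into satisfaction more efficiently. The split that maximizes total satisfaction is nowhere near an equal outcome.)

    # Toy model: a fixed budget split between two equally weighted people who
    # convert resources into satisfaction at different diminishing-returns rates.
    import math

    def satisfaction_a(x):
        return 2 * math.sqrt(x)   # A happens to be twice as efficient as B

    def satisfaction_b(y):
        return math.sqrt(y)

    BUDGET = 10.0
    candidates = [i / 10 for i in range(101)]  # allocations to A: 0.0, 0.1, ..., 10.0
    best_a = max(candidates, key=lambda x: satisfaction_a(x) + satisfaction_b(BUDGET - x))

    print(f"A gets {best_a:.1f} units -> satisfaction {satisfaction_a(best_a):.2f}")
    print(f"B gets {BUDGET - best_a:.1f} units -> satisfaction {satisfaction_b(BUDGET - best_a):.2f}")
    # The sum-maximizing split gives A 8.0 units (satisfaction ~5.66) and B 2.0
    # units (satisfaction ~1.41): equal moral weight, decidedly unequal outcome.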
Basically, on Haidt’s moral axes, the anti-egalitarians would score highly only on Harm Avoidance, and low on everything else...
...actually, come to think of it, that’s almost how I scored when I took it a few years ago: 3.7 harm, 2.0 fairness, 0 on everything else.
you’ve given yourself the label “authoritarian”. If you took Haidt’s test, did you score high on authoritarianism? (just trying to pin down what exactly is meant by authoritarianism in this case)
Can’t speak for others, but here’s my take:

s/they/you:

but do they also reject the idea of meritocracy and the notion that everyone ought to have equal opportunity?
I think it’s more important to look at absolute opportunity than relative opportunity.
That said, in my ideal world we all grow up together as one big happy family. (with exactly the right amount of drama)
Do they also believe that an elite group should have large amounts of power over the majority?
Yes, generally. Note that everything can be cast in a negative light by (in)appropriate choice of words.
The elites need not be human, or the majority need not be human.
My ideal world has a nonhuman absolute god ruling all, a human nobility, and nonhuman servants and NPCs.
And do they also believe that different people have (non-minor) differences in intrinsic value as well as ability?
Yes, people currently have major differences in moral value. This may or may not be bad, I’m not sure.
But again, I’m more concerned with people’s absolute moral value, which should be higher. (And just saying “I should just value everyone more”, i.e. “lol I’ll multiply everyone’s utility by 5”, doesn’t do anything.)
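(A tiny Python sketch of why the “multiply everyone’s utility by 5” move is empty, with made-up action names: a positive rescaling of a utility function preserves every ranking, so it changes no decision the function would ever make.)

    # "Multiplying everyone's utility by 5" changes no decisions: a positive
    # rescaling of a utility function leaves the ranking of options untouched.
    utilities = {"help_alice": 3.0, "help_bob": 7.0, "do_nothing": 1.0}

    def best_action(u):
        return max(u, key=u.get)  # pick the option with the highest utility

    scaled = {action: 5 * value for action, value in utilities.items()}

    print(best_action(utilities))  # help_bob
    print(best_action(scaled))     # still help_bob: behaviour is unchanged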
Basically, on Haidt’s moral axes, the anti-egalitarians would score highly only on Harm Avoidance, and low on everything else...
Dunno, you’d have to test them.
My general position on such systems is that all facets of human morality are valuable, and people throw them out/emphasize them for mostly signalling/memetic-infection reasons.
All of those axes sound really important.
you’ve given yourself the label “authoritarian”. If you took Haidt’s test, did you score high on authoritarianism? (just trying to pin down what exactly is meant by authoritarianism in this case)
Haven’t taken the test. Self-describing as an “authoritarian” can only really be understood in the wider social context where authority and hierarchy have been devalued.
So a more absolute description would be that I recognize the importance of strong central coordination in doing things (empirical/instrumental), and find such organization to have aesthetic value. For example, I would not want to organize my mind as a dozen squabbling “free” modules, and I think communities of people should be organized around strong traditions, priests, and leaders.
Of course I also value people having autonomy and individual adventure.
Haven’t taken the test. Self-describing as an “authoritarian” can only really be understood in the wider social context where authority and hierarchy have been devalued.
I think that’s really the crux of it. When someone says they are authoritarian, that doesn’t necessarily have anything to do with present/past authoritarian regimes.
My general position on such systems is that all facets of human morality are valuable
Isn’t that a bit recursive? Human morality defines what is valuable. Saying that a moral value is itself valuable implies some sort of meta-morality. If someone doesn’t assign “respect for authority” intrinsic value (though it may have utility in furthering other values), isn’t that… just the way it is?
My ideal world has a nonhuman absolute god ruling all, a human nobility, and nonhuman servants and NPCs
I think everyone’s ideal world is one where all our actions are directed by a being with access to the CEV of humanity (or, more accurately, each person wants humanity to be ruled by their own CEV). On LessWrong, that’s not even controversial—it would be by definition the pinnacle of rational behavior.
The question is intended to be answered with realistic limitations in mind. Given our current society (or maybe given our society within 50 years, assuming none of that “FOOM” stuff happens) is there a way to bring about a safe, stable authoritarian society which is better than our own? There’s no point to a political stance unless it has consequences for what actions one can take in the short term.
If someone doesn’t assign “respect for authority” intrinsic value (though it may have utility in furthering other values), isn’t that… just the way it is?
No. Generally people are confused about morality, and such statements are optimized for signalling rather than correctness with respect to their actual preferences.
For example, I could say that I am a perfectly altruistic utilitarian. This is an advantageous thing to claim in some circles, but it is also false. I claim that the same pattern applies to non-authoritarianism, having been there myself.
So when I say “all of it is valuable” I am rejecting the pattern “Some people value X, but they are confused and X is not real morality, I only value Y and Z”, which is a common position to take wrt the authority and purity axes on Haidt’s scale, because that is supposedly a difference between liberals and conservatives, hence ripe for in-group signalling.
If some people value X, consider the proposition that it is actually valuable. Sometimes it isn’t, and they’re just weird, but that’s rare, IMO.
The question is intended to be answered with realistic limitations in mind. Given our current society (or maybe given our society within 50 years, assuming none of that “FOOM” stuff happens) is there a way to bring about a safe, stable authoritarian society which is better than our own? There’s no point to a political stance unless it has consequences for what actions one can take in the short term.
You are asking me to do an extremely large computational project (designing not only a good human society, but a plausible path to it), based on assumptions I don’t think are realistic. I don’t have time for that. Some people do though:
Moldbug has written plenty about how such a society could function and come about (the Reaction).
Yvain has also recently laid out his semi-plausible authoritarian human society (Raikoth): eugenics, absolute rule by computer, omnipresent surveillance, etc.
I expect More Right will have some interesting discussion of this as well.
You are asking me to do an extremely large computational project (designing not only a good human society, but a plausible path to it), based on assumptions I don’t think are realistic. I don’t have time for that. Some people do though:
Oh, I didn’t mean that I want you to outline a manifesto or plan or anything.
Do they also believe that an elite group should have large amounts of power over the majority?
was my original question. What I meant was more that if you identify as “authoritarian”, it implicitly means that you think that it is a goal worth working towards in the real world, rather than a platonic ideal. Obviously, if it were possible to ensure a ruler or ruling class competently served the interests of the people, dictatorship would be the best form of government—but someone who identifies as authoritarian is saying that they believe this can actually happen, and that if history had gone differently and we were under a certain brand of authoritarianism right now, we’d be better off.
I could say that I am a perfectly altruistic utilitarian. This is an advantageous thing to claim in some circles, but it is also false.
Hehe... you’d better expect to save quite a few lives if you want to justify staying alive with that preference set (you have organs that could be generating so much utility for so many people!).
“Some people value X, but they are confused and X is not real morality, I only value Y and Z”, which is a common position to take wrt the authority and purity axes on Haidt’s scale,
If you cross out ” but they are confused and X is not real morality” I guess I’m one of those people—I don’t think they are confused about what they value. I just think that I don’t share that value. The phrase “real morality” is senseless—I’m not a moral realist.
I suppose I could be confused about my own values, of course. But when I read Haidt’s work, I became better able to understand what my conservative friends would think about various situations. It improved my ability to empathize. It wouldn’t even have occurred to me to respect authority or purity intrinsically...I used to think that they just weren’t thinking clearly (whereas now I think it’s just a matter of different values)
was my original question. What I meant was more that if you identify as “authoritarian”, it implicitly means that you think that it is a goal worth working towards in the real world, rather than a platonic ideal. Obviously, if it were possible to ensure a ruler or ruling class competently served the interests of the people, dictatorship would be the best form of government—but someone who identifies as authoritarian is saying that they believe this can actually happen, and that if history had gone differently and we were under a certain brand of authoritarianism right now, we’d be better off.
This is a good point and I’m unsure of my answer. I need to think about that. It could be that authoritarianism is as unrealistic as anarchism (I used to be an anarchist, and decided the whole “somehow we will find a way to solve the military aggression problem” thing was too much apologetics. The “somehow we will make the dictator incorruptible” thing may be similar apologetics).
That said, I do reject the idea that values depend on what’s convenient in reality, else I’d worship chaos. I value authority intrinsically whether or not there are realistic ways to design society to reflect that. In that sense perhaps “authoritarian” is a confusing word?
But by using reality to argue this point, I infer that you think it’s an empirical issue, and that order and authority would be intrinsically valuable if we could somehow get them?
I suppose I could be confused about my own values, of course. But when I read Haidt’s work, I became better able to understand what my conservative friends would think about various situations. It improved my ability to empathize. It wouldn’t even have occurred to me to respect authority or purity intrinsically...I used to think that they just weren’t thinking clearly (whereas now I think it’s just a matter of different values)
Think hard whenever your “values” differ from other people’s. There is an anti-pattern of thought where you erroneously trace differences in belief to some supposedly fundamental difference in values, because it allows you to stop thinking without being rude or losing face.
I think that the “different values” thing comes from the same source as the “agree to disagree” thing, and the difference is that there exists a convincing rationalization in the values case.
“Different values” is the polite way to say “this conversation is not worth my time, or otherwise annoying”, not an actual truth. If you confuse it for an actual truth, it acts as a Semantic Stopsign and prevents you from ever realizing your error, if there is one.
Tangent: Way too much of morality is based on signalling. “my values are whatever would be socially advantageous to claim were my values in this social context”.
EDIT: As further evidence for you, I used to have a negative visceral reaction to the idea of authority, and then decided after much thought that it wasn’t so bad and in fact kind of nice. So keep in mind that there are layers and layers of meaningless memetics to wade through before you get to anything like a fundamental value that could differ from someone else’s.
I used to have a negative visceral reaction to the idea of authority, and then decided after much thought that it wasn’t so bad and in fact kind of nice.
Hm… so if you change your mind about a value, does it no longer qualify as a fundamental value? I’m not sure if we are using the word “value” in the same way.
I think it was you who posted a few months ago about moral uncertainty, and I think you also posted that humans are poorly described by utility functions.
If you believe that, you should agree that we don’t necessarily even have an actual set of moral axioms underlying all the uncertainty and signaling. The term “fundamental value” implies a moral axiom in a utility function—and while it is a useful term in most contexts, I think it should be deconstructed for this conversation.
For most people, under the right conditions murder and torture can seem like a good idea. Smart people might act more as if they were operating under a set of axioms, but that’s just because they work hard at being consistent, since inconsistency causes them negative feelings.
So when I say “different values” this is what I mean:
1) John’s anterior cingulate cortex doesn’t light up brightly in response to conflict. He thus does not feel any dissonance when believing two contradictory statements, and is not motivated to re-evaluate his model. Thus, he does not value consistency like me—we have different values. Understanding this, I don’t try to convince him of things by appealing to logical consistency, instead appealing directly to other instincts.
2) Sally’s amygdala activates in response to incest, thanks to the Westermarck instinct. She thus has more motivation to condemn incest between two consenting parties, even when there is no risk of children being involved.
Mine lights up in disgust too, but to a much lesser extent. I’d probably be against incest too, but I’ve set up a hopefully consistent memetic complex of values to prevent my ACC from bothering the rest of my brain, and being against incest would destroy the consistency.
Our values are thus different—Sally’s disgust became moral condemnation, my disgust is just a squick. If Sally could give me a reason to be against incest which didn’t create inconsistency for me, she might well change my view. If she’s also one of those that values consistency, I can change her view by pointing out the inconsistency. Or, I can desensitize her instinctive disgust through conditioning by showing her pictures and video of happy, healthy incestuous couples in love talking about their lives and struggles.
3) Bob has the connections from his ventromedial prefrontal cortex to his amygdala severed, so he is not bothered by other people’s pain. I watch Bob carefully: because he does not factor other people’s pain into his calculations about his next action, I’m afraid he might hurt people I care about, which would bother me a lot. We have different values—but I can still influence Bob by appealing to his honor. He might still be motivated to genuinely respect authority, or to follow purity rules. If he’s like Sally, he might condemn incest “because it is gross”, but the feelings of inbred children might not weigh on his mind at all.
Basically, I see “values” as partly a set of ideas and partly an extension of “personality”. You can change someone’s values through argument, conditioning, etc...but between people there are often differences in the underlying motives which drive value creation, along with the layers of memetics.
(Brain regions are roughly in line with the current understanding of what they do, but take it with a grain of salt—the underlying point is more important.)