So there’s no “quantum leap” of the kind promised by meta-rationalists, or am I missing something?
There is no such thing as a molehill too small to make a mountain out of. But there are at least two things I noticed you missed here:
First, your description of rationalists is too charitable. On meta-rationalist websites they are typically described as unable to reason about systems, as not understanding that their map is not the territory, as prone to wishful thinking, and generally as what we call “Vulcan rationalists”. (Usually with a layer of plausible deniability, e.g. on one page it is merely said that rationalists are a subset of “eternalists”, with a hyperlink to another page that describes “eternalists” as having the aforementioned traits. Each of these claims can be easily defended separately, especially considering that “eternalists” is a made-up word.) With rationalists defined this way, it is easy to see how the other group is superior.
Second, you miss the implication that people disagreeing with meta-rationality are just immature children. There is a developmental scale from 0 to 5, where meta-rationalists are at level 5, rationalists are at level 4, and everyone else is at one of the lower levels.
Another way to express this is the concept of fluidity/nebulosity/whatever, which works like this: You make a map and place everyone you know at some specific point on it. (You can then arrange them into groups, etc.) The important part is that you refuse to place yourself on this map; instead you insist that you are always freely choosing the appropriate point to use in a given situation, thus getting all the advantages and none of the disadvantages, while everyone else is hopelessly stuck at their one point. This obviously makes you the coolest guy in town, at least until someone else comes along with their own map, where you get stuck at one specific point and they get to be the one above the map. (In some sense, this is what Eliezer also tried with his “winning” and “nameless virtue”, only to get reduced to “meh, Kegan level 4” regardless.)
While I am sad you’ve gotten this impression of what we’re here calling meta-rationality, I also don’t have a whole lot to say to convince you otherwise. We have often been foolish when first exploring these ideas and have written about them in ways that do carry status implications; I think we’ve left a bad taste in everyone’s mouths over it. There is also an echo of the second-hand post-modernists’ tendency to view themselves as better than everyone else (although, to be fair, this is nothing new in intellectualism; it is just the most recent version with a similar form).
That said, I do want to address one point you bring up because it might be a misunderstanding of the meta-rationalist position.
The important part is that you refuse to place yourself on this map; instead you insist that you are always freely choosing the appropriate point to use in a given situation, thus getting all the advantages and none of the disadvantages, while everyone else is hopelessly stuck at their one point.
I’m not sure who thinks they have this degree of freedom, but the genesis of the meta-rationalist epistemology is that the map is part of the territory, and thus the map is constrained by the territory and not by an external desire for correspondence or anything else. Thus where we are in the territory greatly influences the kind of map we can draw, to the point that we cannot even hope to draw what we might call an ideal map, because all maps will necessarily carry assumptions imposed by the place of observation.
This doesn’t mean we can choose whatever perspective we like in a given situation. Rather, it means we must acknowledge the non-primacy of any particular perspective (unless we impose a purpose against which to judge), and that from the relatively small part of the territory we can observe to draw our map, we can use the information the map provides to reasonably simulate how it would look if drawn from a different place, and then update our map based on this implied information.
To me it seems rationalists/scientists/theologians/etc. are the ones who have the extra degree of freedom because, although from the inside they restrict themselves to a particular perspective judged on some desirable criteria, those criteria are chosen without being fully constrained, and thus between individuals there is no mechanism for consensus if their preferences disagree. But I understand that from the rationalist perspective this probably looks reversed, because by taking the thing that creates different perspectives and putting it into the map, a seemingly fundamental preference disagreement becomes part of the perspective.
(In some sense, this is what Eliezer also tried with his “winning” and “nameless virtue”, only to get reduced to “meh, Kegan level 4” regardless.)
I think there are plenty of things in LW rationality that point to meta-rationality; I think that’s why we’re engaged with this community, and why many people have come to the meta-rationality position through LW rationality (hence why it’s even called that, among other names like post-rationality). That said, when interacting with and talking to many rationalists (or, if we were all being more humble, what we might call aspiring rationalists), they express having at most episteme of ideas around “winning” and the “nameless virtue”, not gnosis. The (aspiring) meta-rationalists are claiming they do have gnosis here, though to be fair we’re mostly offering doxa as evidence because we’re still working on having episteme ourselves.
This need not be true of all self-identified rationalists, of course, but if we are trying to make a distinction between views people seem to hold within the rationalist discourse, and “rationalist” is the self-identification term used by many people on one side of the distinction, then choosing another name for those of us who wish to identify with the other side seems reasonable. I myself now try to avoid categorizing people and instead focus on categorizing thought in the language I use to describe these ideas, although I’ve not done that here in order to stay anchored on the terms already in use in this discussion. I instead like to talk about people thinking in particular ways and about the limits those ways of thinking have, since we don’t make our thinking, so to speak, but our thinking makes us. This better reflects the way I actually think about these concepts, but unfortunately the most worked-out ideas in meta-rational discourse are not evenly distributed yet.
Thank you for more bits of information that answer my original question in this thread. You have my virtual upvote :)
After reading a bit more about meta-rationality and observing how my perspective changes when I try to think this way, I’ve come to the opinion that the “disagreement on priorities”, as I originally called it, is more significant than I initially acknowledged.
To give an example, if one adopts the science-based map (SBM) as the foundation of their thinking for most practical purposes and only checks other maps when the SBM doesn’t work (or when modelling other people), they will see the world differently from a person who routinely tries to adopt multiple different perspectives when exploring every problem they face. Even though technically their world views are the same, the different priorities (given that both have bounded computational resources) will lead them to explore different parts of the solution space and potentially find different insights. The differences can accumulate through updating in different directions, so, at least in theory, their world views can drift apart to a significant degree.
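Purely as a toy illustration of this bounded-resources point (my own sketch, not drawn from either community’s writings; the landscape, region boundaries, and budget are all made up), a small simulation of two agents with the same evaluation budget but different exploration priorities might look like this:

```python
import math
import random

def insight_quality(x: float) -> float:
    """A rugged one-dimensional landscape standing in for 'quality of insight'."""
    return math.sin(3 * x) + 0.5 * math.sin(17 * x) - 0.05 * (x - 4) ** 2

def explore(regions, budget, rng):
    """Spend a fixed evaluation budget, split evenly across regions; return the best point found."""
    best_x, best_q = None, float("-inf")
    per_region = budget // len(regions)
    for lo, hi in regions:
        for _ in range(per_region):
            x = rng.uniform(lo, hi)
            q = insight_quality(x)
            if q > best_q:
                best_x, best_q = x, q
    return best_x, best_q

rng = random.Random(0)
budget = 60  # both agents have the same bounded resources

# Agent A: sticks to one trusted map (a single region of the space).
single_map = explore([(0.0, 2.0)], budget, rng)
# Agent B: spreads the same budget across several perspectives (regions).
multi_perspective = explore([(0.0, 2.0), (3.0, 5.0), (6.0, 8.0)], budget, rng)

print("single-map agent's best insight:", single_map)
print("multi-perspective agent's best insight:", multi_perspective)
```

With the same total budget, the two agents typically end up at different local optima, which is the kind of drift described above.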
… the genesis of the meta-rationalist epistemology is that the map is part of the territory, and thus the map is constrained by the territory and not by an external desire for correspondence or anything else.
Again, even though I see this idea as being part (or a trivial consequence) of LW-rationality, focusing your attention on how your map is influenced by where you are in the territory gives new insights.
So my current takeaways are: as rationalists who agree with meta-rationalists on (meta-)epistemological foundations, we should consider updating our epistemological priorities in the direction they are advocating; if we can figure out ways to formulate meta-rationalist ideas less inscrutably and with less nebulosity, we should do so, since it will benefit everyone; and we should look into what meta-rationalists have to say about creativity and hypothesis generation, since it might help with formulating a general high-level theory of creative thinking (and if we did that precisely enough to be programmed into computers, it would be pretty significant).
OK, I think I get it. So basically, pissing contests aside, meta-rationalists should probably just concede that LW-style rationalists are also meta-rational and have a constructive discussion about better ways of thinking (I’ve actually seen a bit of this, for example in the comments to this post).
Judging from the tone of your comment, I gather that that’s the opposite of what many of them are doing. Well, that doesn’t really surprise me, but it’s kind of sad.
This is how it seems to me. I may be horribly wrong, of course. But the comments on what you linked...
my problem with the substantial advice on thinking that you give in this post is that… I don’t disagree with it. Nor do I really think that it contradicts anything that has been said on LW. In fact, if it was somewhat polished, cut into a set of smaller posts and posted on LW, I expect that it might get quite upvoted.
I’m not sure if you would find anyone on LW who would disagree!
what you’ve written so far would fit well into the LW consensus.
...are similar to how I often feel. It’s like the meta-rationalists are saying “rationalists are stupid because they don’t see X, Y, Z”, and I am like “but I agree with X, Y, Z, and at least two of them are actually mentioned in the Sequences, so why did you have to start with an assumption that rationalists obviously must be stupid?”
(I had a colleague at one job who always automatically assumed that other people were idiots, so whenever someone was talking about something this colleague knew about, he interrupted him with: “That is wrong. Here is how it actually is: …” And a few times other people were like: “Hey, but you just repeated in different words what he was already saying before you interrupted him!” The guy probably didn’t notice, because he wasn’t paying attention.)
I am aware of my own hostility in this debate, but it is quite difficult for me to be charitable towards someone who pretty much defines themselves as “better than you” (the “meta-” prefix), proceeds to strawman you and refuses to update, and concludes that they are morally superior to you (the Kegan scale). None of this seems like evidence that the other side is open to cooperation.