I believed the inscrutability was intentional: a Dark Arts technique where no one can refute my position if no one can agree on what my position actually is. Then all criticism can be summarily dismissed by a courtier’s reply or some mystical ad hominem like “you are too low on the Kegan scale to even understand what we are saying.”
Fixing this would be a huge improvement. It would make rational discussion actually possible.
Rationalists are afflicted with a frustrating Dunning-Kruger illusion: they cannot understand that there is something they cannot understand.
I am quite sure there are things that have Kolmogorov complexity larger than the size of my brain. As a trivial example, take random strings of sufficient length.
But for the record, “there is something you don’t understand” doesn’t necessarily imply “therefore, I am right”. Neither does “there is something you don’t understand” necessarily imply “and it is this specific thing”.
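(A sketch of the standard counting argument behind that trivial example, for anyone who wants it spelled out:)

```latex
% There are 2^n binary strings of length n, but strictly fewer
% descriptions (programs) shorter than n bits:
\sum_{i=0}^{n-1} 2^i = 2^n - 1 < 2^n
% So by pigeonhole, for every n there is at least one string x of
% length n with no description shorter than itself:
\exists\, x \in \{0,1\}^n : K(x) \ge n
```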
Just as irrationalists can’t understand the rationalist critique, rationalists can’t understand the meta-rational critique.
Speaking for myself, I would mostly like to have some assurance that what you are trying to explain is not one of the following:
the Straw Vulcans are actually not rational;
people who identify as “rationalists” are still stupid humans and make a lot of stupid human mistakes;
some people who self-identify as “rationalists” are actually quite embarrassing;
and generally, merely applying a label on oneself does not make a human more rational;
some things cannot be modeled properly by too simple (e.g. linear) equations, because they have many important details;
armchair reasoning is not a substitute for actual empirical data;
(...and some other similar stuff I forgot now...)
Because if it happens to be one of those, then we already have an agreement, we just seem not to have the common knowledge that we already have the agreement.
And if it really is something else… then we have an agreement that a simpler explanation for the simple creatures too low on the Kegan scale would really be helpful.
Frankly, this all seems to me, from outside, like another status move. First you have a group that claims higher status by being inscrutable. And it works, just like it worked for many other groups. But then a problem happens: other people can play the inscrutability game, too.
You could try competing with them, but you actually don’t want to go too far in that direction. You want to stay within shouting distance of rationalists: far enough to keep higher status, not so far as to become irrelevant. If only there were a way to have your cake and eat it, too… Oh, there is one! You can do the same status move again, and counter-signal deep wisdom by being less inscrutable.
Congratulations on becoming the world’s first meta-meta-rationalist!
But this view does not make me appreciate your move less. Zero-sum status games within a group can still produce positive externalities for the rest of the world (for example, when rich people decide to compete by donating to charity instead of buying expensive cars). Looking forward to the positive externalities of meta-meta-rationality in the form of insightful and easy-to-understand articles. If they are good, I will just call them “rationality” in my head. ;)
try “there is no one-size-fits-all epistemology”. With a side of “no one-size-fits-all smartness”.
That issue, if it is an issue, reduplicates itself with rationalists versus pre-rationalists.
Except for the no-one-size-fits-all-epistemology epistemology with a flavor of Westernized Buddhism, I guess.
This is such a cheap trick I wonder why people keep falling for it: “Other people have epistemologies. I don’t have an epistemology. I have a meta-epistemology!” “Other people have beliefs. I don’t have beliefs. I have meta-beliefs!” “Other people use strategies. I don’t have a strategy. I have a meta-strategy!” “Other people use algorithms. I don’t use an algorithm. I use a meta-algorithm!” “Other people try to be rational. I don’t try to be rational. I try to be meta-rational!”
But this is not how it works. For certain definitions, meta-X is still a subset of X; you don’t get beyond X by saying “But I am meta!”. A universal Turing machine is still a Turing machine. A compiler or an interpreter is still a program. An algorithm which tries several algorithms and chooses the one which seems to work best is still an algorithm. Saying “meta” is not a get-out-of-jail-free card. If a statement is true for all algorithms, it is also true for the “algorithm that tries several algorithms”; there is no free lunch.
Similarly, saying: “I don’t have an epistemology; instead I have several epistemologies, and I use different ones in different situations” is a kind of epistemology. Also, some important details are swept under the rug, for example: How do you choose which epistemology is appropriate for which situation? How do you choose which epistemologies to use at all? How do you create new epistemologies? How do you decide whether the existing ones need to be updated or even discarded? “I don’t have a system, I have multiple systems.” Yeah, but then you also need a system of systems, and that is a system. Closing your eyes does not make it go away.
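To make that concrete, here is a minimal sketch (in Python; the candidate algorithms are my own arbitrary choices, purely for illustration) of a “meta-algorithm” that tries several algorithms and picks the one that seems to work best. Note its signature: a list goes in, a sorted list comes out. It is still just an algorithm:

```python
import timeit

def insertion_sort(xs):
    # A plain algorithm.
    xs = list(xs)
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs

def builtin_sort(xs):
    # Another plain algorithm.
    return sorted(xs)

def meta_sort(xs, candidates=(insertion_sort, builtin_sort)):
    # The "meta" move: benchmark the candidates on a small sample
    # and run whichever was fastest. The result has exactly the same
    # type as the algorithms it selects among -- it is an algorithm.
    sample = list(xs)[:100]
    fastest = min(candidates,
                  key=lambda f: timeit.timeit(lambda: f(sample), number=10))
    return fastest(xs)

print(meta_sort([3, 1, 2]))  # [1, 2, 3]
```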
But this is not how it works. For certain definitions, meta-X is still a subset of X;
And for others, it isn’t.
If a statement is true for all algorithms, it is also true for the “algorithm that tries several algorithms”;
Theoretically, but there is no such algorithm.
Similarly, saying: “I don’t have an epistemology; instead I have several epistemologies, and I use different ones in different situations” is a kind of epistemology.
But it’s not a single algorithmic epistemology.
Also, some important details are swept under the rug, for example: How do you choose which epistemology is appropriate for which situation?
How do you do anything for which there isn’t an algorithm? You use experience, intuition, and other System 1 stuff.
This is such a cheap trick
It isn’t in all cases. There is a genuine problem in telling whether a claim of radically superior knowledge is genuine. You can’t round them all off to fraud.
I would be content with just someone saying: “this person is a meta-rationalist, and this is what s/he has achieved”.
I think the idea of “meta-rationality” is that evaluating hypotheses using Bayes’ rule isn’t the limiting factor for humans. The hard part is coming up with good hypotheses. For that you need to make your mind a bit crazy and free, in a way that’s hard to describe in Bayesian terms. That applies to both science and art, and LW doesn’t really equip you for it.
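A minimal sketch of where the easy and hard parts fall (Python, with toy numbers of my own): once a hypothesis list exists, the Bayesian update is a few lines of bookkeeping; nothing in the formalism says where the list comes from.

```python
# Toy Bayesian update over an explicit, pre-given hypothesis list.
# The hard part -- inventing the entries of `likelihood` -- happens
# before the first line of this program runs.
likelihood = {"fair coin": 0.5, "two-headed coin": 1.0}  # P(heads | h)
prior      = {"fair coin": 0.9, "two-headed coin": 0.1}  # P(h)

def update(prior, likelihood):
    # posterior(h) is proportional to prior(h) * P(data | h);
    # here the observed data is a single "heads".
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

print(update(prior, likelihood))
# {'fair coin': 0.818..., 'two-headed coin': 0.181...}
```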
If common sense, and its associated logic, could answer the questions it poses, there would never have been philosophy, or a desire for it. The classic philosophical questions like “where did everything come from” cannot be answered by common sense, and therefore the answer, whatever it is, must be crazy, in common sense terms.
To take an analogy, mathematicians could not answer every question in terms of integers, the counting numbers of common sense, so they had to invent fractions, and then real numbers, and eventually complex numbers, which include the notoriously crazy “imaginary” numbers.
And then there is physics...the subject that gave us, thanks to Wolfgang Pauli, the phrase “not crazy enough to be true”.
And physics gave us physicalism, the default philosophy of sensible, uncrazy philosophers. What can that tell us about the origins of the world, the nature of consciousness, or how one should live one’s life? If you are lucky, the answer will be “consult science”… if not, it will be “don’t ask that question”. Denial, dissolution and deflation are the fate of anything that doesn’t fit physicalism’s Procrustean bed. Consciousness is an illusion, ethics merely subjective, and so on.
Yeah, many people in the rationalist community are too eager to dissolve words and miss out on interesting thoughts as a result. It’s an unfortunate habit, but thankfully some of the most productive people in the community don’t share it. For example, Nick Bostrom, whose philosophy is pretty much LW, is interested in the nature of consciousness. Wei Dai, who created UDT, is interested in metaethics and doesn’t dismiss it. And so on. I think such “steelmanning” of philosophical ideas in search of new hypotheses is more productive than “tabooing” them. That said, we should still be wary of sliding into woo, and LW ideas are a good antidote for that.
It’s usually the case that the rank and file are a lot worse than the leaders.
I’ve read the article and then also A first lesson in meta-rationality but I must confess I still have no idea what he’s talking about. The accusations of inscrutability seem to be spot on.
Perhaps I should read more about meta-rationality to get it, but just to keep me motivated, can anyone explain in simple terms what the deal is about, or perhaps give me an example of meta-rationalist belief that rationalists don’t share?
I’d say the biggest difference you’ll notice, and the one that affects the most things, is the change in epistemology.
Rationalist epistemology, like the epistemology of other similar “rational” systems of thought (cf. scientism, theology), assumes there is a single correct way of understanding the world, with rationalists perhaps having the high ground in viewing the project as finding the correct epistemology regardless of what it implies.
The meta-rationalists/post-modern position is that this is not possible because epistemology necessarily influences ontology so we cannot possibly have a single “correct” understanding of the world. In this view an epistemology and the ontology it produces can at best be useful to some telos (purpose) but we cannot assign one the prime position as the “correct” ontology for metaphysical reality because we have no way to decide what “correct” is that is independent of the epistemology in which we develop our understanding of “correct”. Thus the epistemology of rationality, which seems to target most accurately predicting reality based on known information, is but one useful way of understanding the world within the meta-rationalist/post-modern view, and others may be more useful for serving other purposes.
Both stand in contrast to the pre-rational approach to epistemology, which does not assume everything is knowable and will accept mystery where explanation is not available.
Not sure if that really achieves the “simple terms” aim, so maybe I can put it like this:
The pre-rational person can’t know some things. The rational person doesn’t know some things. The meta-rational person knows they can’t know some things.
Thank you, this is a pretty clear explanation. I did read a bit more from meaningness.com yesterday and what I gathered was also pointing in the direction of this sort of meta-epistemological relativism.
However, I still don’t really see a significant disagreement. The map/territory distinction, which I see as one of the key ideas of rationalism, seems to be exactly about this. So I see rationalism as saying “the map is not the territory and you never have unmediated access to the territory but you can make maps that are more or less useful in different contexts for different purposes; here are some tools for mapmaking and updating, and btw, the maps produced by science are great for most purposes, so we mostly use those and build new ones on top of them”.
So with what I learned so far, if I try to formulate the disagreement, it would probably be something like this:
Rationalists: sure, no map is objectively true, but those science maps work really well for most purposes and a lot of people are working on improving them; everyone else would do better to invest their time building on top of science maps; also, Bayesian updating is the best method for deciding how to update your map.
Meta-rationalists: yeah, science maps are pretty awesome, but non-scientific maps work rather well for some people too, so we should pay attention to those as well; Bayesian updating is great, but that’s the easy part—the hard part is formulating the hypotheses.
I’m not sure if I’m capturing most of the disagreement here, but at least this part seems to be more about different priorities than about fundamentally different world views. So there’s no “quantum leap” of the kind promised by meta-rationalists, or am I missing something?
So there’s no “quantum leap” of the kind promised by meta-rationalists, or am I missing something?
There is no molehill too small to make a mountain out of. But there are at least two things I noticed you missed here:
First, your description of rationalists is too charitable. On meta-rationalist websites they are typically described as unable to reason about systems, not understanding that their map is not the territory, prone to wishful thinking, and generally as what we call “Vulcan rationalists”. (Usually with a layer of plausible deniability, e.g. on one page it is merely said that rationalists are a subset of “eternalists”, with a hyperlink to another page that describes “eternalists” as having the aforementioned traits. Each of these claims can be easily defended separately, considering that “eternalists” is a made-up word.) With rationalists defined like this, it is easy to see how the other group is superior.
Second, you miss the implication that people disagreeing with meta-rationality are just immature children. There is a development scale from 0 to 5, where meta-rationalists are at level 5, rationalists are at level 4, and everyone else is at some of the lower levels.
Another way to express this is the concept of fluidity/nebulosity/whatever, which works like this: You make a map, and place everyone you know as some specific point on this map. (You can then arrange them into groups, etc.) The important part is that you refuse to place yourself on this map; instead you insist that you are always freely choosing the appropriate point to use in a given situation, thus getting all the advantages and none of the disadvantages; while everyone else is just hopelessly stuck at their one point. This obviously makes you the coolest guy in town—of course until someone else comes along with their map, where you get stuck at one specific point, and they get to be the one above the map. (In some sense, this is what Eliezer also tried with his “winning” and “nameless virtue”, only to get reduced to “meh, Kegan level 4” regardless.)
While I am sad you’ve gotten this impression of what we’re here calling meta-rationality, I also don’t have a whole lot to say to convince you otherwise. We have often been foolish when first exploring these ideas and have written about them in ways that do have status implications, and I think we’ve left a bad taste in everyone’s mouths over it; plus there’s an echo of the second-hand post-modernists’ tendency to view themselves as better than everyone else (although to be fair this is nothing new in intellectualism, just the most recent version of it that has a similar form).
That said, I do want to address one point you bring up because it might be a misunderstanding of the meta-rationalist position.
The important part is that you refuse to place yourself on this map; instead you insist that you are always freely choosing the appropriate point to use in a given situation, thus getting all the advantages and none of the disadvantages; while everyone else is just hopelessly stuck at their one point.
I’m not sure who thinks they have this degree of freedom, but the genesis of the meta-rationalist epistemology is that the map is part of the territory and is thus constrained by the territory, not by an external desire for correspondence or anything else. Thus where we are in the territory greatly influences the kind of map we can draw, to the point that we cannot even hope to draw what we might call an ideal map, because all maps will necessarily carry assumptions imposed by the place of observation.
This doesn’t mean that we can always choose whatever perspective to use in a given situation, but rather that we must acknowledge the non-primacy of any particular perspective (unless we impose a purpose against which to judge). We can then, from the relatively small part of the territory from which we observe to draw our map, use information provided by the map to reasonably simulate how the map would look if we could view the territory from a different place, and then update our map based on this implied information.
To me it seems rationalists/scientists/theologians/etc. are the ones who have the extra degree of freedom because, although from the inside they restrict themselves to a particular perspective judged on some desirable criteria, those criteria are chosen without being fully constrained, and thus between individuals there is no mechanism of consensus if their preferences disagree. But I understand that from the rationalist perspective this probably looks reversed, because by taking the thing that creates different perspectives and putting it in the map, a seemingly fundamental preference disagreement becomes part of the perspective.
(In some sense, this is what Eliezer also tried with his “winning” and “nameless virtue”, only to get reduced to “meh, Kegan level 4” regardless.)
I think there are plenty of things in LW rationality that point to meta-rationality, and I think that’s why we’re engaged with this community and many people have come to the meta-rationality position through LW rationality (hence even why it’s being called that, among other names like post-rationality). That said, interacting with many rationalists (or, if we were all being more humble, what we might call aspiring rationalists) and talking to them, they express having at most episteme of ideas around “winning” and “nameless virtue”, and not gnosis. The (aspiring) meta-rationalists are claiming they do have gnosis here, though to be fair we’re mostly offering doxa as evidence because we’re still working on having episteme ourselves.
This need not be true of all self-identified rationalists, of course, but if we are trying to make a distinction between views people seem to hold within the rationalist discourse, and “rationalist” is the self-identification term used by many people on one side of the distinction, then choosing another name for those of us who wish to identify on the other side seems reasonable. I myself now try to avoid categorization of people and instead focus on categorization of thought in the language I use to describe these ideas, although I’ve not done that here, to remain anchored on the terms already in use in this discussion. I instead like to talk about people thinking in particular ways, and the limits those ways of thinking have, since we don’t make our thinking, so to speak, but our thinking makes us. This better reflects the way I actually think about these concepts, but unfortunately the most worked-out ideas in meta-rational discourse are not evenly distributed yet.
Thank you for more bits of information that answer my original question in this thread. You have my virtual upvote :)
After reading a bit more about meta-rationality and observing how my perspective changes when I try to think this way, I’ve come to an opinion that the “disagreement on priorities”, as I have originally called it, is more significant than I originally acknowledged.
To give an example, if one adopts the science-based map (SBM) as the foundation of their thinking for most practical purposes and only checks the other maps when the SBM doesn’t work (or when modelling other people), they will see the world differently from a person who routinely tries to adopt multiple different perspectives when exploring every problem they face. Even though technically their world views are the same, the different priorities (given that both have bounded computational resources) will lead them to exploring different parts of the solution space and potentially finding different insights. The differences can accumulate through updating in different directions, so, at least in theory, their world views can drift apart to a significant degree.
… the genesis of the meta-rationalist epistemology is that the map is part of the territory and is thus constrained by the territory, not by an external desire for correspondence or anything else.
Again, even though I see this idea as being part (or a trivial consequence) of LW-rationality, focusing your attention on how your map is influenced by where you are in the territory gives new insights.
So my current takeaways are: as rationalists who agree with meta-rationalists on (meta-)epistemological foundations, we should consider updating our epistemological priorities in the direction they are advocating; if we can figure out ways to formulate meta-rationalist ideas in a less inscrutable way, with less nebulosity, we should do so—it will benefit everyone; we should look into what meta-rationalists have to say about creativity / hypothesis generation—perhaps it will help with formulating a general high-level theory of creative thinking (and if we do it in a way that’s precise enough to be programmed into computers, that would be pretty significant).
Ok, I think I get it. So basically, pissing contests put aside, meta-rationalists should probably just concede that LW-style rationalists are also meta-rational and have a constructive discussion about better ways of thinking (I’ve actually seen a bit of this, for example in the comments to this post).
Judging from the tone of your comment, I gather that that’s the opposite of what many of them are doing. Well, that doesn’t really surprise me, but it’s kind of sad.
This is how it seems to me. I may be horribly wrong, of course. But the comments on what you linked...
my problem with the substantial advice on thinking that you give in this post is that… I don’t disagree with it. Nor do I really think that it contradicts anything that has been said on LW. In fact, if it was somewhat polished, cut into a set of smaller posts and posted on LW, I expect that it might get quite upvoted.
I’m not sure if you would find anyone on LW who would disagree!
what you’ve written so far would fit well into the LW consensus.
...are similar to how I often feel. It’s like the meta-rationalists are saying “rationalists are stupid because they don’t see X, Y, Z”, and I am like “but I agree with X, Y, Z, and at least two of them are actually mentioned in the Sequences, so why did you have to start with an assumption that rationalists obviously must be stupid?”
(I had a colleague at one job who always automatically assumed that other people were idiots, so whenever someone was talking about something this colleague knew about, he interrupted him with: “That is wrong. Here is how it actually is: …” And a few times other people were like: “Hey, but you just repeated in different words what he was already saying before you interrupted him!” The guy probably didn’t notice, because he wasn’t paying attention.)
I am aware of my own hostility in this debate, but it is quite difficult for me to be charitable towards someone who pretty much defines themselves as “better than you” (the “meta-” prefix), proceeds with strawmanning you and refuses to update, and concludes that they are morally superior to you (the Kegan scale). None of this seems like evidence that the other side is open to cooperation.
It’s not important whether someone can tell you that the map isn’t the territory when you ask them; the important thing is how they reason in practice.
You are steelmanning the rationalist position; many rationalists do say, either explicitly or implicitly, that there is one true map, which they more or less identify with the territory, and that they have it.
That could very well be. I had an impression that meta-rationalists are arguing against a strawman, but that would just mean we disagree about the definition of “rationalist position”.
I agree that one-true-map rationalism is rather naive and that there are many people who hold this position, but I haven’t seen much of this on LW. Actually, LW contains the clearest description of the map/territory relationship that I’ve seen, no nebulosity or any of that stuff.
For me, the philosophical implications of “there is no one true map” were the first quantum leap. How is this statement not a big deal?
That is probably the first post of that site that doesn’t make me want to pull my hair out. I too am trying to create a taxonomy of the antithetical approaches to rationality. So far, I’ve classified them along two dimensions: the existence of an objective truth (yes/no) and the fallibility of some part of human understanding (yes/no). On the other hand, I don’t feel that some political movements are against the method of rationality so much as against the actual content of what said method has discovered. If one had the patience, and if an alt-righter (say) had “climate change is a hoax” as a true objection, I believe that one could theoretically arrive at an agreement on what the facts are and what a fair interpretation is. Basically, I would just write off political movements and their associated moral panics as generic tribalism.
Meta-rationalists have been promising a coherent account of meaning for nearly a century. Somehow, we’ve never delivered, although we think we understand it quite well. It’s time we put up or shut up.
I wholeheartedly agree on the last sentence. I also believe that to ‘overcome’ rationality, one needs transfinite computation, so good luck with that.
Why are you trying to create a taxonomy of the antithetical approaches to rationality? What would you do with that once you had it?
I’m not opposed, mind, I just don’t see the use.
Just to locate it amongst the possible systematic approaches to life. That might serve as an introduction—for the general public—to what rationality is and how to practice it, to be put, for example, in a book or a YouTube channel.
(cross-posting my comment on this from the original because i think it might be of more interest here)
I might write a more detailed response along these lines depending on where my thinking takes me, but I’ve previously thought about this issue, and after thinking about it more since reading this yesterday, it still seems to me that meta-rationality is specifically inscrutable because it needs meta-rationality to explain itself.
In fairness, this is a problem for rationality too, because it can’t really explain itself in terms of pre-rationality, and from what I can tell we actually don’t know that well how to teach rationality either. STEM education mostly seems to teach some of the methods of rationality, like how to use logic to manipulate symbols, but tends to do so in a way that ends up domain-restricted. Most STEM graduates are still pre-rational thinkers in most domains of their lives, though they may dress up their thoughts in the language of rationality, and this is specifically what projects like LessWrong are all about: getting people to at least be actually rational rather than pre-rational in rationalist garb.
But even with CFAR and other efforts, LW seems to be only marginally more successful than most: I know a lot of LW/CFAR folks who have read, written, and thought about rationality a lot, and they still struggle with many of the basics, not only failing to adopt the rationalist world view but failing even to stop using the pre-rationalist world view and notice when they don’t understand something. To be fair, marginal success is all LW needed to achieve to satisfy its goal of producing a supply of people capable of doing AI safety research, but I think it’s telling that even a project so directed at making rationality learnable has been only marginally successful, and from what I can tell not by making rationality scrutable but by creating lots of opportunities for enlightenment.
Given that we don’t even have a good model of how to make rationality truly scrutable, I’m not sure we can really hope to make meta-rationality scrutable. What seems to me more likely is that we can work to find ways of not explaining meta-rationality but training people into it. Of course this is already what you’re doing with Meaningness, but it’s also for this reason I’m not sure we can do more than what Meaningness has so far been working to accomplish.
But if meta-rationality is inscrutable to rationality, how do you know it even exists? At least Bayesian rationalists have some solace in Cox’s theorem, or the coherence theorem, or the Church-Turing thesis. What stops me from declaring there’s a sigma-rationality, which is inscrutable to all n-rationalities below it? What does meta-rationality even imply, for the real world?
You are taking the inscrutability thing a bit too strongly. Someone can transition from rationality to meta-rationality, just as someone can transition from pre-rationality to rationality. Pre-rationality has limitations, because yelling tribal slogans at people who aren’t in your tribe doesn’t work.
What does meta-rationality even imply, for the real world?
What does rationality imply? You can’t actually run Solomonoff Induction, so FAPP you are stuck with a messy plurality of approximations. And if you notice that problem...
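(For reference, the uncomputable object in question, stated in its standard form; nothing here is specific to this thread:)

```latex
% Solomonoff's universal prior: weight every program p that makes a
% universal prefix machine U print an output beginning with x.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
% Evaluating M exactly would require knowing which programs halt,
% so it is only semicomputable; real agents work with approximations.
```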
But if meta-rationality is inscrutable to rationality, how do you know it even exists?
You see the holes rationality doesn’t fill and the variables it doesn’t constrain and then you go looking for how you could fill them in and constrain them.
What stops me from declaring there’s a sigma-rationality, which is inscrutable to all n-rationalities below it?
Nothing. We are basically saying we’re in the position of applying a theory of types to ontology and meta-rationality is just one layer higher than rationality. We could of course go on forever, but being bounded that’s not an option. There is of course some kind of meta-meta-rationality ontological type and on up for any n, but working with it is another matter.
But once you realize you’re in this position, you notice that type theory doesn’t work so well and maybe you want something else instead. Maybe the expressive power of self-referential theories isn’t so bad after all, although when working with these theories it’s pretty helpful to work out a few layers of self-reference before trying to collapse them, because otherwise you definitely can’t hope to notice when you’ve switched between consistency and completeness.
I’m not sure we can really hope to make meta-rationality scrutable.
How about making an “ideological Turing test”? If rationalists could successfully pretend to be meta-rationalists, would that count as a refutation of the claim that meta-rationalists understand things that are beyond the understanding of mere rationalists?
Or is even this just rationalist-level reasoning that from a meta-rationalist point of view makes about as much sense as a hypothetical pre-rationalist asking rationalists to produce a superior horoscope?
The point about meta-rationality as Chapman describes it is that it isn’t a single point of view.
If you wanted to run an ideological Turing test on Chapman, you would need to account for his Tantric Buddhist background. Most rationalists would likely fail an ideological Turing test for Tantric Buddhism, but that wouldn’t mean much for the core thesis.
At first I was going to say “yes” to your idea, but with the caveat that the only folks I’d trust to judge this are other folks we’d agree are meta-rationalists. But then this sort of defeats the point, doesn’t it, because I already believe rationalists couldn’t do this and if they did it would in fact be evidence that even if they don’t call themselves meta-rationalists I would say they have thought processes similar to those who do call themselves meta-rationalists.
“Rationalist” and “meta-rationalist” are mostly labels for stochastic categories around the complexity of the thinking people do. No one properly is or is not a rationalist or meta-rationalist, but instead can at best be sufficiently well described as one.
I don’t mean this to be wily: I think what you are asking for (and the entire idea of an “ideological Turing test” itself) confounds causality in ways that make it only seem to work from rationalist-level reasoning. From my perspective, the taking on of another’s perspective in this test is already incorporated into meta-rationalist-level reasoning, and so is not really a test of meta-rationality, in the same way a “logical argument test” would be meaningless to a rationalist but a powerful tool for more complex thought for the pre-rationalist.
Maybe the short version of this is: meta-rationalists can’t do what rationalists ask, but that’s okay because neither can rationalists perform the analogous task for pre-rationalists, so asking meta-rationality to be explicable in terms of rationality is epistemically unfair and asking for too much proof.
I believed the inscrutability was intentional; a Dark Arts technique where no one can refute my position if no one can agree on what my position actually is. Then all criticism can be summarily dismissed by a courtier’s reply or some mystical ad-hominem like “you are too low on Kegan scale to even understand what we are saying.”
Fixing this would be a huge improvement. It would make rational discussion actually possible.
I am quite sure there are things that have Kolmogorov complexity larger than the size of my brain. As a trivial example, random strings of sufficiently large size.
But for the record, “there is something you don’t understand” doesn’t necessarily imply “therefore, I am right”. Neither does “there is some thing you don’t understand” necessarily imply “and it is this specific thing”.
Speaking for myself, I would mostly like to have some assurance that what you are trying to explain is not one of the following:
the Straw Vulcans are actually not rational;
people who identify as “rationalists” are still stupid humans and make a lot of stupid human mistakes;
some people who self-identify as “rationalists” are actually quite embarrassing;
and generally, merely applying a label on oneself does not make a human more rational;
some things cannot be modeled properly by too simple (e.g. linear) equations, because they have many important details;
armchair reasoning is not a substitute for actual empirical data;
(...and some other similar stuff I forgot now...)
Because if it happens to be one of those, then we already have an agreement, we just seem not to have the common knowledge that we already have the agreement.
And if it really is something else… then we have an agreement that a simpler explanation for the simple creatures too low on the Kegan scale would really be helpful.
.
Frankly, this all seems to me, from outside, like another status move. First you have a group that claims higher status by being inscrutable. And it works, just like it worked for many other groups. But then a problem happens: other people can play the inscrutability game, too.
You could try competing with them, but you actually don’t want to go too far in that direction. You want to stay within the shouting distance from rationalists; far enough to keep higher status, not too far to become irrelevant. If there only was a way to have your cake and eat it, too… Oh, there is one! You can do the same status move again, and counter-signal deep wisdom by being less inscrutable.
Congratulations on becoming the world’s first meta-meta-rationalist!
But this view does not make me appreciate your move less. Zero-sum status games within a group can still produce externalities for the rest of the world (for example, when rich people decide to compete by donating to charity instead of buying expensive cars). Looking forward to the positive externalities of meta-meta-rationality in form of insightful and easy-to-understand articles. If they are good, I will just call them “rationality” in my head. ;)
try “there is no one-size-fits-all epistemology”. With a side of “no one-size-fits-all smartness”.
That issue, if it is an issue, reduplicates itself with rationalists versus pre-rationalists.
Except for the no-one-size-fits-all-epistemology epistemology with a flavor of Westernized Buddhism, I guess.
This is such a cheap trick I wonder why people keep falling for it: “Other people have epistemologies. I don’t have an epistemology. I have a meta-epistemology!” “Other people have beliefs. I don’t have beliefs. I have meta-beliefs!” “Other people use strategies. I don’t have a strategy. I have a meta-strategy!” “Other people use algorithms. I don’t use an algorithm. I use a meta-algorithm!” “Other people try to be rational. I don’t try to be rational. I try to be meta-rational!”
But this is not how it works. For certain definitions, meta-X is still a subset of X; you don’t get beyond X by saying “But I am meta!”. Universal Turing machine is still a Turing machine. A compiler or an interpreter is still a program. An algorithm which tries several algorithms and chooses the one which seems to work best, is still an algorithm. Saying “meta” is not a get-out-of-jail-free card. If a statement is true for all algorithms, it is also true for the “algorithm that tries several algorithms”; there is no free lunch.
Similarly, saying: “I don’t have an epistemology; instead I have several epistemologies, and I use different ones in different situations” is a kind of epistemology. Also, some important details are swept under the rug, for example: How do you choose which epistemology is appropriate for which situation? How do you choose which epistemologies to use at all? How do you create new epistemologies? How do you decide whether the existing ones need to be updated or even discarded? “I don’t have a system, I have multiple systems.” Yeah, but then you also need a system of systems, and that is a system. Closing your eyes does not make it go away.
And for others, it isn’t.
Theoretically, but there is no such algorithm.
But it’s not a single algorithmic epistemology.
How do you do anything for whcih there isn’t an algorithm? You use experience, intuition, and other system 1 stuff.
It isn’t in all cases. There is a genuine problem in telling whether a claim of radically superior knowledge is genuine, You can;t round them all off to fraud.
I would be content in just someone saying: “this person is a meta-rationalist, and this is what s/he has achieved”.
I think the idea of “meta-rationality” is that evaluating hypotheses using Bayes’ rule isn’t the limiting factor for humans. The hard part is coming up with good hypotheses. For that you need to make your mind a bit crazy and free, in a way that’s hard to describe in Bayesian terms. That applies to both science and art, and LW doesn’t really equip you for it.
If common sense, and its associated logic, could answer the questions it poses, there would never have been philosophy, or a desire for it. The classic philosophical questions like “where did everything come from” cannot be answered by common sense, and therefore the answer, whatever it is, must be crazy, in common sense terms.
To take an analogy, mathematicians could not answer every question in terms of integers, the counting numbers of common sense, so they had to invent fractions, and then real numbers, and eventually complex numbers, which include the notoriously crazy “imaginary” numbers.
And then there is physics...the subject that gave us, thanks to Wolfgang Pauli, the phrase “not crazy enough to be true”.
And physics gave us physicalism, the default philosophy of sensible, uncrazy philosophers. What can that tell us about the origins of the world, the nature of consciousness, or how one should live ones life? If you are lucky, the answer will be “consult science”..if not it will be “don’t ask that question”. Denial, dissolution and deflation are the fate of anything that doesn’t fit physicalism’s Procrustean bed. Consciousness is an illusion, ethics merely subjective, and so on.
Yeah, many people in the rationalist community are too eager to dissolve words and miss out on interesting thoughts as a result. It’s an unfortunate habit, but thankfully some of the most productive people in the community don’t share it. For example, Nick Bostrom, whose philosophy is pretty much LW, is interested in the nature of consciousness. Wei Dai, who created UDT, is interested in metaethics and doesn’t dismiss it. And so on. I think such “steelmanning” of philosophical ideas in search of new hypotheses is more productive than “tabooing” them. That said, we should still be wary of sliding into woo, and LW ideas are a good antidote for that.
It’s usually the case that the rank and file are a lot worse than the leaders.
I’ve read the article and then also A first lesson in meta-rationality but I must confess I still have no idea what he’s talking about. The accusations of inscrutability seem to be spot on.
Perhaps I should read more about meta-rationality to get it, but just to keep me motivated, can anyone explain in simple terms what the deal is about, or perhaps give me an example of meta-rationalist belief that rationalists don’t share?
I’d say the biggest different you’ll notice that affects the most things is the change in epistemology.
Rationalist epistemology and the epistemology of other similar “rational” systems of thought (cf. scientism, theology) assumes there is a single correct way of understanding the world, which rationalists perhaps having the high ground in viewing the project as finding the correct epistemology regardless of what it implies.
The meta-rationalists/post-modern position is that this is not possible because epistemology necessarily influences ontology so we cannot possibly have a single “correct” understanding of the world. In this view an epistemology and the ontology it produces can at best be useful to some telos (purpose) but we cannot assign one the prime position as the “correct” ontology for metaphysical reality because we have no way to decide what “correct” is that is independent of the epistemology in which we develop our understanding of “correct”. Thus the epistemology of rationality, which seems to target most accurately predicting reality based on known information, is but one useful way of understanding the world within the meta-rationalist/post-modern view, and others may be more useful for serving other purposes.
Both stand in contrast to the pre-rational approach to epistemology which does not assume everything is knowable and will accept mystery where explanation is not available.
Not sure if that really achieves the “simple terms” aim, so maybe I can put it like this:
The pre-rational person can’t know some things. The rational person doesn’t know some things. The meta-rational person knows they can’t know some things.
Thank you, this is a pretty clear explanation. I did read a bit more from meaningness.com yesterday and what I gathered was also pointing in the direction of this sort of meta-epistemological relativism.
However, I still don’t really see a significant disagreement. The map/territory distinction, which I see as one of the key ideas of rationalism, seems to be exactly about this. So I see rationalism as saying “the map is not the territory and you never have unmediated access to the territory but you can make maps that are more or less useful in different contexts for different purposes; here are some tools for mapmaking and updating, and btw, the maps produced by science are great for most purposes, so we mostly use those and build new ones on top of them”.
So with what I learned so far, if I try to formulate the disagreement, it would probably be something like this:
Rationalists: sure, no map is objectively true, but those science maps work really well for most purposes and a lot of people are working on improving them; everyone else would better invest their time by building on top of science maps; also bayesian updating is the best method for deciding how to update your map.
Meta-rationalists: yeah, science maps are pretty awesome, but non-scientific maps work rather well for some people too, so we should pay attention to those as well; bayesian updating is great, but that’s the easy part—the hard part is formulating the hypotheses.
I’m not sure if I’m capturing most of the disagreement here, but at least this part seems to be more about different priorities rather than fundamentally different world views. So there’s no “quantum leap”, that is promised by meta-rationalists, or am I missing something?
There is no such thing as a too little molehill to make a mountain out of. But there are at least two things I noticed you missed here:
First, your description of rationalists is too charitable. On meta-rationalist websites they are typically described as unable to reason about systems, not understanding that their map is not the territory, prone to wishful thinking, and generally as what we call “Vulcan rationalists”. (Usually with a layer of plausible deniability, e.g. on one page it is merely said that rationalists are a subset of “eternalists”, with a hyperlink to other page that describes “eternalists” as having the aforementioned traits. Each of these claims can be easily defended separately, considering that “eternalists” is a made-up word.) With rationalists defined as this, it is easy to see how the other group is superior.
Second, you miss the implication that people disagreeing with meta-rationality are just immature children. There is a development scale from 0 to 5, where meta-rationalists are at level 5, rationalists are at level 4, and everyone else is at some of the lower levels.
Another way to express this is the concept of fluidity/nebulosity/whatever, which works like this: You make a map, and place everyone you know as some specific point on this map. (You can then arrange them into groups, etc.) The important part is that you refuse to place yourself on this map; instead you insist that you are always freely choosing the appropriate point to use in given situation, this getting all the advantages and none of the disadvantages; while everyone else is just hopelessly stuck at their one point. This obviously makes you the coolest guy in the town—of course until someone else comes with their map, where you get stuck at one specific point, and they get to be the one above the map. (In some sense, this is what Eliezer also tried with his “winning” and “nameless virtue”, only to get reduced to “meh, Kegan level 4” regardless.)
While I am sad you’ve gotten this impression of what we’re here calling meta-rationality, I also don’t have a whole lot to say to convince you otherwise. We have often been foolish when first exploring these ideas and write about them in ways that do have status implications and I think we’ve left a bad taste in everyone’s mouths over it, plus there’s an echo of the second-hand post-modernists’ tendency to view themselves as better than everyone else (although to be fair this is nothing new in intellectualism; just the most recent version of it that has a similar form).
That said, I do want to address one point you bring up because it might be a misunderstanding of the meta-rationalist position.
I’m not sure who thinks they have this degree of freedom, but the genesis of the meta-rationalist epistemology is that the map is part of the territory and is thus the map is constrained by the territory and not by an external desire for correspondence or anything else. Thus where we are in the territory greatly influences the kind of map we can draw, to the point that we cannot even hope to draw what we might call an ideal map because all maps will necessarily carry assumptions imposed by the place of observation.
This doesn’t mean that we can always choose whatever perspective to use in a given situation, but rather that we must acknowledge the non-primacy of any particular perspective (unless we impose a purpose against which to judge) and can then, from our relatively small part of the territory from which we can observe to draw our map, use information provided to us by the map to reasonably simulate how the map would look if we could view the territory from a different place and then update our map based on this implied information.
To me it seems rationalists/scientists/theologians/etc. are the ones who have the extra degree of freedom because, although from the inside they restrict themselves to a particular perspective judged on some desirable criteria, those criteria are chosen without being fully constrained, and thus between individuals there is no mechanism of consensus if their preferences disagree. But I understand that from the rationalist perspective this probably looks reversed because by taking the thing that creates different perspectives and puts it in the map a seemingly fundamental preference disagreement becomes part of the perspective.
I think there are plenty of things in LW rationality that point to meta-rationality, and I think that’s why we’re engaged with this community and many people have come to the meta-rationality position through LW rationality (hence even why it’s being called that among other names like post-rationality). That said, interacting with many rationalists (or if we were all being more humble what we might call aspiring rationalists) and talking to them they express having at most episteme of ideas around “winning” and “nameless virtue” and not gnosis. The (aspiring) meta-rationalists are claiming they do have gnosis here, though to be fair we’re mostly offering doxia as evidence because we’re still working on having episteme ourselves.
This need not be true of all self-identified rationalists, of course, but if we are trying to make a distinction between views people seem to hold within the rationalist discourse and “rationalist” is the self-identification term used by many people on one side the the distinction, then choosing another name for those of us who wish to identify on the other side seems reasonable. I myself now try to avoid categorization of people and instead focus on categorization of thought in the language I use to describe these ideas, although I’ve not done that here to remain anchored on the terms already in use in this discussion. I instead like to talk about people thinking in particular ways that the limits those ways of thinking have since we don’t make our thinking, so to speak, but our thinking makes us. This better reflects the way I actually think about these concepts, but unfortunately the most worked out ideas in meta-rational discourse are not evenly distributed yet.
Thank you for more bits of information that answer my original question in this thread. You have my virtual upvote :)
After reading a bit more about meta-rationality and observing how my perspective changes when I try to think this way, I’ve come to an opinion that the “disagreement on priorities”, as I have originally called it, is more significant than I originally acknowledged.
To give an example, if one adopts the science-based map (SBM) as the foundation of their thinking for most practical purposes and only checks the other maps when the SBM doesn’t work (or when modelling other people), they will see the world differently from a person who routinely tries to adopt multiple different perspectives when exploring every problem they face. Even though technically their world views are the same, the different priorities (given that both have bounded computational resources) will lead them to exploring different parts of the solution space and potentially finding different insights. The differences can accumulate through updating in different directions, so, at least in theory, their world views can drift apart to a significant degree.
Again, even though I see this idea as being part (or a trivial consequence) of LW-rationality, focusing your attention on how your map is influenced by where you are in the territory gives new insights.
So my current take aways are: as rationalists that agree with meta-rationalists on (meta-)epistemological foundations we should consider updating our epistemological priorities in the direction that they are advocating; if we can figure out ways to formulate meta-rationalist ideas in a less inscrutable way with less nebulosity, we should do so—it will benefit everyone; we should look into what meta-rationalists have to say about creativity / hypothesis generation—perhaps it will help with formulating a general high level theory of creative thinking (and if we do it in a way that’s precise enough to be programmed into computers, that would be pretty significant).
Ok, I think I get it. So basically, pissing contests put aside, meta-rationalists should probably just concede that LW-style rationalists are also meta-rational and have a constructive discussion about better ways of thinking (I’ve actually seen a bit of this, for example in the comments to this post).
Judging from the tone of your comment, I gather that that’s the opposite of what many of them are doing. Well, that doesn’t really surprise me, but it’s kind of sad.
This is how it seems to me. I may be horribly wrong, of course. But the comments on what you linked...
...are similar to how I often feel. It’s like the meta-rationalists are saying “rationalists are stupid because they don’t see X, Y, Z”, and I am like “but I agree with X, Y, Z, and at least two of them are actually mentioned in the Sequences, so why did you have to start with an assumption that rationalists obviously must be stupid?”
(I had a colleague at one job who always automatically assumed that other people were idiots, so whenever someone was talking about something this colleague knew about, he interrupted him with: “That is wrong. Here is how it actually is: .” And a few times other people were like: “Hey, but you just repeated in different words what he was already saying before your interrupted him!” The guy probably didn’t notice, because he wasn’t paying attention.)
I am aware of my own hostility in this debate, but it is quite difficult for me to be charitable towards someone who pretty much defines themselves as “better than you” (the “meta-” prefix), proceeds with strawmanning you and refuses to update, and concludes that they are morally superior to you (the Kegan scale). Neither of this seems like an evidence that the other side is open to cooperation.
It’s not important whether someone can tell you about how the map isn’t the territory when you ask them, the important thing is how they reason in practice.
You are steelmanning the rationalist position; many rationalists do say, either explicitly or implicitly, that there is one true map, which they more or less identify with the territory, and that they have it.
That could very well be. I had an impression that meta-rationalists are arguing against a strawman, but that would just mean we disagree about the definition of “rationalist position”.
I agree that one-true-map rationalism is rather naive and that there are many people who hold this position, but I haven’t seen much of this on LW. Actually, LW contains the clearest description of the map/territory relationship that I’ve seen, no nebulosity or any of that stuff.
For me, the philosophical implications of: “there is no one true map” was the first quantum leap. How is this statement not a big deal?
That is probably the first post of that site that doesn’t make me want to pull my hair out.
I am too trying to create a taxonomy of the antithetical approaches to rationality.
So far, I’ve classified them in two dimension: the existence of an objective truth (yes/no) and the fallibility of some part of human understanding (yes/no).
On the other hand, I don’t feel that some political movements are against the method of rationality, as much as they are against the actual content of what said method has discovered. If one had the patience and if an alt-righter (say) had “climate change is a hoax” as a true objection, I believe that one could theoretically arrive at an agreement on what are the facts and what is a fair interpretation. Basically, I would just chuck political movements and their associated moral panic as generic tribalism.
I wholeheartedly agree on the last sentence. I also believe that to ‘overcome’ rationality, one needs transfinite computation, so good luck with that.
Why are you trying to create a taxonomy of the antithetical approaches to rationality? What would you do with that once you had it?
I’m not opposed, mind, I just don’t see the use.
Just to locate it amongst the possible systematic approaches to life. That might serve as an introduction, for the general public, to what rationality is and how to practice it; something to put in a book or a YouTube channel, for example.
(Cross-posting my comment on this from the original, because I think it might be of more interest here.)
I might write a more detailed response along these lines, depending on where my thinking takes me. I have previously thought about this issue, and after thinking about it more since reading this yesterday, it still seems to me that meta-rationality is specifically inscrutable because it needs meta-rationality to explain itself.
In fairness, this is a problem for rationality too: it can’t really explain itself in terms of pre-rationality, and from what I can tell we don’t actually know very well how to teach rationality either. STEM education mostly teaches some of the methods of rationality, like how to use logic to manipulate symbols, but tends to do so in a way that ends up domain-restricted. Most STEM graduates are still pre-rational thinkers in most domains of their lives, though they may dress up their thoughts in the language of rationality, and this is specifically what projects like LessWrong are all about: getting people to be actually rational rather than pre-rational in rationalist garb.
But even with CFAR and other efforts, LW seems to be only marginally more successful than most: I know a lot of LW/CFAR folks who have read, written, and thought about rationality a lot, and they still struggle with many of the basics, not only with adopting the rationalist worldview but with at least stopping using the pre-rationalist worldview and noticing when they don’t understand something. To be fair, marginal success is all LW needed to achieve to satisfy its goal of producing a supply of people capable of doing AI safety research, but I think it’s telling that even a project so directed at making rationality learnable has been only marginally successful, and from what I can tell not by making rationality scrutable but by creating lots of opportunities for enlightenment.
Given that we don’t even have a good model of how to make rationality truly scrutable, I’m not sure we can really hope to make meta-rationality scrutable. What seems more likely to me is that we can find ways not of explaining meta-rationality but of training people into it. Of course, this is already what you’re doing with Meaningness, but it’s also why I’m not sure we can do more than what Meaningness has so far been working to accomplish.
But if meta-rationality is inscrutable to rationality, how do you know it even exists? At least Bayesian rationalists can take some solace in Cox’s theorem, or the coherence theorems, or the Church-Turing thesis. What stops me from declaring that there is a sigma-rationality, which is inscrutable to every n-rationality below it? What does meta-rationality even imply for the real world?
You are taking the inscrutability thing a bit too strongly. Someone can transition from rationality to meta-rationality, just as someone can transition from pre-rationality to rationality. Pre-rationality has limitations, because yelling tribal slogans at people who aren’t in your tribe doesn’t work.
What does rationality imply? You can’t actually run Solomonoff Induction, so FAPP (for all practical purposes) you are stuck with a messy plurality of approximations. And if you notice that problem...
You see the holes rationality doesn’t fill and the variables it doesn’t constrain and then you go looking for how you could fill them in and constrain them.
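(For readers who haven’t met it: a minimal sketch of why Solomonoff Induction can’t actually be run, using the standard textbook definition; nothing here is specific to this thread. The Solomonoff prior weights every program that could have produced your observations:

    M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

where U is a universal prefix Turing machine, the sum ranges over all programs p whose output starts with the observed string x, and |p| is the length of p in bits. Evaluating the sum exactly would require knowing which programs halt, i.e., solving the halting problem, so any bounded agent is left choosing among computable approximations: the “messy plurality” above.)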
Nothing. We are basically saying that we’re in the position of applying a theory of types to ontology, and meta-rationality is just one layer higher than rationality. We could of course go on forever, but, being bounded, that’s not an option. There is some kind of meta-meta-rationality ontological type, and so on up for any n, but working with it is another matter.
But once you realize you’re in this position, you notice that type theory doesn’t work so well, and maybe you want something else instead. Maybe the expressive power of self-referential theories isn’t so bad after all, although when working with these theories it’s pretty helpful if you can work out a few layers of self-reference before trying to collapse them, because otherwise you definitely can’t hope to notice when you’ve traded consistency for completeness.
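(As a toy illustration of the “one layer higher, for any n” picture, and purely my analogy rather than anything the commenters wrote: a stratified type theory such as Lean’s keeps each layer from containing itself, which is exactly the discipline that blocks self-reference.)

    -- Lean 4: universes form an infinite hierarchy; each `Type n`
    -- is itself a term of `Type (n+1)`, never of itself.
    #check (Nat : Type)       -- ordinary data lives in Type 0
    #check (Type : Type 1)    -- Type 0 is an object of Type 1
    #check (Type 1 : Type 2)  -- ...and so on, for any n

    -- Asking a universe to contain itself is rejected; this
    -- stratification is what avoids Girard's paradox:
    -- #check (Type : Type)   -- error: type mismatch

The cost of that safety is the regress the comment describes: there is always another layer, and no layer can survey the whole tower.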
How about making an “ideological Turing test”? If rationalists could successfully pretend to be meta-rationalists, would that count as a refutation of the claim that meta-rationalists understand things that are beyond understanding of mere rationalists?
Or is even this just a rationalist-level reasoning that from a meta-rationalist point of view makes about as much sense as a hypothetical pre-rationalist asking rationalists to produce a superior horoscope?
The point about meta-rationality, as Chapman describes it, is that it isn’t a single point of view.
If you wanted to run an ideological Turing test on Chapman, you would need to account for his Tantric Buddhist background. Most rationalists would likely fail an ideological Turing test for Tantric Buddhism, but that wouldn’t mean much for the core thesis.
At first I was going to say “yes” to your idea, but with the caveat that the only folks I’d trust to judge this are other folks we’d agree are meta-rationalists. But then this sort of defeats the point, doesn’t it? I already believe rationalists couldn’t do this, and if they did, it would in fact be evidence that, even if they don’t call themselves meta-rationalists, they have thought processes similar to those who do.
“Rationalist” and “meta-rationalist” are mostly labels for stochastic clusters around the complexity of the thinking people do. No one properly is or is not a rationalist or meta-rationalist; at best, someone can be sufficiently well described as one.
I don’t mean this to be wily: I think what you are asking for (and the entire idea of an “ideological Turing test” itself) confounds causality in ways that make it only seem to work from rationalist-level reasoning. From my perspective, the taking on of another’s perspective in this test is already incorporated into meta-rationalist-level reasoning, and so it is not really a test of meta-rationality, in the same way a “logical argument test” would be meaningless to a rationalist but a powerful tool for more complex thought for a pre-rationalist.
Maybe the short version of this is: meta-rationalists can’t do what rationalists ask, but that’s okay because neither can rationalists perform the analogous task for pre-rationalists, so asking meta-rationality to be explicable in terms of rationality is epistemically unfair and asking for too much proof.