David Chapman has issued something of a challenge to those of us thinking in the space of what he calls the meta-rational, what many call the post-modern, and what I call the holonic. He thinks we can and should be less opaque, more comprehensible, and less inscrutable (specifically, less inscrutable to rationalism and rationalists).
Ignorant, irrelevant, and inscrutable (meaningness.com): “I have changed my mind. It should go without saying that rationality is better than irrationality. But now I realize…”
I’ve thought about this issue a lot. My previous blogging project hit a dead end when I reached the point of needing to explain holonic thinking. Around this time I contracted obscurantism and spent several months only sharing my philosophical writing with a few people on Facebook in barely decipherable stream-of-consciousness posts. But during this time I also worked on developing a pedagogy, manifested in a self-help book, that would allow people to follow in my footsteps even if I couldn’t explain my ideas. That project produced three things: an unpublished book draft, one mantra of advice, and a realization that the way can only be walked, not followed. So when I returned to blogging here on Medium my goal was not to be deliberately obscure, but also not to be reliably understood. I had come to terms with the idea that my thoughts might never be fully explicable, but I could at least still write for those without too much dust in their eyes.
The trouble is that holonic thought is necessarily inscrutable without the use of holons, and history shows this makes it very difficult to teach or explain holonic thinking to others. For example, the first wave of post-modernists like Foucault, Derrida, and Lyotard applied Heidegger’s phenomenological epistemology to develop complex, multi-faceted understandings of history, literature, and academic culture. Unfortunately they did this in an environment of high modernism where classical rationalism was taken for granted, so they failed to notice they were building off the strengths of modernism even as they derided its weaknesses. Consequently they focused so much on teaching the subjectivity of experience that they forgot to impress upon their students that it was subjective experience of an external reality, and so left them with an intellectual tradition now widely regarded as useless for anything other than status signaling.
In comparison Buddhist traditions have, to the extent that bodhi is synonymous with meta-rationality and holonic thinking, done a better job of teaching post-modernism than the post-modernists did. Indic philosophical traditions hit upon post-modern ideas at least as early as the Axial Age, and they became central to Buddhist philosophy around 200 CE. I’d argue the sutras of Buddhism do no better than the texts of the post-modernists at teaching holon-level thinking, but over the centuries Buddhist schools also developed tantric instruction that created environments in which practitioners were able to play and later work with holons. It appears to me these “esoteric” techniques tapped into the same developmental psychology operating in personal growth, and in doing so created lineages that provide paths to holon-level thinking that people traverse to this day.
I suspect the key differentiator between the experiential learning of personal growth and tantra and the textual learning of academia and sutra is the focus on gnosis over episteme, and this suggests why the meta-rational is inscrutable to rationalism, but I’ll do one better and prove it. To do that it will suffice to show that there exists at least one meta-rational idea that cannot be made scrutable to rationalism. I choose meta-rational epistemology.
Rational/modern/system-relationship epistemology aims to be consistent and complete, meaning it produces a complete ontology. To the extent that there is any disagreement over system-relationship epistemology, it is disagreement over how to compute the correct ontology. Meta-rational/post-modern/holonic epistemology denies the possibility of a complete ontology via consistent epistemology, because epistemology necessarily influences ontology. That is to say, even if some epistemology is consistent, it cannot be complete, because it cannot prove its own consistency; thus no consistent epistemology can produce a complete ontology. Instead we might have a complete but inconsistent meta-epistemology that chooses between consistent epistemologies in different situations based on telos, like a desire for correspondence to reality or for telling an interesting story. But telos asks us to make an axiological evaluation, not an epistemic one, and thus we are forced to admit that even our consistent and complete meta-epistemology needs a free variable, hence cannot actually be complete.
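The “cannot prove its own consistency” step borrows the shape of Gödel’s incompleteness theorems; treating an epistemology as a formal theory is an analogy I’m making explicit, not something the theorems themselves license. For any consistent, recursively axiomatizable theory $T$ that interprets Peano arithmetic, they read:

```latex
\begin{align*}
\textbf{(G1)}\quad & \exists\, G_T:\; T \nvdash G_T \ \text{and}\ T \nvdash \neg G_T
  && \text{($T$ is incomplete)}\\
\textbf{(G2)}\quad & T \nvdash \mathrm{Con}(T)
  && \text{($T$ cannot prove its own consistency)}
\end{align*}
```

The analogy: a consistent epistemology strong enough to reason about itself cannot certify its own consistency from the inside, so something outside it must be invoked.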
In this way holonic epistemology is necessarily inscrutable to system-relationship epistemology, because it explicitly demands the latter do something it explicitly cannot. To be fair, system-relationship epistemology does the same thing to pre-rational/traditional/system epistemology by demanding a consistency the latter cannot tolerate, because it would violate its internal completeness. I think this is infrequently acknowledged because, if you grew up in the shadow of the modern world, you probably didn’t notice when modernity demanded this of you. And unless you learned to ignore the problem, the modern world constantly gives you opportunities to experience the system-relationship level of complexity and obtain gnosis of it. But obtaining gnosis of the post-modern and holonic seems to require that a great tragedy befall you, or that you have enough dedication to tolerate the pain of finding it. So beyond better building the episteme of holons for those few with more than doxa of them, I’m doubtful that being less inscrutable will accomplish much of what Chapman seems to hope it will.
Here is my attempt to summarize “what are the meta-rationalists trying to tell rationalists”, as I understood it from the previous discussion, this article, and some articles linked by this article, plus some personal attempts to steelman:
1) Rationalists have a preference for living in far mode, that is studying things instead of experiencing things. They may not endorse this preference explicitly, they may even verbally deny it, but this is what they typically do. It is not a coincidence that so many rationalists complain about akrasia; motivation resides in near mode, which is where rationalists spend very little time. (And the typical reaction of a rationalist facing akrasia is: “I am going to read yet another article or book about ‘procrastination equation’; hopefully that will teach me how to become productive!” which is like trying to become fit by reading yet another book on fitness.) At some moment you need to stop learning and start actually doing things, but rationalists usually find yet another excuse for learning a bit more, and there is always something more to learn. They even consider this approach a virtue.
Rationalists are also more likely to listen to people who got their knowledge from studying, as opposed to people who got their knowledge by experience. Incoming information must at least pretend to be scientific, or it will be dismissed without a second thought. In theory, one should update on all available evidence (although not equally strongly), and not double-count any. In practice, one article containing numbers or an equation will always beat unlimited amounts of personal experience.
2) Despite admitting verbally that a map is not the territory, rationalists hope that if they take one map, and keep updating it long enough, this map will asymptotically approach the territory. In other words, that in every moment, using one map is the right strategy. Meta-rationalists don’t believe in the ability to update one map sufficiently (or perhaps just sufficiently quickly), and intentionally use different maps for different contexts. (Which of course does not prevent them from updating the individual maps.) As a side effect of this strategy, the meta-rationalist is always aware that the currently used map is just a map; one of many possible maps. The rationalist, having invested too much time and energy into updating one map, may find it emotionally too difficult to admit that the map does not fit the territory, when they encounter a new part of territory where the existing map fits poorly. Which means that on the emotional level, rationalists treat their one map as the territory.
Furthermore, meta-rationalists don’t really believe that if you take one map and keep updating it long enough, you will necessarily asymptotically approach the territory. First, the incoming information is already interpreted by the map in use; second, the instructions for updating are themselves contained in the map. So it is quite possible that different maps, even after updating on tons of data from the territory, would still converge towards different attractors. And even if, hypothetically, given infinite computing power, they would converge towards the same place, it is still possible that they will not come sufficiently close during one human life, or that a sufficiently advanced map would not fit into a human brain. Therefore, using multiple maps may be the optimal approach for a human. (Even if you choose “the current scientific knowledge” as one of your starting maps.)
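The “different attractors” claim can be sketched concretely. In this toy example (the choice of maps and of the skewed territory is mine, purely for illustration), two maps fit the same stream of data, yet their location estimates settle on different values forever, because each map’s likelihood picks out a different “best” summary of the territory:

```python
import random
import statistics

random.seed(0)
# "Territory": a skewed data-generating process, Exp(1).
data = [random.expovariate(1.0) for _ in range(100_000)]

# Map A: Gaussian likelihood -> the maximum-likelihood location is the mean.
map_a = statistics.fmean(data)

# Map B: Laplace likelihood -> the maximum-likelihood location is the median.
map_b = statistics.median(data)

# Both maps updated on identical data, yet converge to different attractors:
# the mean of Exp(1) is 1.0, while its median is ln 2 ~ 0.693.
print(map_a, map_b)
```

More data only sharpens each map’s estimate around its own attractor; it never makes the two maps agree, because the disagreement comes from the maps themselves, not from a shortage of evidence.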
3) There is an “everything of everythings”, exceeding all systems, something like the highest level Tegmark multiverse only much more awesome, which is called “holon”, or God, or Buddha. We cannot approach it in far mode, but we can… somehow… fruitfully interact with it in near mode. Rationalists deny it because their preferred far-mode approach is fruitless here. But you can still “get it” without necessarily being able to explain it by words. Maybe it is actually inexplicable by words in principle, because the only sufficiently good explanation for holon/God/Buddha is the holon/God/Buddha itself. If you “get it”, you become the Kegan-level-5 meta-rationalist, and everything will start making sense. If you don’t “get it”, you will probably construct some Kegan-level-4 rationalist verbal argument for why it doesn’t make sense at all.
How well did I do here?
Thanks for taking the time to engage in this way! I’m really enjoying discussing these ideas here lately.
1 and 2 are spot on as long as we are keeping in mind we’re talking about patterns in the behavior of LW rationalists and not making categorical claims about ideal rationalists. I don’t think you are; it’s just an important distinction to note, one that has proven helpful to point out so I don’t end up being interpreted as saying ridiculous things like “yes, every last damn rationalist is X”.
Your take on 3 is about as good as I’ve got. I continue to try to figure this out because all I’ve got now is a strong hunch that there is some kind of interconnectedness of things that runs so deep you can’t escape it, but my strongest evidence in favor is that it looks like we have to conclude this because all other metaphysics fail to fully account for reality as we find it. But I could easily be wrong, because I expect there are more things I don’t notice I’m assuming, or some reasoning I’ve made is faulty.
To me #1 and #2 feel decently accurate but #3 maps to little that is in my sphere of thought.
As far as #1 goes, it’s worth pointing out that there are plenty of people in the Kegan 4 rational sense who don’t have any problem with being productive.
As Chapman said, explaining this is really hard, and I don’t think I know of a good way either, but it might be productive for you to look more into the concept of “relationship”.
It’s an important primitive for me and my general experience from doing Circling with rationalists is that the concept is elusive for most of them.
Gordon Worley uses the word “relationship” quite a lot in https://medium.com/@gworley3/holonic-integration-927ba21d774b . It’s also important in Tantra.
Points 1 and 2 are critiques of the rationalist community that have been around since the inception of LW (as witnessed by the straw Vulcan / hot iron approaching metaphors), so I question whether they usefully distinguish meta- from plain rationalists.
Point 3 is more helpful in this regard, but if anyone made that claim I would ask them to point to what differences such a belief implies… I find it very hard to believe in something that is both inscrutable and unnoticeable.
Maybe the distinction is in noticing it enough and doing something about it. It is very common to say “yeah, that’s a problem, let’s put it in a box to be dealt with later” and then forget about it.
Lots of people noticed the Newton/Maxwell disparities in the 1900s, but Einstein noticed them enough.
“The “controversy” was quite old in 1905. Maxwell’s equations were around since 1862 and Lorentz transformations had been discussed at least since 1887. You are absolutely correct that Einstein had all the pieces in his hand. What was missing, and what he supplied, was an authoritative verdict over the correct form of classical mechanics. Special relativity is therefore less of a discovery than it is a capping stone explanation put on the facts that were on the table for everyone to see. Einstein, however, saw them more clearly than others.”
https://physics.stackexchange.com/questions/133366/what-problems-with-electromagnetism-led-einstein-to-the-special-theory-of-relati
Inscrutable and unnoticeable to whom?
Your example, in my opinion, disproves your point. Einstein did not simply notice the discrepancies: he constructed a coherent explanation that accounted for both the old model and the discrepancies. It unified both models under one map. Do you feel that meta-rationalists have a model of intention-implementation and map generation that is coherent with the naive model of a Bayesian agent?
A meta-rationalist is like a physicist from the 19th century who, having noticed the dual nature of light, called himself a meta-physicist because he used two maps for the phenomenon of light. Instead, the true revolution, quantum mechanics, happened when the two conflicting models were united under one explanation.
It’s a matter of degree: the more people who have independent access to the phenomenon, the more confidence I would give to its existence. If it’s only one person, and said person cannot communicate it and does not behave any differently… well, I would equate its existence to that of the invisible and intangible dragon.
I wasn’t making a point about meta-rationality versus rationality, I was making a point about noticing-and-putting-on-a-shelf versus noticing-and-taking-seriously. Every Christian has noticed the problem of evil...in the first sense.
You need to distinguish between phenomena (observations, experiences) and explanations. Even something as scientifically respectable as Tegmark’s multiverse, or the many-worlds interpretation (MWI), isn’t supposed to be supported by some unique observation; they are supposed to be better explanations, in terms of simplicity, generality, consilience, and so on, of the same data. MWI has to give the same predictions as the Copenhagen interpretation (CI).
You also need to distinguish between belief and understanding. Any kind of fundamentally different, new, or advanced understanding cannot be completely communicable and comprehensible to the N−1 level; otherwise it would not be fundamentally new. It is somewhere between pointless and impossible to believe in advanced understanding on the basis of faith. Sweepingly rejecting the possibility of advanced understanding proves too much, because PhD maths is advanced understanding compared to high school maths, and so on.
You are not being invited to have a faith-like belief in things that are undetectable and incomprehensible to anybody, you are being invited to widen your understanding so that you can see for yourself.
Right. Let’s say that there are (at least) three levels of noticing a discrepancy in a model:
1 - noticing, shrugging and moving on
2 - noticing and claiming that it’s important
3 - noticing, claiming that it’s important, and creating something new in response (‘something’ can be a new institution, a new model, etc.)
We both agree that LW rationalists are mostly at level 1. We both agree that meta-rationalists are at level 2. I also claim that meta-rationalists claim to be at level 3, while they are not.
This is also right. But at the same time, I haven’t seen any proof that meta-rationalists have offered god as a simplifying hypothesis of some unexplained phenomenon that wasn’t trivial.
I think this is our true disagreement. I reject your thesis: there is nothing that is inherently mysterious, not even relatively. I think that any idea is either incoherent, comprehensible or infinitely complex.
Math is an illustration of this classification: it exists exactly at the level of being comprehensible. We see levels because we break down a lot of complexity in stages, so that you manipulate the simpler levels first, and when you get used to them, you move on to more complex matters. But the entire raison d’être of mathematics is that everything is reducible to the trivial; it just takes hundreds of pages more.
Maybe meta-rationalists have yet to unpack their intuitions: it happens all the time that someone has a genius idea that only later gets unpacked into simpler components. So kudos to the idea of destroying inscrutability (I firmly believe that destroying inscrutability would destroy meta-rationalism), but claiming that something is inherently mysterious… that runs counter to epistemic hygiene.
Can you support that? I rather suspect you are confusing new in the historical sense with new-to-rationalists. Bay Area rationalism claims to be new, but is in many respects a rehash of old ideas like logical positivism. Likewise, meta-rationalism is old, historically.
There’s a large literature on that sort of subject. Meta-rationality is not something Chapman invented a few years ago.
You still have relative inscrutability, because advanced maths isn’t scrutable to everybody.
Nobody said that.
I’m not sure that’s true. CFAR (as one of the institutions of Bay Area rationalism) puts a lot of value on System 1 and System 2 being friends. Even if we just look at rationality!Eliezer, Eliezer argued for the many-worlds interpretation in a way that runs counter to logical positivism.
My take is that LP is the official doctrine, and MWI is an unwitting exception.
I don’t think MWI is an exception to Eliezer’s other stated views about epistemology. He isn’t naive about epistemology, and he thinks that the fact that MWI is coherent in some sense is reason to believe in it even when there’s no experiment that could be run to prove it.
He’s naive enough to reinvent LP. And since when was “coherent, therefore true” a precept of his epistemology?
Now I understand that we are talking with two completely different frames of reference.
When I write about meta-rationalists, I’m specifically referring to Chapman and Gworley and the like. You obviously have a much wider tradition in mind, on which I don’t necessarily have an opinion. Everything I said should be restricted to this much smaller context.
On other points of your answer:
- yes, there are important antecedents, but there are important novelties too;
- an identification of what you consider the relevant corpus of ‘old’ meta-rationality would be appreciated, especially regarding deity as a nontrivial simplifying hypothesis;
- inherent mysteriousness is claimed in the post linked on this page, first paragraph: “I had come to terms with the idea that my thoughts might never be fully explicable”.