Here is my attempt to summarize “what are the meta-rationalists trying to tell to rationalists”, as I understood it from the previous discussion, this article, and some articles linked by this article, plus some personal attempts to steelman:
1) Rationalists have a preference for living in far mode, that is, studying things rather than experiencing them. They may not endorse this preference explicitly, and may even verbally deny it, but it is what they typically do. It is not a coincidence that so many rationalists complain about akrasia; motivation resides in near mode, which is where rationalists spend very little time. (The typical reaction of a rationalist facing akrasia is: “I am going to read yet another article or book about the ‘procrastination equation’; hopefully that will teach me how to become productive!”, which is like trying to become fit by reading yet another book on fitness.) At some point you need to stop learning and start actually doing things, but rationalists usually find yet another excuse for learning a bit more, and there is always something more to learn. They even consider this approach a virtue.
Rationalists are also more likely to listen to people who got their knowledge from studying, as opposed to people who got their knowledge from experience. Incoming information must at least pretend to be scientific, or it will be dismissed without a second thought. In theory, one should update on all available evidence (although not equally strongly), and not double-count any. In practice, one article containing numbers or an equation will always beat unlimited amounts of personal experience.
2) Despite admitting verbally that the map is not the territory, rationalists hope that if they take one map and keep updating it long enough, the map will asymptotically approach the territory. In other words, that at every moment, using one map is the right strategy. Meta-rationalists don’t believe in the ability to update one map sufficiently (or perhaps just sufficiently quickly), and intentionally use different maps for different contexts. (Which of course does not prevent them from updating the individual maps.) As a side effect of this strategy, the meta-rationalist is always aware that the currently used map is just a map; one of many possible maps. The rationalist, having invested too much time and energy into updating one map, may find it emotionally too difficult to admit that the map does not fit the territory when they encounter a new part of the territory where the existing map fits poorly. Which means that on the emotional level, rationalists treat their one map as the territory.
Furthermore, meta-rationalists don’t really believe that if you take one map and keep updating it long enough, you will necessarily asymptotically approach the territory. First, the incoming information is already interpreted by the map in use; second, the instructions for updating are themselves contained in the map. So it is quite possible that different maps, even after updating on tons of data from the territory, would still converge towards different attractors. And even if, hypothetically, given infinite computing power, they would converge towards the same place, it is still possible that they will not come sufficiently close during one human life, or that a sufficiently advanced map would not fit into a human brain. Therefore, using multiple maps may be the optimal approach for a human. (Even if you choose “the current scientific knowledge” as one of your starting maps.)
3) There is an “everything of everythings”, exceeding all systems, something like the highest-level Tegmark multiverse only much more awesome, which is called “holon”, or God, or Buddha. We cannot approach it in far mode, but we can… somehow… fruitfully interact with it in near mode. Rationalists deny it because their preferred far-mode approach is fruitless here. But you can still “get it” without necessarily being able to explain it in words. Maybe it is actually inexplicable in words in principle, because the only sufficiently good explanation for holon/God/Buddha is the holon/God/Buddha itself. If you “get it”, you become a Kegan-level-5 meta-rationalist, and everything will start making sense. If you don’t “get it”, you will probably construct some Kegan-level-4 rationalist verbal argument for why it doesn’t make sense at all.
How well did I do here?
Thanks for taking the time to engage in this way! I’m really enjoying discussing these ideas here lately.
1 and 2 are spot on, as long as we keep in mind that we’re talking about patterns in the behavior of LW rationalists and not making categorical claims about ideal rationalists. I don’t think you are; it’s just an important distinction to note, one that has proven helpful for me to point out so I don’t end up being interpreted as saying ridiculous things like “yes, every last damn rationalist is X”.
Your take on 3 is about as good as I’ve got. I continue to try to figure this out, because all I’ve got now is a strong hunch that there is some kind of interconnectedness of things that runs so deep you can’t escape it, but my strongest evidence in favor is that it looks like we have to conclude this because all other metaphysics fail to fully account for reality as we find it. But I could easily be wrong, because I expect there are more things I don’t notice I’m assuming, or some reasoning I’ve made is faulty.
To me #1 and #2 feel decently accurate but #3 maps to little that is in my sphere of thought.
As far as #1 goes, it’s worth pointing out that there are plenty of people in the Kegan 4 rational sense who don’t have any problem with being productive.
As Chapman said, explaining this is really hard and I don’t think I know of a good way either, but it might be productive for you to look more into the concept of “relationship”.
It’s an important primitive for me and my general experience from doing Circling with rationalists is that the concept is elusive for most of them.
Gordon Worley uses the word “relationship” quite a lot in https://medium.com/@gworley3/holonic-integration-927ba21d774b . It’s also important in Tantra.
Points 1 and 2 are critiques of the rationalist community that have been around since the inception of LW (as witnessed by the straw Vulcan / hot-iron-approaching metaphors), so I question whether they usefully distinguish meta-rationalists from plain rationalists.
Point 3 is more helpful in this regard, but if anyone made that claim, I would ask them to point to what differences such a belief implies… I find it very hard to believe in something that is both inscrutable and unnoticeable.
Maybe the distinction is in noticing it enough and doing something about it. It is very common to say “yeah, that’s a problem, let’s put it in a box to be dealt with later” and then forget about it.
Lots of people noticed the Newton/Maxwell disparities in the 1900s, but Einstein noticed them enough.
“The ‘controversy’ was quite old in 1905. Maxwell’s equations had been around since 1862, and Lorentz transformations had been discussed at least since 1887. You are absolutely correct that Einstein had all the pieces in his hand. What was missing, and what he supplied, was an authoritative verdict on the correct form of classical mechanics. Special relativity is therefore less a discovery than a capstone explanation put on facts that were on the table for everyone to see. Einstein, however, saw them more clearly than others.”
https://physics.stackexchange.com/questions/133366/what-problems-with-electromagnetism-led-einstein-to-the-special-theory-of-relati
Inscrutable and unnoticeable to whom?
Your example, in my opinion, disproves your point. Einstein did not simply notice them: he constructed a coherent explanation that accounted for both the old model and the discrepancies. It unified both models under one map. Do you feel that meta-rationalists have a model of intention-implementation and map generation that is coherent with the naive model of a Bayesian agent?
A meta-rationalist is like a 19th-century physicist who, having noticed the dual nature of light, called himself a meta-physicist because he uses two maps for the phenomenon of light. Instead, the true revolution, quantum mechanics, happened when the two conflicting models were united under one explanation.
It’s a matter of degree: the more people who have independent access to the phenomenon, the more confidence I would give to its existence. If it’s only one person, and said person cannot communicate it nor behaves any differently… well, I would equate its existence to that of the invisible and intangible dragon.
I wasn’t making a point about meta-rationality versus rationality, I was making a point about noticing-and-putting-on-a-shelf versus noticing-and-taking-seriously. Every Christian has noticed the problem of evil… in the first sense.
You need to distinguish between phenomena (observations, experiences) and explanations. Even something as scientifically respectable as Tegmark’s multiverse, or MWI, isn’t supposed to be supported by some unique observation; they are supposed to be better explanations, in terms of simplicity, generality, consilience, and so on, of the same data. MWI has to give the same predictions as CI.
You also need to distinguish between belief and understanding. Any kind of fundamentally different, new, or advanced understanding has to be not completely communicable and comprehensible at level N−1, otherwise it would not be fundamentally new. It is somewhere between pointless and impossible to believe in advanced understanding on the basis of faith. Sweepingly rejecting the possibility of advanced understanding proves too much, because PhD maths is advanced understanding compared to high-school maths, and so on.
You are not being invited to have a faith-like belief in things that are undetectable and incomprehensible to anybody, you are being invited to widen your understanding so that you can see for yourself.
Right. Let’s say that there are (at least) three levels of noticing a discrepancy in a model:
1 - noticing, shrugging and moving on
2 - noticing and claiming that it’s important
3 - noticing, claiming that it’s important, and creating something new about it (‘something’ can be a new institution, a new model, etc.)
We both agree that LW rationalists are mostly at level 1. We both agree that meta-rationalists are at level 2. I also claim that meta-rationalists claim to be at level 3, while they are not.
This is also right. But at the same time, I haven’t seen any proof that meta-rationalists have offered God as a simplifying hypothesis of some non-trivial unexplained phenomenon.
I think this is our true disagreement. I reject your thesis: there is nothing that is inherently mysterious, not even relatively. I think that any idea is either incoherent, comprehensible, or infinitely complex.
Math is an illustration of this classification: it exists exactly at the level of being comprehensible. We see levels because we break down a lot of complexity in stages, so that you manipulate the simpler levels first, and when you get used to them, you move on to more complex matters. But the entire raison d’être of mathematics is that everything is reducible to the trivial; it just takes hundreds of pages more.
Maybe meta-rationalists have yet to unpack their intuitions: it happens all the time that someone has a genius idea that only later gets unpacked into simpler components. So kudos to the idea of destroying inscrutability (I firmly believe that destroying inscrutability will destroy meta-rationalism), but claiming that something is inherently mysterious… that runs counter to epistemic hygiene.
Can you support that? I rather suspect you are confusing new in the historical sense with new-to-rationalists. Bay Area rationalism claims to be new, but is in many respects a rehash of old ideas like logical positivism. Likewise, meta-rationalism is old, historically.
There’s a large literature on that sort of subject. Meta-rationality is not something Chapman invented a few years ago.
You still have relative inscrutability, because advanced maths isn’t scrutable to everybody.
Nobody said that.
I’m not sure that’s true. CFAR (as one of the institutions of Bay Area rationalism) puts a lot of value on System 1 and System 2 being friends. Even when we just look at rationality!Eliezer, Eliezer argued for the many-worlds interpretation in a way that runs counter to logical positivism.
My take is that LP is the official doctrine, and MWI is an unwitting exception.
I don’t think MWI is an exception to Eliezer’s other stated views about epistemology. He isn’t naive about epistemology, and he thinks that the fact that MWI is coherent in some sense is reason to believe in it even when there’s no experiment that could be run to prove it.
He’s naive enough to reinvent LP. And since when was “coherent, therefore true” a precept of his epistemology?
Now I understand that we are talking with two completely different frames of reference.
When I write about meta-rationalists, I’m specifically referring to Chapman and Gworley and the like. You have obviously a much wider tradition in mind, on which I don’t necessarily have an opinion. Everything I said needs to be restricted to this much smaller context.
On other points of your answer:
yes, there are important antecedents, but important novelties too;
an identification of what you consider to be the relevant corpus of ‘old’ meta-rationality would be appreciated, particularly regarding deity as a simplifying non-trivial hypothesis;
about inherent mysteriousness, it’s claimed in the post linked on this page, first paragraph: “I had come to terms with the idea that my thoughts might never be fully explicable”.