A fully meta-rational workplace is still sorta waffly about how you actually accomplish the thing, but it feels like an okay example of meta-rationality as “the thing you do when you come up with the rules, procedures and frameworks for (Chapman’s) rational level at the point of facing undifferentiated reality without having any of those yet”.
People have argued that this is still just rationality in the LessWrong sense, but I think Chapman is onto something: the rules-procedures-and-frameworks layer is very teachable and generally explicable, while the part where you first come up with new frameworks, or spontaneously recognize existing frameworks as applying or not applying to a novel situation, is also obviously necessary but much more mysterious, both in how you can teach it and in exactly which mental motions you go through when doing it.
I don’t think it’s so much that it’s more “mysterious” in how you teach it or how you do it; it’s that it’s a large conceptual shift that is difficult to see without a lot of doing.
In physics, everything you do is figuring out how to solve problems. You start with abstract principles like “F = ma” or Lagrangians, spend entire semesters playing with their implications and how to use them to solve various types of problems, and then sit a final full of problems you’ve never seen before, which you have to figure out how to solve on the fly.
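To make “playing with the implications” concrete, here is the kind of move such a final expects, using a standard textbook case (the simple pendulum): you write down the Lagrangian and turn the crank of the Euler–Lagrange equation instead of recalling the answer:

$$\mathcal{L} = \tfrac{1}{2} m l^2 \dot{\theta}^2 + m g l \cos\theta, \qquad \frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot{\theta}} - \frac{\partial \mathcal{L}}{\partial \theta} = m l^2 \ddot{\theta} + m g l \sin\theta = 0 \;\Rightarrow\; \ddot{\theta} = -\frac{g}{l}\sin\theta.$$

The crank itself is pure rule-following; the hard-to-teach part is recognizing that this machinery applies at all and choosing $\theta$ as the coordinate.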
In engineering, it’s all “Here’s the equation for this, now apply it six times fast”. It’s very different conceptually, and if you try to explain the difference between “knowing how to solve the problem” and “knowing how to figure out how to solve the problem”, you’ll notice that it is extremely difficult to explain—not because it’s “mysterious”, but because the engineering professor has no experience of “trying to figure out novel problems based on principles we’re taught”, and so he has no pattern to match it to. There’s just no place in his brain to put the concept. You really do have to gesture vaguely, and then say “GO DO THINGS YOU DON’T KNOW HOW TO DO”, and guide them to reflect on what they’re doing when they don’t know what they’re doing.
“This is still just ‘rationality’ in the LW sense” is true, and Chapman underestimates how good LWers are at this, but he does have a point: the distinction between “rationality” and “meta-rationality” is an important one, and we don’t really have a concept for it here. As a result, we can’t intentionally do things to build meta-rationality the way we could if we were aware of what we were doing and why.
You really do have to gesture vaguely, and then say “GO DO THINGS YOU DON’T KNOW HOW TO DO”, and guide them to reflect on what they’re doing when they don’t know what they’re doing.
This is pretty much what I’m referring to as the “mystery”. It’s not that it’s fundamentally obscure; it’s that the expected teaching contract of “I will tell you how to do what I expect you to do in clear language” breaks down at this point. Instead, you would need to say: “I’ve been giving you many examples that work backwards from a point where the problem has already been recognized and a suitable solution context has been recalled or invented. Your actual work is going to involve recognizing problems, matching them with solution contexts, and, if you have an advanced position, coming up with new solution frameworks. I have given you very little actionable advice for doing this part because I don’t know how to describe how it’s done, and neither do other teachers. I basically hope that your brain somehow picks up on how to do this part on its own after you have worked through many exercises, but it’s entirely possible that it fails to do that for one reason or another, and then you might be out of luck for working in this field.” I don’t think I’ve ever seen a teacher actually spell things out like this, and none of this seems to be anything like common knowledge.
That’s definitely a thing too, but I’m saying that “I don’t know how to describe how it’s done” isn’t the only factor here. Often you do know how to describe how it’s done, and you can converse in great detail about exactly how it’s done with others who “get it”, but teaching is still difficult because the student lacks the prerequisite concepts and so cannot assemble the explanation out of them.
So for example, say I wanted to teach you how to make an apple pie, but you didn’t know what an apple was. If I’m lucky, I could describe its shape/size/color/taste and you’d say “Ah! I know those! We call them something else.” If I’m unlucky, I say “Well, it’s a round fruit…” and you cut me off with “Wtf is ‘round’? Also, wtf is ‘fruit’?”. At some point, if you just don’t have language at all, I’m going to have to go get an apple and point to it while saying “apple!”. Then I’m going to have to go get another, different apple, and point to that too. And then point to a nectarine and shake my head while saying “No apple!”.
It’s not enough for me to have a language that describes what I’m doing if you do not share it. I simply cannot describe the recipe for an apple pie to you until you have concepts for the individual ingredients, and if you don’t have those, or the concepts for the sub-ingredients needed to build them, then I’m going to have to start pointing and saying “yes apple” or “no apple” until I’m pretty sure you know what an apple is.
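(A toy way to see what the pointing game transmits, sketched as code: ostensive teaching is essentially supervised learning from labeled examples. Everything below, the feature space and the numbers, is invented purely for illustration.)

```python
# Toy sketch: ostensive teaching as learning from labeled examples.
# The feature space (roundness, skin smoothness, size) is hypothetical.

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(shown, query):
    """1-nearest-neighbor: a new object gets the label of the most
    similar object the teacher has already pointed at."""
    nearest = min(shown, key=lambda ex: dist(ex[0], query))
    return nearest[1]

shown = [
    ((0.90, 0.80, 0.50), "apple"),      # teacher points: "apple!"
    ((0.85, 0.75, 0.45), "apple"),      # a different apple: "apple!"
    ((0.88, 0.95, 0.40), "not apple"),  # the nectarine: "No apple!"
]

print(classify(shown, (0.87, 0.78, 0.48)))  # -> "apple"
```

The nectarine matters: without a negative example, the student has no way to tell which features of the two apples were the ones that counted.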
It can get tricky sometimes when people think that they have the necessary ingredients, but actually carve up the conceptual space wrong—especially if the carving is kinda similar in a lot of ways, but lacks the underlying structure that connects it with the pieces you’d like to teach.
It’s tricky because it’s not about a single tree of categories. There are multiple types of relationships (A causes B, A builds B, A manages B, A does something to B, A describes B, etc.) which form multiple hierarchies (a holarchy). This means that the space of concepts (objects/categories) is multidimensional, something like 10-dimensional (the exact number is irrelevant; it just illustrates the scale). As a result, people cannot perceive or imagine the holarchy (and there actually are overlapping and conflicting alternative holarchies) the way they can a category tree. They also can’t easily hold all the dimensions in mind, since their number exceeds the “magic number” of working memory (seven, plus or minus two). And there is also no good way to talk about any of this: systems thinking, ontologies, etc. are not good enough at the moment.
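(One way to make the “multiple hierarchies over the same concepts” point concrete; the concepts and relations below are invented for illustration, not taken from any real ontology.)

```python
# Sketch: the same concepts form a different hierarchy per relation
# type, so no single category tree captures the whole structure.
from collections import defaultdict

# A holarchy-ish structure as a typed multigraph of (parent, relation, child).
edges = [
    ("company",  "manages",   "factory"),
    ("factory",  "builds",    "engine"),
    ("engine",   "causes",    "motion"),
    ("company",  "manages",   "designer"),
    ("designer", "describes", "engine"),
]

# Project out one hierarchy per relation type.
hierarchies = defaultdict(list)
for parent, relation, child in edges:
    hierarchies[relation].append((parent, child))

for relation, links in sorted(hierarchies.items()):
    print(relation, links)
# "engine" sits under "factory" in the 'builds' projection but under
# "designer" in the 'describes' projection; a single tree would have
# to pick one parent and silently drop the other relationship.
```

A student who has only ever seen single trees will keep asking “but what is the parent of engine?”, which is exactly the wrong-carving failure described above.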
What happens as a result is that people need to spend decades building their own understanding of this world (essentially doing something similar to what Chapman calls remodeling: https://metarationality.com/remodeling ). An example of the LW understanding of this is in Rationalism before the Sequences ( https://www.lesswrong.com/posts/qc7P2NwfxQMC3hdgm/?commentId=oZtsFc5oCMA9Zk5Tg ), but I would argue that LW and the rationalist community still haven’t integrated the existing thinking (all the threads) on the topic, including work that has been out since the 1960s. Which is an interesting problem/opportunity.
Similarly, you can get a master’s degree in 2 years of courses, but a PhD requires an extended apprenticeship. The former is to teach you a body of knowledge, the latter is to teach you how to do research to extend that knowledge.
I’d also add that in his first lesson on metarationality, the one that opens with the Bongard problems, Chapman explicitly acknowledges that he is restricting his idea of a system to “a set of rules that can be printed in a book weighing less than ten kilograms, and which a person can consciously follow,” and that metarationality as he is thinking of it is systematic in the sense that there exist rules you could write down that would make it formulaic, but those rules would be too cumbersome to enact this way. At that point, to me, disagreeing feels like arguing about whether “we” “could” “systematically” Chinese-Room von Neumann: obviously mathematically yes, but also obviously practically no.
A lot of his later discussion of metarationality hinges on the idea of stances towards modes of thinking, acting, and relating, and on not holding ideas that are necessarily (and often invisibly) imprecise too firmly fixed in the mind. For example, the opening hypothetical conversation of In the Cells of the Eggplant has a very strong “Taboo your words!” vibe. A Human’s Guide to Words features strongly throughout, really, and the whole idea of stances in thinking as “metarationality” seems to me analogous to how metaethics relates to ethics. Many of the original Sequences hinge on ideas that I think Chapman would characterize as metarational (and I would note that the Sequences themselves frequently included EY talking about how something he thought was simple was actually incredibly complicated to convey, required dozens of prerequisites, and wasn’t something you could just tell someone and have them actually understand, even if you and they both think communication and understanding happened).