(Cross-posting my comment on this from the original because I think it might be of more interest here.)
I might write a more detailed response along these lines depending on where my thinking takes me, but I’ve thought about this issue before, and after mulling it over more since reading this yesterday, it still seems to me that meta-rationality is specifically inscrutable because it takes meta-rationality to explain itself.
In fairness this is a problem for rationality too: it can’t really explain itself in terms of pre-rationality, and from what I can tell we don’t actually know that well how to teach rationality either. STEM education mostly teaches some of the methods of rationality, like how to use logic to manipulate symbols, but tends to do so in a way that ends up domain-restricted. Most STEM graduates are still pre-rational thinkers in most domains of their lives, though they may dress their thoughts up in the language of rationality, and this is specifically what projects like LessWrong are about: getting people to be actually rational rather than pre-rational in rationalist garb.
But even with CFAR and other efforts, LW seems to be only marginally more successful than most. I know a lot of LW/CFAR folks who have read, written, and thought about rationality a great deal, and they still struggle with many of the basics: not only adopting the rationalist worldview, but even just setting aside the pre-rationalist worldview long enough to notice when they don’t understand something. To be fair, marginal success is all LW needed to achieve its goal of producing a supply of people capable of doing AI safety research, but I think it’s telling that even a project so directed at making rationality learnable has been only marginally successful, and from what I can tell not by making rationality scrutable but by creating lots of opportunities for enlightenment.
Given that we don’t even have a good model of how to make rationality truly scrutable, I’m not sure we can really hope to make meta-rationality scrutable. What seems more likely to me is that we can find ways not of explaining meta-rationality but of training people into it. Of course this is already what you’re doing with Meaningness, and it’s also why I’m not sure we can do much more than what Meaningness has been working to accomplish so far.
But if meta-rationality is inscrutable to rationality, how do you know it even exists? At least Bayesian rationalists have some solace in Cox’s theorem, or the coherence theorem, or the Church-Turing thesis. What stops me from declaring there’s a sigma-rationality, which is inscrutable to every n-rationality below it? What does meta-rationality even imply, for the real world?
You’re taking the inscrutability thing a bit too strongly. Someone can transition from rationality to meta-rationality, just as someone can transition from pre-rationality to rationality. Pre-rationality has limitations because yelling tribal slogans at people who aren’t in your tribe doesn’t work.
What does meta-rationality even imply, for the real world?
What does rationality imply? You can’t actually run Solomonoff Induction, so FAPP you are stuck with a messy plurality of approximations. And if you notice that problem...
But if meta-rationality is inscrutable to rationality, how do you know it even exists?
You see the holes rationality doesn’t fill and the variables it doesn’t constrain and then you go looking for how you could fill them in and constrain them.
What stops me from declaring there’s a sigma-rationality, which is inscrutable to every n-rationality below it?
Nothing. We’re basically saying we’re in the position of applying a theory of types to ontology, with meta-rationality just one layer higher than rationality. We could of course go on forever, but being bounded, that’s not an option for us. There is some kind of meta-meta-rationality ontological type, and on up for any n, but working with it is another matter.
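To make that layering concrete, here’s a toy sketch (the class, names, and level numbers are invented purely for illustration; they aren’t anything from Chapman or this thread): a level-n reasoner can fully describe only reasoners strictly below its own level, which is roughly why each layer looks inscrutable from the one beneath it.

```python
from dataclasses import dataclass


@dataclass
class Reasoner:
    """Toy stand-in for a reasoner at some level of the stratified hierarchy."""
    name: str
    level: int  # 0 = pre-rational, 1 = rational, 2 = meta-rational, ...

    def can_model(self, other: "Reasoner") -> bool:
        # Stratification rule: a level can fully describe only levels below itself.
        return other.level < self.level


pre_rational = Reasoner("pre-rational", 0)
rational = Reasoner("rational", 1)
meta_rational = Reasoner("meta-rational", 2)

print(rational.can_model(pre_rational))   # True: rationality can explain pre-rationality
print(rational.can_model(meta_rational))  # False: meta-rationality is inscrutable from below
print(meta_rational.can_model(rational))  # True

# Nothing stops us from declaring level 3, 4, ... and beyond; being bounded,
# actually working at those levels is another matter.
```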
But once you realize you’re in this position, you notice that type theory doesn’t work so well here and maybe you want something else instead. Maybe the expressive power of self-referential theories isn’t so bad after all, although when working with these theories it’s pretty helpful to work out a few layers of self-reference before trying to collapse them, because otherwise you can’t hope to notice when you’ve switched between consistency and completeness.
I’m not sure we can really hope to make meta-rationality scrutable.
How about making an “ideological Turing test”? If rationalists could successfully pretend to be meta-rationalists, would that count as a refutation of the claim that meta-rationalists understand things that are beyond the understanding of mere rationalists?
Or is even this just rationalist-level reasoning that, from a meta-rationalist point of view, makes about as much sense as a hypothetical pre-rationalist asking rationalists to produce a superior horoscope?
The point about meta-rationality as Chapman describes it is that it isn’t a single point of view.
If you wanted to run an ideological Turing test on Chapman, you would need to account for his Tantric Buddhist background. Most rationalists would likely fail an ideological Turing test for Tantric Buddhism, but that wouldn’t mean much for the core thesis.
At first I was going to say “yes” to your idea, with the caveat that the only folks I’d trust to judge this are other folks we’d agree are meta-rationalists. But that sort of defeats the point, doesn’t it: I already believe rationalists couldn’t do this, and if some did, that would be evidence that, even if they don’t call themselves meta-rationalists, their thought processes are similar to those of people who do.
“Rationalist” and “meta-rationalist” are mostly labels for stochastic categories describing the complexity of thinking people do. No one properly is or is not a rationalist or meta-rationalist; at best someone can be sufficiently well described as one.
I don’t mean this to be wily: I think what you’re asking for (and the entire idea of an “ideological Turing test” itself) confounds causality in ways that make it only seem to work from rationalist-level reasoning. From my perspective, the taking on of another’s perspective that this test requires is already incorporated into meta-rationalist-level reasoning, so it isn’t really a test of meta-rationality, in the same way a “logical argument test” would be meaningless to a rationalist but a powerful tool for more complex thought for a pre-rationalist.
Maybe the short version of this is: meta-rationalists can’t do what rationalists ask, but that’s okay, because rationalists can’t perform the analogous task for pre-rationalists either, so asking meta-rationality to be explicable in terms of rationality is epistemically unfair and asks for too much proof.