But if meta-rationality is inscrutable for rationality, how do you know it even exists? At least, Bayesian rationalists have some solace in Cox’s theorem, or the coherence theorems, or the Church-Turing thesis. What stops me from declaring there’s a sigma-rationality, which is inscrutable to all n-rationalities below it? What does meta-rationality even imply, for the real world?
You are taking the inscrutability thing a bit too strongly. Someone can transition from rationality to meta-rationality, just as someone can transition from pre-rationality to rationality. Pre-rationality has limitations because yelling tribal slogans at people who aren’t in your tribe doesn’t work.
What does meta-rationality even imply, for the real world?
What does rationality imply? You can’t actually run Solomonoff induction, so FAPP (for all practical purposes) you are stuck with a messy plurality of approximations. And if you notice that problem...
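To make the "messy plurality of approximations" point concrete, here is a toy sketch (the function name and the tiny hypothesis class are mine, purely illustrative): true Solomonoff induction weights every computable hypothesis by an uncomputable complexity prior, so in practice you pick some small, arbitrary hypothesis class and weight by description length, and a different choice of class gives a different predictor.

```python
# Toy sketch: a crude, finite stand-in for Solomonoff induction.
# We enumerate a tiny hypothesis class ("the bit sequence repeats with
# period p") and weight each hypothesis by 2**-p, a stand-in for the
# uncomputable Kolmogorov-complexity prior over all programs.

def predict_next(bits, max_period=4):
    """Predict the next bit by mixing period hypotheses consistent
    with the data, weighted by a description-length prior."""
    scores = {0: 0.0, 1: 0.0}
    for p in range(1, max_period + 1):
        prior = 2.0 ** -p  # longer descriptions get less weight
        # a period-p hypothesis is kept only if it fits the data so far
        if all(bits[i] == bits[i % p] for i in range(len(bits))):
            scores[bits[len(bits) % p]] += prior
    total = scores[0] + scores[1]
    if total == 0:
        return None  # no hypothesis in our tiny class fits the data
    return max(scores, key=scores.get)

print(predict_next([0, 1, 0, 1, 0]))  # -> 1 (the period-2 hypothesis dominates)
```

The arbitrariness is the point: change `max_period`, or swap in a different hypothesis class entirely, and you get a different approximation with no level-internal way to say which one is right.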
But if meta-rationality is inscrutable for rationality, how do you know it even exists?
You see the holes rationality doesn’t fill and the variables it doesn’t constrain, and then you go looking for ways you could fill them in and constrain them.
What stops me from declaring there’s a sigma-rationality, which is inscrutable to all n-rationalities below it?
Nothing. We are basically saying we’re in the position of applying a theory of types to ontology, and meta-rationality is just one layer higher than rationality. We could of course go on forever, but for bounded agents that’s not an option. There is some kind of meta-meta-rationality ontological type, and on up for any n, but working with it is another matter.
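The layering described above can be sketched in code (a minimal illustration; the class names and the numeric level assignments are mine, not standard terminology): in a stratified, type-theory-style hierarchy, a level-n framework can scrutinize frameworks strictly below it, never at or above its own level.

```python
# Toy sketch of a stratified ("typed") hierarchy of frameworks.
# The stratification rule: a framework can scrutinize only frameworks
# at strictly lower levels, so each level is inscrutable from below.

class Framework:
    def __init__(self, name, level):
        self.name = name
        self.level = level  # 0 = pre-rationality, 1 = rationality, ...

    def can_scrutinize(self, other):
        """Visibility flows strictly downward through the hierarchy."""
        return other.level < self.level

pre = Framework("pre-rationality", 0)
rat = Framework("rationality", 1)
meta = Framework("meta-rationality", 2)

print(rat.can_scrutinize(pre))   # True: rationality can analyze slogans
print(rat.can_scrutinize(meta))  # False: meta-rationality is inscrutable from below
print(meta.can_scrutinize(rat))  # True: one layer higher suffices
```

The "go on forever" problem is visible here too: nothing stops you from constructing `Framework("meta-meta-rationality", 3)` and so on for any n, but a bounded agent can only ever instantiate finitely many layers.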
But once you realize you’re in this position, you notice that type theory doesn’t work so well, and maybe you want something else instead. Maybe the expressive power of self-referential theories isn’t so bad after all. When working with these theories, though, it’s pretty helpful to work out a few layers of self-reference before trying to collapse them, because otherwise you can’t hope to notice when you’ve traded consistency for completeness or vice versa.