This post gave a short name to a way of thinking that I naturally fall into, and implicitly pointed to the possibility that this way of thinking is mistaken. That makes a variety of discussions in the AI alignment space more tractable. I do wish that the post were more precise in characterising the position of ‘realism about rationality’ and its converse, or (even better) that it gave arguments for or against ‘realism about rationality’ (even a priors-based one, as in this closely related Robin Hanson post), but pointing to a type of proposition and giving it a name seems very valuable.
Note that the linked technical report by Salamon, Rayhawk, and Kramar does a good job at looking at evidence for and against ‘rationality realism’, or as they call it, ‘the intelligibility of intelligence’.
I do think it was an interesting choice for the post to be about ‘realism about rationality’ rather than its converse, which the author seems to subscribe to. I suspect this can be chalked up to it being easier to clearly see a thinking pattern that you don’t frequently use yourself.
I think in general, if there’s a belief system B that some people have, then it’s much easier and more useful to describe B than ~B. It’s pretty clear if, say, B = Christianity, or B = Newtonian physics. I think of rationality anti-realism less as a specific hypothesis about intelligence, and more as a default skepticism: why should intelligence be formalisable? Most things aren’t!
(I agree that if you think most things are formalisable, so that realism about rationality should be our default hypothesis, then phrasing it this way around might seem a little weird. But the version of realism about rationality that people buy into around here also depends on some of the formalisms that we’ve actually come up with being useful, which is a much more specific hypothesis, making skepticism again the default position.)
I think rationality realism, Bayesianism, and rationality anti-realism relate to one another roughly as theism, Christianity, and atheism do. Just as it’s feasible and natural to write a post advocating and mainly talking about atheism, despite that position being based on default skepticism and in some sense defined by theism, I think it would be feasible and natural to write a post titled ‘rationality anti-realism’ that focussed on that proposition and described why it was true.