Second-Order Rationality, System Rationality, and a feature suggestion for LessWrong
Second-order rationality
Definition
By “second-order rationality” (and, analogously, second-order intelligence) I mean the practice of reasoning rationally about other people’s rationality (and intelligence) in order to draw conclusions about the world.
This contrasts with evaluating a proposition at face value, using the first-order evidence that bears on it directly.
Second-order rationality means updating your beliefs purely from the distribution of beliefs (or belief histories) held by others, possibly grouped across populations with different characteristics, without consulting any first-order evidence about the proposition itself.
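As a toy illustration of this kind of update, here is a sketch of aggregating other people’s stated probabilities into a single credence, weighting each group by a (hypothetical) calibration track record. The populations, probabilities, and weights are all invented for the example; the aggregation rule used is a standard logarithmic opinion pool (averaging in log-odds space), which is one choice among several, not anything prescribed by this post.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pool(beliefs, weights):
    """Weighted average of beliefs in log-odds space (logarithmic opinion pool)."""
    total = sum(weights)
    avg = sum(w * logit(p) for p, w in zip(beliefs, weights)) / total
    return sigmoid(avg)

# Hypothetical reported probabilities for the same claim, from two
# populations with different (assumed) calibration track records.
experts = [0.8, 0.7, 0.75]   # stronger track record -> weight 2.0
laypeople = [0.55, 0.6]      # weaker track record  -> weight 1.0
beliefs = experts + laypeople
weights = [2.0] * len(experts) + [1.0] * len(laypeople)

print(round(pool(beliefs, weights), 3))
```

Note that the update here uses only the *distribution* of beliefs and facts about the believers (their track records), never any first-order evidence about the claim itself — which is exactly the second-order move described above.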
I think it’s an area worth exploring more.
What’s your probability that this is a useful area to study? You can use your own operationalization. For this exercise to work, record your prediction before reading on; it will be used as an example of system rationality at the end of this post.