I’m not sure we’d need anything that elaborate. The rationalist community isn’t that big. I was more thinking that rationalists could self-nominate their expertise, or that a couple of people could come together and nominate someone if they notice that that person has engaged in depth with the topic.
I’ve previously played with the idea of more elaborate schemes, including tests, track records, and in-depth arguments. But of course, the more elaborate the scheme, the more overhead it creates, and I’m not sure that much overhead is affordable or worthwhile if one just wants to figure stuff out.
I agree. We could afford more overhead if we had thousands of rationalists active on the Q&A site. Realistically, we will be lucky if we get twenty.
But some kind of verification would be nice, to prevent the failure mode of “anyone who creates an account is automatically considered a rationalist”. Similarly, if people simply declare their own expertise, the system gives disproportionate exposure to overconfident people.
How to achieve this as simply as possible?
One idea is to have a network of trust. Some people (e.g. all employees of MIRI and CFAR) would automatically be considered “rationalists”; other people become “rationalists” only if three existing rationalists vouch for them. (A vouch can be added or revoked at any moment. It is evaluated recursively, so if you lose the flag, the people you vouched for might lose their flags too, unless they already have three other people vouching for them.) There would also be a list of skills, but you could not declare your own; you could only upvote or downvote a skill for other people. If you get three votes, the skill is displayed next to your name (a tooltip shows the people who upvoted it, so if you say something stupid, they can be called out).
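To make the recursive evaluation concrete, here is a minimal sketch, assuming the three-vouch threshold above and treating the trusted set as a fixed point recomputed from the seed members after every change. The names here (trusted_set, VOUCH_THRESHOLD, the example members) are hypothetical, not any existing implementation:

```python
VOUCH_THRESHOLD = 3  # assumed threshold from the proposal above

def trusted_set(seeds, vouches):
    """Return the set of trusted members as a least fixed point.

    seeds   -- members trusted unconditionally (e.g. MIRI/CFAR staff)
    vouches -- dict mapping each member to the set of people they vouch for

    A member is trusted iff they are a seed, or at least VOUCH_THRESHOLD
    currently-trusted members vouch for them. Recomputing from the seeds
    after every change handles revocation recursively: if you lose the
    flag, anyone who depended on your vouch loses it too, unless they
    still have three other trusted vouchers.
    """
    trusted = set(seeds)
    changed = True
    while changed:
        changed = False
        # Count vouches coming only from currently trusted members.
        counts = {}
        for voucher in trusted:
            for target in vouches.get(voucher, ()):
                counts[target] = counts.get(target, 0) + 1
        for member, n in counts.items():
            if n >= VOUCH_THRESHOLD and member not in trusted:
                trusted.add(member)
                changed = True
    return trusted

# Hypothetical example:
seeds = {"alice", "bob", "carol"}  # e.g. the automatically trusted staff
vouches = {
    "alice": {"dave"},
    "bob":   {"dave"},
    "carol": {"dave"},
    "dave":  {"eve"},  # eve has only one voucher, so she stays untrusted
}
print(trusted_set(seeds, vouches))
# {"alice", "bob", "carol", "dave"}; if carol revokes her vouch,
# recomputing drops dave, and anyone who relied on his vouch with him.
```

Recomputing the fixed point from scratch keeps the revocation logic trivial; with a few dozen users, the cost is negligible.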
This would be the entire mechanism. The meta debate could happen in special LW threads, or perhaps in shortform; you could post there, e.g., “I am an expert on X, could someone please confirm this? You can interview me by Zoom”, or you could call out other people’s misleading answers, etc.