I’m extremely relieved to hear that you and Vassar are worrying about dilution of rationality, but if all you require is reaching the absolute threshold of competence, you may not be worrying about it enough. I think it’s very possible that the best options available to a group in which the average level of rationality is 9 out of 10 are several times as effective on a per-person basis as the best options available to a group in which the average level of rationality is 8 out of 10.
I am not sure that worrying about the perils of growth to the degree you suggest is wise. Given how difficult it is to separate personal dislikes from competence, it seems to me that having a process to identify and remove specific problems (X is scaring off the ladies, let’s train him or boot him) is much better than trying to optimize the group (I have more fun when Y isn’t there, let’s stop inviting them).
I also suspect this isn’t intended to be an ivory tower coterie, but a growing movement, which means you want all people above minimum competence regardless of their current skill level. If you’ve got that sort of growth atmosphere, you’ll eventually get enough people that you can sort, and your immediate group will have more members of the average quality you want.
I completely agree with this comment. I don’t believe anyone is sufficiently epistemically rational to have reached the threshold of actual competence, which is roughly 17 orders of magnitude more difficult than reaching the threshold of being known as awesome by your peer group. Thousands of men can work 12 hours a day for many decades without producing as much value as a single clever insight.
Friendly AI isn’t solved; the Singularity Institute has something like 4 real researchers, and none of them are really working on FAI even if some of them have seemingly clever ideas. Some people like Mitchell Porter and Vladimir Nesov are working on Friendliness or very related problems, but not many, and it’s disorganized, and no one thinks it’s important to actually address disagreements despite all this talk about how disturbed one should be by disagreement. Less Wrong is probably the most rational forum on the web, and yet comments that are flat out wrong get upvoted too much, especially about tricky problems like FAI. Et cetera.
We would not know if we were significantly below the necessary level of competence to have an important-in-hindsight insight. Hell, even the Singularity is just the opening of the rabbit hole. We could be missing some important things about the relevant philosophy. As a stupid example, the current common conception of the Singularity is “we fill the universe with utilitronium”, which might not be nearly the correct framing in a Tegmark ’verse. Our comparative advantage is epistemic rationality, whether we like it or not. The reductionistic naturalistic cognitivist realist philosophy of Less Wrong is not satisfying even if it’s the best thing we have at the moment. I highly doubt that this is the point at which we can be satisfied with our epistemic ability and start moving lots of cognitive resources to building marginally rational communities. Following the leader doesn’t work without a smart enough leader, and there are no smart leaders (even if there are a few smart people).
With sufficient deference given to the more capable, I don’t see a problem with a lower average.
Maybe you’re worried about the phenomenon of 9s receiving lots of upvotes from easily impressed 8s, while 10s’ contributions aren’t understood as widely. I don’t think this is too much of a concern if the people with the best ideas and judgment improve their teaching.
Except that you need to be careful that that lower average doesn’t result in goal dilution.
Disclaimer: I’m theorizing here; I haven’t actually been to a meetup.