Posting interesting bits of new science to LessWrong won’t help any of us become more rational. Indeed, it will have a net negative effect, since it is distracting and it dilutes LessWrong’s other content. While it’s rational to pay attention to new science, this is an outcome of rationality, not part of rationality itself (just as it’s rational to eat nice food where possible, but recommendations of good restaurants aren’t appropriate on LessWrong).
Clearly it’s fine to use bits of pop-sci to illustrate rationality concepts, and of course the scientific method is of great interest to us. But posting random bits of stuff to LessWrong is bad if they don’t bear on rationality, no matter how interesting they are.
I think characterizing this post as pop-sci (or “random”, for that matter) is highly misleading. The level on which it pattern-matches to that category of things is superficial.
This is actually a request for involvement and a calculation of altruistic benefit on a matter that requires some technical knowledge to evaluate. If you were to explicitly argue that time spent on this proposal is unlikely to have a very favorable utility, that’s the kind of information I’m looking for. Interesting is not the point.
Secondly, scientific and engineering challenges are a good way to improve important rationality skills. There are certain conditions a topic needs to meet, though, to be useful in this regard:
It has to be something where most of the necessary information exists, so that a fairly coherent Aumann agreement is reachable. (The hazier the data, the more likely it is to split into a Green/Blue divide, or worse, unanimous groupthink.) Relatedly, it needs to be within the audience’s technical grasp.
Value of information must at least appear to be high. (Arguments that it is lower than it appears can be revelatory.)
There needs to be some difference between mainstream opinion/the easy answer and the opinion that the rationalist is arguing for (otherwise you end up with an exercise in conformity which doesn’t actually change any minds).
Finally, while you are making a point I agree with regarding random bits of pop-sci and their lack of a place on LessWrong, please note that by posting it as a reply to my post you are inferentially transferring the properties “random” and “pop-sci” to the article, properties I don’t ascribe to it. While I can see the pattern-matching that led to that conclusion, it is harder to respond to than if you had made the connection explicit.
In any case, I do accept responsibility for the writing style issues that led to your reaction. Will attempt to fix.
This is actually a request for involvement and a calculation of altruistic benefit on a matter that requires some technical knowledge to evaluate.
You’re right. I skim-read your article and thought it was just about the project, rather than a request for help. I apologise. Oops.
I’m still not sure whether I approve of this kind of post either; I’d much rather things had a direct bearing on rationality. But I won’t drag that discussion on further here.
Popular science has nothing to do with rationality.
Could you elaborate? That sounds trivially false the way I’m reading it.