I’m curious to hear an example or two of what sort of experiments you had in mind (and the models they’d be testing) when writing this article. A brief, none-too-thorough attempt on my own part kept hitting a couple of walls. I agree with your sentiment that simple surveys may fall prey to sampling biases, and I wonder how we would acquire the resources to conduct experiments methodologically sound enough that their results would be significant, the way CFAR is doing them, with randomized controlled trials and the like.
I’m curious to hear an example or two of what sort of experiments you had in mind (and the models they’d be testing) when writing this article.
Luminosity is an example of a sequence that I liked but which makes many predictions that still need testing. Methods of teaching rationality also seem like an obvious area for gains; anyone interested in that may want to look at a project I started to tackle this.
I agree with your sentiment that simple surveys may fall prey to sampling biases, and I wonder how we would acquire the resources to conduct experiments methodologically sound enough that their results would be significant, the way CFAR is doing them, with randomized controlled trials and the like.
I implicitly argue that CFAR would be better off doing this kind of thing too. We are at the dawn of an era of computational social science, and transitioning to a more data-driven model seems promising. If that is the course they pick, I see no reason to make a strong distinction between CFAR and the wider LessWrong community.
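To make the randomized-controlled-trial idea above concrete, here is a minimal sketch of the core step, randomly assigning participants to treatment and control arms. The function name and interface are hypothetical, not anything CFAR uses; a real study would also need pre-registration, blinding where possible, and a power calculation.

```python
import random

def assign_rct(participants, seed=None):
    """Randomly split participants into two arms (treatment, control).

    Passing a seed makes the assignment reproducible, which helps when
    the assignment itself must be auditable.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Example: split ten (hypothetical) participant IDs into two arms of five.
treatment, control = assign_rct(range(10), seed=42)
```

Even a small community study could use something this simple, provided the assignment is done once, recorded, and not re-rolled.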
We can raise funds via Kickstarter or just regular donations to cover expenses, as was done in the example of Citizen Genetics. Competitions like the Quantified Health Prize could be used by CFAR to pick which of the community-proposed projects it would like to see happen, and perhaps fund them. For things like processing time and bandwidth, you already have LWers willing to give them away for free.
We wouldn’t need that many resources. A website of the yourmorals.org kind (that is, one used for academic research) is something well within our reach (YourBrain, YourReasoning, RURational?). As for the cognitive resources, don’t we have enough Bayesians and people with basic programming knowledge that, given resources like Common Crawl, we could come up with interesting research? That we have many AI experts on the site only means we also have the know-how to tackle big data.
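As a sketch of how low the barrier is: Common Crawl exposes a public URL index (the CDX API at index.commoncrawl.org) that can be queried with nothing but the standard library. The collection name below is an example and changes with each crawl; current names are listed on the index site.

```python
import json
import urllib.parse
import urllib.request

CDX_HOST = "https://index.commoncrawl.org"

def cdx_query_url(url_pattern, collection="CC-MAIN-2023-50"):
    """Build a CDX index query URL for pages matching url_pattern."""
    params = urllib.parse.urlencode({"url": url_pattern, "output": "json"})
    return f"{CDX_HOST}/{collection}-index?{params}"

def cdx_lookup(url_pattern, collection="CC-MAIN-2023-50"):
    """Fetch matching index records (the API returns one JSON object per line)."""
    with urllib.request.urlopen(cdx_query_url(url_pattern, collection)) as resp:
        return [json.loads(line) for line in resp.read().splitlines()]
```

From the index records, the actual page captures can then be pulled from Common Crawl's public storage for text analysis, which is the kind of project a few volunteers with basic programming knowledge could run.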