Clusters of opinion may be accidental, e.g. many lemmings follow Eliezer Yudkowsky, who is correct on three topics and wrong on two. Or some other pundit. I think such accidental correlations will drown out whatever useful signal you were hoping to uncover by factor analysis. It’s a fishy endeavor anyway; it smells like determining truth by popular vote spiced up with nifty math. What if all smart people start using your algorithm? You could get some nasty herd effects...
Don’t poll LWers using keys previously posted on by EY (or RH). That would just be silly.
While that would make it harder to distinguish between LW members, that doesn’t mean game over.
If we already expect LW members to be more correct, it still might be useful to poll LW members about which of their views are:
1) contrarian
2) on topics that most LW members haven’t thought about very hard
3) important
Using something along the lines of the Amanda Knox litmus test but with no previous posts on it, one presumes?
I thought the Amanda Knox test was fascinating, but mostly for its implications about rationality, not so much for the fact that this specific convict is in fact innocent.
Things like the Shangri-La Diet are closer to what I was thinking, since that has potentially huge consequences on its own.
The closet survey is also close to what I had in mind, with a little less emphasis on my #2. It’d also be interesting to see what happens if that survey were done again, now that we have a better idea of what the shared beliefs are.
This endeavor is intended to reduce the fishiness of seeking truth by conforming to mainstream opinion (along the lines Robin advocates). The process Eliezer suggests actually filters out a significant amount of the adverse positive-feedback effect.
“Determining truth” has connotations of “certainty”, which is at odds with the fact that evidence here is assumed to be weak—something to prime attention, not imprint opinions.
(But I agree that the idea of getting any kind of useful conclusions/info from such a poll doesn’t seem realistic.)
Edit: after reformulating the method, I changed my mind.
Conclusions, no, but it sure might print out a fascinating list of things to investigate.
I expect that all “things to investigate” you’d find would’ve already been on the radar.
I don’t, especially if you let respondents suggest additional items and incorporated them. The CCC is large and includes things like (probably) the Shangri-La Diet.
Then the gain is not in turning attention to things considered wrong, but more in turning it to things that weren’t considered at all: a high-quality memetic availability pool that lets you avoid wasting time on false positives. Again, that’s too dramatic an effect to get from a poll, and it’s unclear which area the finds should be tuned to. I’m not at all interested in knowing that cold fusion is real if counterfactually it is.
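The lemmings worry above can be made concrete with a toy simulation (all numbers here are made up for illustration): if most poll respondents copy one pundit’s answers, the poll’s per-topic majority tracks the pundit, including on the topics where he is wrong, drowning out the independent thinkers’ signal.

```python
import random

random.seed(0)

truth  = [1, 1, 1, 0, 0]   # ground truth on five topics
pundit = [1, 1, 1, 1, 1]   # the pundit is right on three topics, wrong on two

def follower():
    # copies the pundit's answers, flipping each with 10% probability
    return [a if random.random() > 0.1 else 1 - a for a in pundit]

def independent():
    # answers each topic correctly with probability 0.7
    return [t if random.random() < 0.7 else 1 - t for t in truth]

# 80 lemmings, 20 independent thinkers
respondents = [follower() for _ in range(80)] + [independent() for _ in range(20)]

# per-topic majority vote across all respondents
majority = [round(sum(r[i] for r in respondents) / len(respondents))
            for i in range(len(truth))]

# the majority matches the pundit on all five topics, so it inherits
# his two wrong answers despite the independent minority
```

With these (arbitrary) proportions, the expected "yes" fraction on the pundit’s two wrong topics is about 0.78, so the herd’s answer wins comfortably; the accidental correlation among followers is exactly the kind of structure factor analysis would surface first.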