(For reference, I upvoted you because it was an interesting insight and I was just trying to draw out more content. But since you offered three question marks to my single question mark it looks like maybe I should spell out what I meant explicitly...)
An information cascade occurs when the same piece of evidence gets counted more than once. One person sees something and tells two other people. A fourth person hears the story from each of them without realizing it was the same story told secondhand, and counts it as having happened twice. A fifth person hears from the second and the fourth without realizing the backstory and counts it as having happened three times. The first person hears about three events from the fifth person and decides that their original observation must happen all the time. A sixth person figures out the problem but is shouted down by the others, because they’ve got a stake in the observation being common. This would be a tragedy, like something out of a play, having to do with knowledge. Hence an epistemic tragedy.
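To make the double-counting concrete, here is a minimal toy sketch (my own illustration, not something from the comment above; the prior and the 4:1 likelihood ratio are made-up numbers) of how treating one observation as three independent reports inflates a naive Bayesian observer’s confidence:

```python
# Toy illustration of evidence double-counting in an information cascade.
# Assumes a simple odds-form Bayesian update with a made-up 4:1 likelihood
# ratio per "independent" report.
def posterior(prior, likelihood_ratio, n_reports):
    """Posterior probability after treating n_reports as independent evidence."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** n_reports
    return odds / (1 + odds)

prior = 0.10  # initial credence that the phenomenon is common
lr = 4.0      # assumed evidential weight of one genuinely independent report

# Honest accounting: everything traces back to a single observation.
print(posterior(prior, lr, 1))  # ~0.31

# Cascade accounting: the same story arrives by three routes and is counted thrice.
print(posterior(prior, lr, 3))  # ~0.88
```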
If cryoskeptics stay silent about real objections for fear of downvoting, and thereby deprive people invested in (or pondering investment in) a cryo-policy of those objections (and I guess presuming that the skeptics are a source of signal rather than noise), then the cryo-investors should feel regret at this, because they’d be losing the opportunity to escape a costly delusion. The cryo-investors’ own voting would be causing them to live in a kind of bubble, and it would imply that the aggregate voting policies of the forum are harming the quality of content available on the site and making it a magnifier of confusion rather than a condenser for truth.
This would mean that lesswrong was just fundamentally broken on a subject many people consider to be really important, and it would represent a severe indictment of the entire “lesswrong” project. If it were true, it would suggest that perhaps lesswrong should be fixed… or abandoned as a lost cause… or something? Like maybe voting patterns could be examined and automated weighting systems could fix things… or whatever.
However, a distinct hypothesis (made up on the spot for the sake of example) might be that cryoskeptics are just people who haven’t thought about this stuff very much, who will tend to raise the same old objections that normally well up from specific ignorance plus background common knowledge, leaving the “advocates” to (for example) trot out the application of the Arrhenius equation all over again, educating one more person on one more component in the larger intellectual structure.
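For anyone who hasn’t seen that particular move: below is a rough sketch of the kind of Arrhenius-equation calculation advocates tend to trot out. The activation energy is a made-up illustrative value, and whether the extrapolation down to 77 K is legitimate is itself something a skeptic might question.

```python
import math

# Rough sketch of the standard Arrhenius argument for why chemical decay is
# negligible at liquid-nitrogen temperature. The activation energy below is an
# assumed, illustrative value, not a measurement.
R = 8.314        # gas constant, J/(mol*K)
Ea = 65_000.0    # assumed activation energy, J/mol (illustrative only)
T_body = 310.0   # body temperature, K
T_ln2 = 77.0     # liquid nitrogen, K

# Arrhenius: k = A * exp(-Ea / (R * T)); the prefactor A cancels in the ratio.
slowdown = math.exp((Ea / R) * (1 / T_ln2 - 1 / T_body))
print(f"reaction rates roughly {slowdown:.1e} times slower at 77 K")  # on the order of 1e33
```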
Under this hypothesis, the difference between meetups and online discussion might be (again making something up off the top of my head) that in a meetup people feel more comfy expressing ignorance so that it can be rectified because fewer people will hear about it, it won’t become part of the history of the internet, and the people who hear will be able to see their face and do friendly-monkey things based on non-verbal channels unavailable to us in this medium. If this were true, it would suggest that lesswrong voting habits aren’t nearly as bad: they aren’t really causing much of an information cascade, but are instead detecting and promoting high quality comments. And if this second hypothesis were true, then any real failure would perhaps(?) be a failure of active content generation and promotion aimed at filling in the predictable gaps in knowledge, so that new people don’t have to pipe up and risk looking unknowledgeable in order to gain that knowledge.
So the thing I was asking was: given the implicit indictment of LW if an information cascade in voting is causing selection bias in content generation, can you think of an alternative hypothesis against which to test the theory, so we can figure out whether there is something worth fixing, and if so what? Like maybe the people who express skepticism at meetups have less scientific background and tend to be emotionally sensitive, while the people who express skepticism online have more education, more brashness, and are saying what they say for other idiosyncratic reasons.
Basically I was just asking: “What else does your interesting and potentially important hypothesis predict? Can it be falsified? How? Say more!” :-)
Spawning new groups may help. For example, the Center for Modern Rationality is unlikely to be quite so strongly seeded by transhumanist subcultural tropes, unless someone deliberately decides doing that would be a good idea.
However, a distinct hypothesis (made up on the spot for the sake of example) might be that cryoskeptics are just people who haven’t thought about this stuff very much, who will tend to raise the same old objections that normally well up from specific ignorance plus background common knowledge, leaving the “advocates” to (for example) trot out the application of the Arrhenius equation all over again, educating one more person on one more component in the larger intellectual structure.
Under this hypothesis, the difference between meetups and online discussion might be (again making something up off the top of my head) that in a meetup people feel more comfy expressing ignorance so that it can be rectified because fewer people will hear about it, it won’t become part of the history of the internet, and the people who hear will be able to see their face and do friendly-monkey things based on non-verbal channels unavailable to us in this medium.
This still looks like the same selection bias to me (cryoskeptics not speaking up online), though potentially for a different reason. I suppose that a well-constructed poll could help clarify the issue somewhat (people participating in a poll are subject to some selection bias, but likely a different one).
I see what you mean now, thanks.