That doesn’t mean there’s anything better. You probably take your medical problems to a doctor, not an unqualified smart person.
...are you new here?
LW users will use doctors but are also quite likely to go to uncredentialed smart people for advice. Posts on DIY covid vaccines were extremely well received. I know two community members who had cancer, both of whom commissioned private research and feel it led to better outcomes for them (treatment was still done by doctors, but this informed who they saw and what they chose). The covid tag is full of people giving advice that was later vindicated by public health.
LessWrong has thought about this trade-off and definitively come down on the side of “let uncredentialed smart people take a shot”, knowing that those people face a lot of obstacles to doing good work.
Which would be a refutation of my comment if I had said “definitely” instead of “probably”.
The issue is primarily one of signalling. For example, the ratio of medically qualified/unqualified doctors is vastly higher than the ratio of medically qualified/unqualified car owners in Turkey or whatever. Having a PhD is one of the best quick signals of qualification around, but if you happen to know an individual who isn’t a doctor but who has spent years of their life studying some obscure disease (perhaps after being a patient, or they’re autistic and it’s just their Special Interest or whatever), I’m going to value their thoughts on the topic quite highly as well, perhaps even higher than those of a random doctor whose quality I have not yet had a chance to ascertain.
Exactly this. Also, doctors are supposed to actually heal patients, and get some degree of real-world feedback in succeeding or failing to do so. That likely puts them above most academics, whose feedback is often purely in being published or not, cited or not, by other academics in a circlejerk divorced from reality.
That description could apply to a certain rationality website.
Certainly it could, and at times does. In our defense, however, we do not make our living this way. It’s all too easy for people to push karma around in a circle divorced from reality, but plenty of people feel free to criticize Less Wrong here, as you just neatly demonstrated. There’s a much stronger incentive to follow the party line in academia where dissent, however true or useful, can curtail promotion or even get one fired.
If we were making our living off of karma, your comparison would be entirely apt, and I’d expect to see the quality of discussion drop sharply.
Everything you say is true, and I agree. But let’s not discount the pull towards social conformity that karma exerts, and the effect that evaporative cooling of social groups has in radicalizing community norms. You definitely get a lot further here by defending and promoting AI x-risk concerns than by dismissing or ignoring them.
That does tend to happen, yes, which is unfortunate. What would you suggest doing to reduce this tendency? (It’s totally fine if you don’t have a concrete solution of course, these sorts of problems are notoriously hard)
Karma should not be visible to anyone but the mods, for whom it serves as a distributed mechanism for catching their attention and not much else. Large threads could use karma to decide which posts to initially display, but for smaller threads comments should be chronological.
People should be encouraged to post anonymously, as I am doing. Unfortunately the LW forum software devs are reverting this capability, which is a step backwards.
Get rid of featured articles and sequences. I mean keep the posts, but don’t feature them prominently at the top of the site. Maybe have an info bar on the side that can be a jumping-off point for people to explore curated content, but don’t elevate it to the level of dogma as the current site does.
Encourage rigorous experimentation to verify one’s beliefs. A position arrived at through clever argumentation is quite possibly worthless. This is a particular vulnerability of this site, which is built around the exchange of words, not physical evidence. So a culture needs to be developed which demands empirical investigation of the form “I wondered if X is true, so I did A, B, and C, and this is what happened...”
That was five minutes of thinking on the subject. I’m sure I could come up with more.
Ignoring the concerns basically means not participating in any of the AI x-risk threads. I don’t think it would be held against anyone to simply stay out.
https://www.lesswrong.com/posts/X3p8mxE5dHYDZNxCm/a-concrete-bet-offer-to-those-with-short-ai-timelines is a post arguing against AI x-risk concerns, and it has more than three times the karma of any other post published the same day.
Well, we were getting paid for karma the other week, so…. (This is mostly a joke; I get that was an April Fool’s thing 🙃)
Exactly this. It takes a lot of effort to become competent through an unconventional route, and it takes a lot of effort to separate the unqualified competent person from the crank.
So you agree, as I previously said, that what you are looking for is not generic smartness, but some domain-specific thing that substitutes for conventional domain-specific knowledge.
Researching a disease that you happen to have is one of them, but it is clearly not the same thing as all-conquering generic smartness. Such an individual has nothing like the breadth of knowledge an MD has, even if they have more depth in one precise area.