The relevant subset of people who are smarter than you is the people who have relevant industry experience or academic qualifications.
There is no form of smartness that makes you equally good at everything.
Given the replication crisis, blind deference to academic qualifications is absurd. While there are certainly many smart PhDs, a piece of paper from a university does not automatically confer either intelligence or understanding.
That doesn’t mean there’s anything better. You probably take your medical problems to a doctor, not an unqualified smart person.
...are you new here?
LW users will use doctors but are also quite likely to go to uncredentialed smart people for advice. Posts on DIY covid vaccines were extremely well received. I know two community members who had cancer, both of whom commissioned private research and feel it led to better outcomes for them (treatment was still done by doctors, but this informed who they saw and what they chose). The covid tag is full of people giving advice that was later vindicated by public health.
LessWrong has thought about this trade-off and definitively come down on the side of “let uncredentialed smart people take a shot”, knowing that those people face a lot of obstacles to doing good work.
Which would be a refutation of my comment if I had said “definitely” instead of “probably”.
The issue is primarily one of signalling. For example, the ratio of medically qualified/unqualified doctors is vastly higher than the ratio of medically qualified/unqualified car owners in Turkey or whatever. Having a PhD is one of the best quick signals of qualification around, but if you happen to know an individual who isn’t a doctor but who has spent years of their life studying some obscure disease (perhaps after being a patient, or they’re autistic and it’s just their Special Interest or whatever), I’m going to value their thoughts on the topic quite highly as well, perhaps even higher than a random doctor whose quality I have not yet had a chance to ascertain.
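To make the “quick signal” point concrete, here is a toy Bayes’-rule sketch. Every number in it (the base rate of competence, and how often competent vs. incompetent people carry each signal) is made up purely for illustration, not taken from any data:

```python
# Toy numbers only -- assumed for illustration, not taken from any data.

def posterior(prior, p_signal_given_competent, p_signal_given_not):
    """P(competent | signal), by Bayes' rule."""
    p_signal = (p_signal_given_competent * prior
                + p_signal_given_not * (1 - prior))
    return p_signal_given_competent * prior / p_signal

prior = 0.02  # assumed base rate of genuine competence on this disease

# Signal A: holds an MD -- common among the competent, rare otherwise (assumed 60% vs 0.5%).
print(posterior(prior, 0.60, 0.005))   # ~0.71

# Signal B: has spent years studying this one disease (as a patient / special interest).
# Rarer overall, but assumed even more concentrated among the competent (30% vs 0.05%).
print(posterior(prior, 0.30, 0.0005))  # ~0.92
```

On those assumed numbers, both signals lift you from a 2% prior to “probably worth listening to”, and the rarer obscure-disease signal ends up stronger than the credential, which is the sense in which the obsessive non-doctor can outrank a random doctor.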
Exactly this. Also, doctors are supposed to actually heal patients, and get some degree of real-world feedback in succeeding or failing to do so. That likely puts them above most academics, whose feedback is often purely in being published or not, cited or not, by other academics in a circlejerk divorced from reality.
That description could apply to a certain rationality website.
Certainly it could, and at times does. In our defense, however, we do not make our living this way. It’s all too easy for people to push karma around in a circle divorced from reality, but plenty of people feel free to criticize Less Wrong here, as you just neatly demonstrated. There’s a much stronger incentive to follow the party line in academia, where dissent, however true or useful, can curtail promotion or even get one fired.
If we were making our living off of karma, your comparison would be entirely apt, and I’d expect to see the quality of discussion drop sharply.
Everything you say is true, and I agree. But let’s not discount the pull towards social conformity that karma has, and the effect that evaporative cooling of social groups has in terms of radicalizing community norms. You definitely get a lot further here by defending and promoting AI x-risk concerns than by dismissing or ignoring them.
That does tend to happen, yes, which is unfortunate. What would you suggest doing to reduce this tendency? (It’s totally fine if you don’t have a concrete solution of course, these sorts of problems are notoriously hard)
Karma should not be visible to anyone but mods, for whom it serves as a distributed mechanism for catching their attention and not much else. Large threads could use karma to decide which posts to display initially, but for smaller threads comments should be chronological. (A rough sketch of this display rule follows these suggestions.)
People should be encouraged to post anonymously, as I am doing. Unfortunately, the LW forum software devs are reverting this capability, which is a step backwards.
Get rid of featured articles and sequences. I mean keep the posts, but don’t feature them prominently at the top of the site. Maybe have an infobar on the side that can be a jumping-off point for people to explore curated content, but don’t elevate it to the level of dogma as the current site does.
Encourage rigorous experimentation to verify one’s beliefs. A position arrived at through clever argumentation is quite possibly worthless. This is a particular vulnerability of this site, which is built around the exchange of words, not physical evidence. So a culture needs to be developed that demands empirical investigation of the form “I wondered if X is true, so I did A, B, and C, and this is what happened...”
That was five minutes of thinking on the subject. I’m sure I could probably come up with more.
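For what it’s worth, the karma suggestion above reduces to a very small display rule, sketched here with hypothetical types and an arbitrary threshold (nothing below reflects how LessWrong actually orders comments):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Comment:
    author: str
    posted_at: datetime
    karma: int = 0  # visible only to mods under this proposal

LARGE_THREAD_THRESHOLD = 50  # arbitrary cutoff for a "large" thread

def initial_display_order(comments: List[Comment]) -> List[Comment]:
    """Large threads: karma decides what is shown first.
    Smaller threads: purely chronological."""
    if len(comments) > LARGE_THREAD_THRESHOLD:
        return sorted(comments, key=lambda c: c.karma, reverse=True)
    return sorted(comments, key=lambda c: c.posted_at)
```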
Ignoring the concerns basically means not participating in any of the AI x-risk threads. I don’t think it would be held against anyone to simply stay out.
https://www.lesswrong.com/posts/X3p8mxE5dHYDZNxCm/a-concrete-bet-offer-to-those-with-short-ai-timelines is a post arguing against AI x-risk concerns, and it has more than three times the karma of any other post published the same day.
Well, we were getting paid for karma the other week, so…. (This is mostly a joke; I get that was an April Fool’s thing 🙃)
Exactly this. It takes a lot of effort to become competent through an unconventional route, and it takes a lot of effort to separate the unqualified competent person from the crank.
So you agree, as I previously said, that what you are looking for is not generic smartness, but some domain-specific thing that substitutes for conventional domain-specific knowledge.
Researching a disease that you happen to have is one of them, but is clearly not the same thing as all-conquering generic smartness... such an individual has nothing like the breadth of knowledge an MD has, even if they have more depth in one precise area.
Why the extreme downvotes here? This seems like a good point, at least generally speaking, even if you disagree with what the exact subset should be. Upvoted.
Here’s the quote again: “The relevant subset of people who are smarter than you is the people who have relevant industry experience or academic qualifications.”
I think that it’s possible for people without relevant industry experience or academic qualifications to say correct things about AGI risk, and I think it’s possible for people with relevant industry experience or academic qualifications to say stupid things about AGI risk.
For one thing, the latter has to be true, because there are people with relevant industry experience or academic qualifications who vehemently disagree about AGI risk with other people with relevant industry experience or academic qualifications. For example, if Yann LeCun is right about AGI risk then Stuart Russell is utterly dead wrong about AGI risk and vice-versa. Yet both of them have impeccable credentials. So it’s a foregone conclusion that you can have impeccable credentials yet say things that are dead wrong.
For another thing, AGI does not exist today, and therefore it’s far from clear that anyone on earth has “relevant” industry experience. Likewise, I’m pretty confident that you can spend 6 years getting a PhD in AI or ML without hearing literally a single word or thinking a single thought about AGI risk, or indeed AGI in general. You’re welcome to claim that the things everyone learns in CS grad school (e.g. knowledge about the multi-armed bandit problem, operating systems design, etc.) are helpful for evaluating whether the instrumental convergence hypothesis is true or false. But you need to make that argument—it’s not obvious, and I happen to think it’s mostly not true. Even if it were true, it’s obviously possible for someone to know everything in the CS grad school curriculum without winding up with a PhD, and if they do, why then wouldn’t we listen to what they have to say?
For another thing, I think that smart careful outsiders with good epistemics and willingness to invest time etc. are very far from helpless in evaluating technical questions in someone else’s field of expertise. For example, I think Zvi has acquitted himself well in his weekly analysis of COVID, despite being neither an epidemiologist nor a doctor. He was consistently saying things that became common knowledge only weeks or months later. More generally, the CDC and WHO are full of people with impeccable credentials, and lesswrong is full of people without medical or public health credentials, but I feel confident saying that lesswrong users have been saying more accurate things about COVID than the CDC or WHO have, throughout the pandemic. (Examples: the fact that handwashing is not very helpful for COVID prevention, but ventilation and masks are very helpful—these were common knowledge on lesswrong loooong before the CDC came around.) As another example, I recall hearing evidence that superforecasters can make forecasts that are about as accurate as domain experts on the topic of that forecast.
Anyway, the quote above seems to be giving me the vibe that if someone (e.g. Eliezer Yudkowsky) has neither an AI PhD nor industry experience, then he’s automatically wrong and stupid, and we don’t need to waste our time listening to what he has to say and evaluating his arguments. I strongly disagree with that vibe, and suspect that the downvotes came from people feeling similarly. If that vibe is not what was intended, then maybe you or TAG can rephrase.
I get your view (thanks for your reply!), and tend to agree now. Even though I didn’t necessarily agree with TAG’s subset proposal, I didn’t see why the comment in question should receive so many downvotes—but your explanation of where those downvotes likely came from makes sense, thanks!
Of course it’s possible for people without relevant experience or qualifications to say correct things. It’s just not likely.
Of course that’s possible too. The point is the probabilities, not the possibilities.
And people without relevant industry experience also disagree.
If the experts disagree, that’s not evidence that the non-experts agree... or know what they are talking about.
No one has relevant experience if there is a huge leap from AI to AGI. “No one” would include Yudkowsky. Also, if there is a huge leap from AI to AGI, then we are not in trouble soon.
Not automatically wrong, just probably. But you already believe that, in the general case... you don’t believe that some unqualified and inexperienced person should take over your health, financial, or legal affairs. I’m not telling you anything you don’t know already.
I feel like everything you’re saying is attacking the problem of
“How do you read somebody’s CV and decide whether or not to trust them?”
This problem is a hard problem, and I agree that if that’s the problem we face, there’s no good solution, and maybe checking their credentials is one of the least bad of the many bad options.
But that’s not the problem we face! There’s another path! We can decide who to trust by listening to the content of what they’re saying, and trying to figure out if it’s correct. Right??
Right. Please start doing so.
Please start noticing that much of EY’s older work doesn’t even make a clear point. (What actually is his theory of consciousness? What actually is his theory of ethics?) Please start noticing that Yudkowsky’s newer work consists of hints at secret wisdom he can’t divulge. Please start noticing the objections to EY’s postings that can be found in the comments to the sequences. Please understand that you can’t judge how correct someone is by ignoring or vilifying their critics—criticism from others, and how they deal with it, is the single most valuable resource in evaluating someone’s epistemological validity. Please understand that you can’t understand someone by reading them in isolation. Please read something other than the sequences. Please stop copying ingroup opinions as a substitute for thinking. Please stop hammering the downvote button as a substitute for thinking.
I was arguing against this comment that you wrote above. Neither your comment nor anything in my replies was about Eliezer in particular, except that I brought him up as an example of someone who happens to lack a PhD and industry experience (IIRC).
It sounds like you read some of Eliezer’s writing, and tried to figure out if his claims were right or wrong or incoherent. Great! That’s the right thing to do.
But it makes me rather confused how you could have written that comment above.
Suppose you had originally said “I disagree that Eliezer is smarter than you, because whenever his writing overlaps with my areas of expertise, I find that he’s wrong or incoherent. Therefore you should be cautious in putting blind faith in his claims about AGI.” I would think that that’s a pretty reasonable thing to say. I mean, it happens not to match my own assessment (I mean the first part of the quote; of course I agree about the “blind faith” part of the quote), but it’s a valuable contribution to the conversation, and I certainly wouldn’t have downvoted if you had said that.
But that’s not what you said in your comment above. You said “The relevant subset of people who are smarter than you is the people who have relevant industry experience or academic qualifications.” That seems to be a very general statement about best practices to figure out what’s true and false, and I vehemently disagree with it, if I’m understanding it right. And maybe I don’t understand it right! After all, it seems that you yourself don’t behave that way.
Figuring out whether someone has good epistemology, from first principles, is much harder than looking at obvious data like qualifications and experience. Most people only have the time to do it in a few select cases, and no one has the ability to do it in every case. For practical purposes, you need to go by qualifications and experience most of the time, and you do.
How correlated are qualifications and good epistemology? Some qualifications are correlated enough that it’s reasonable to trust them. As you point out, if a doctor says I have strep throat, I trust that I have strep, and I trust the doctor’s recommendations on how to cure it. Typically, someone with an M.D. knows enough about such matters to tell me honestly and accurately what’s going on. But if a doctor starts trying to push Ivermectin/Moderna*, I know that could easily be the result of politics, rather than sensible medical judgement, and having an M.D. hardly immunizes one against political mind-killing.
I am not objecting, and I doubt anyone who downvoted you was objecting, to the practice of recognizing that some qualifications correlate strongly with certain types of expertise, and trusting accordingly. However, it is an empirical fact that many scientific claims from highly credentialed scientists did not replicate. In some fields, this was a majority of their supposed contributions. It is a simple fact that the world is teeming with credentials that don’t, actually, provide evidence that their bearer knows anything at all. In such cases, looking to a meaningless resume because it’s easier than checking their actual understanding is the Streetlight Fallacy. It is also worth noting that expertise tends to be quite narrow, and a person can be genuinely excellent in one area and clueless in another. My favorite example of this is Dr. Hayflick, discoverer of the Hayflick Limit, attempting to argue that anti-aging is incoherent. Dr. Hayflick is one of the finest biologists in the world, and his discovery was truly brilliant. Yet his arguments against anti-aging were utterly riddled with logical fallacies. Or Dr. Aumann, who is both a world-class game theorist and an Orthodox Jew.
If we trust academic qualifications without considering how anchored a field or institution is to reality, we risk ruling in both charlatans and genuinely capable people outside the area where they are capable. And if we only trust those credentials, we rule out anyone else who has actually learned about the subject.
*not to say that either of these is necessarily bad, just that tribal politics will tempt Red and Blue doctors respectively to push them regardless of whether or not they make sense.
What are the chances the first AGI created suffers a similar issue, allowing us to defeat it by exploiting that weakness? I predict that if we experience one obvious, high-profile, and terrifying near-miss with a potentially x-class AGI, governance of compute becomes trivial after that, and we’ll be safe for a while.
The first AGI? Very high. The first superintelligence? Not so much.
Sure. But that’s not what you said in that comment that we’re talking about.
If you had said “If you don’t have the time and skills and motivation to figure out what’s true, then a good rule-of-thumb is to defer to people who have relevant industry experience or academic qualifications,” then I would have happily agreed. But that’s not what you said. Or at least, that’s not how I read your original comment.
Then you should probably (no pun intended) have mentioned that. Your original comment had quite a certain vibe.
Tetlock’s work does suggest that superforecasters can outperform people with domain expertise. The ability to synthesize existing information to make predictions about the future is not something that domain experts necessarily have in a way that makes them better than people who are skilled at forecasting.