Why the extreme downvotes here? This seems like a good point, at least generally speaking, even if you disagree with what the exact subset should be. Upvoted.
Here’s the quote again:
The relevant subset of people who are smarter than you is the people who have relevant industry experience or academic qualifications.
I think that it’s possible for people without relevant industry experience or academic qualifications to say correct things about AGI risk, and I think it’s possible for people with relevant industry experience or academic qualifications to say stupid things about AGI risk.
For one thing, the latter has to be true, because there are people with relevant industry experience or academic qualifications who vehemently disagree about AGI risk with other people with relevant industry experience or academic qualifications. For example, if Yann LeCun is right about AGI risk then Stuart Russell is utterly dead wrong about AGI risk and vice-versa. Yet both of them have impeccable credentials. So it’s a foregone conclusion that you can have impeccable credentials yet say things that are dead wrong.
For another thing, AGI does not exist today, and therefore it’s far from clear that anyone on earth has “relevant” industry experience. Likewise, I’m pretty confident that you can spend 6 years getting a PhD in AI or ML without hearing literally a single word or thinking a single thought about AGI risk, or indeed AGI in general. You’re welcome to claim that the things everyone learns in CS grad school (e.g. knowledge about the multi-armed bandit problem, operating systems design, etc.) are helpful for evaluating whether the instrumental convergence hypothesis is true or false. But you need to make that argument—it’s not obvious, and I happen to think it’s mostly not true. Even if it were true, it’s obviously possible for someone to know everything in the CS grad school curriculum without winding up with a PhD, and if they do, why then wouldn’t we listen to what they have to say?
For another thing, I think that smart careful outsiders with good epistemics and willingness to invest time etc. are very far from helpless in evaluating technical questions in someone else’s field of expertise. For example, I think Zvi has acquitted himself well in his weekly analysis of COVID, despite being neither an epidemiologist nor a doctor. He was consistently saying things that became common knowledge only weeks or months later. More generally, the CDC and WHO are full of people with impeccable credentials, and LessWrong is full of people without medical or public health credentials, but I feel confident saying that LessWrong users have been saying more accurate things about COVID than the CDC or WHO have, throughout the pandemic. (Examples: the fact that handwashing is not very helpful for COVID prevention, but ventilation and masks are very helpful—these were common knowledge on LessWrong loooong before the CDC came around.) As another example, I recall hearing evidence that superforecasters can make forecasts that are about as accurate as domain experts on the topic of that forecast.
Anyway, the quote above seems to be giving me the vibe that if someone (e.g. Eliezer Yudkowsky) has neither an AI PhD nor industry experience, then he’s automatically wrong and stupid, and we don’t need to waste our time listening to what he has to say and evaluating his arguments. I strongly disagree with that vibe, and suspect that the downvotes came from people feeling similarly. If that vibe is not what was intended, then maybe you or TAG can rephrase.
I get your view (thanks for your reply!), and tend to agree now. Even though I didn’t necessarily agree with TAG’s subset proposal, I didn’t see why the comment in question should receive so many downvotes—but
Anyway, the quote above seems to be giving me the vibe that if someone (e.g. Eliezer Yudkowsky) has neither an AI PhD nor industry experience, then he’s automatically wrong and stupid, and we don’t need to waste our time listening to what he has to say and evaluating his arguments. I strongly disagree with that vibe, and suspect that the downvotes came from people feeling similarly. If that vibe is not what was intended, then maybe you or TAG can rephrase.
makes sense, thanks!
I think that it’s possible for people without relevant industry experience or academic qualifications to say correct things about AGI risk,
Of course it’s possible. It’s just not likely.
and I think it’s possible for people with relevant industry experience or academic qualifications to say stupid things about AGI risk.
Of course that’s possible. The point is the probabilities, not the possibilities.
there are people with relevant industry experience or academic qualifications who vehemently disagree
And people without relevant industry experience also disagree.
If the experts disagree, that’s not evidence that the non-experts agree... or know what they are talking about.
For another thing, AGI does not exist today, and therefore it’s far from clear that anyone on earth has “relevant” industry experience
No one does if there is a huge leap from AI to AGI. “No one” would include Yudkowsky. Also, if there is a huge leap from AI to AGI, then we are not in trouble soon.
Anyway, the quote above seems is giving me the vibe that if someone (e.g. Eliezer Yudkowsky) has neither an AI PhD nor industry experience, then he’s automatically wrong and stupid,
No, just probably. But you already believe that, in the general case... you don’t believe that some unqualified and inexperienced person should take over your health, financial, or legal affairs. I’m not telling you anything you don’t know already.
I feel like everything you’re saying is attacking the problem of
“How do you read somebody’s CV and decide whether or not to trust them?”
This problem is a hard problem, and I agree that if that’s the problem we face, there’s no good solution, and maybe checking their credentials is one of the least bad of the many bad options.
But that’s not the problem we face! There’s another path! We can decide who to trust by listening to the content of what they’re saying, and trying to figure out if it’s correct. Right??
There’s another path! We can decide who to trust by listening to the content of what they’re saying, and trying to figure out if it’s correct. Right??
Right. Please start doing so.
Please start noticing that much of EY’s older work doesn’t even make a clear point. (What actually is his theory of consciousness? What actually is his theory of ethics?) Please start noticing that Yudkowsky’s newer work consists of hints at secret wisdom he can’t divulge. Please start noticing the objections to EY’s postings that can be found in the comments to the sequences. Please understand that you can’t judge how correct someone is by ignoring or vilifying their critics—criticism from others, and how they deal with it, is the single most valuable resource in evaluating someone’s epistemological validity. Please understand that you can’t understand someone by reading them in isolation. Please read something other than the sequences. Please stop copying ingroup opinions as a substitute for thinking. Please stop hammering the downvote button as a substitute for thinking.
I was arguing against this comment that you wrote above. Neither your comment nor anything in my replies was about Eliezer in particular, except that I brought him up as an example of someone who happens to lack a PhD and industry experience (IIRC).
It sounds like you read some of Eliezer’s writing, and tried to figure out if his claims were right or wrong or incoherent. Great! That’s the right thing to do.
But it makes me rather confused about how you could have written that comment above.
Suppose you had originally said “I disagree that Eliezer is smarter than you, because whenever his writing overlaps with my areas of expertise, I find that he’s wrong or incoherent. Therefore you should be cautious in putting blind faith in his claims about AGI.” I would think that that’s a pretty reasonable thing to say. I mean, it happens not to match my own assessment (I mean the first part of the quote; of course I agree about the “blind faith” part of the quote), but it’s a valuable contribution to the conversation, and I certainly wouldn’t have downvoted if you had said that.
But that’s not what you said in your comment above. You said “The relevant subset of people who are smarter than you is the people who have relevant industry experience or academic qualifications.” That seems to be a very general statement about best practices to figure out what’s true and false, and I vehemently disagree with it, if I’m understanding it right. And maybe I don’t understand it right! After all, it seems that you yourself don’t behave that way.
Figuring out whether someone has good epistemology, from first principles, is much harder than looking at obvious data like qualifications and experience. Not many people have the time to do it even in a few select cases, and no one has the ability to do it in every case. For practical purposes, you need to go by qualifications and experience most of the time, and you do.
How correlated are qualifications and good epistemology? Some qualifications are correlated enough that it’s reasonable to trust them. As you point out, if a doctor says I have strep throat, I trust that I have strep, and I trust the doctor’s recommendations on how to cure it. Typically, someone with an M.D. knows enough about such matters to tell me honestly and accurately what’s going on. But if a doctor starts trying to push Ivermectin/Moderna*, I know that could easily be the result of politics, rather than sensible medical judgement, and having an M.D. hardly immunizes one against political mind-killing.
I am not objecting, and I doubt anyone who downvoted you was objecting, to the practice of recognizing that some qualifications correlate strongly with certain types of expertise, and trusting accordingly. However, it is an empirical fact that many scientific claims from highly credentialed scientists did not replicate. In some fields, this was a majority of their supposed contributions. It is a simple fact that the world is teeming with credentials that don’t, actually, provide evidence that their bearer knows anything at all. In such cases, looking to a meaningless resume because it’s easier than checking their actual understanding is the Streetlight Fallacy. It is also worth noting that expertise tends to be quite narrow, and a person can be genuinely excellent in one area and clueless in another. My favorite example of this is Dr. Hayflick, discoverer of the Hayflick Limit, attempting to argue that anti-aging is incoherent. Dr. Hayflick is one of the finest biologists in the world, and his discovery was truly brilliant. Yet his arguments against anti-aging were utterly riddled with logical fallacies. Or Dr. Aumann, who is both a world-class game theorist and an Orthodox Jew.
If we trust academic qualifications without considering how anchored a field or institution is to reality, we risk ruling in both charlatans and genuinely capable people outside the area where they are capable. And if we only trust those credentials, we rule out anyone else who has actually learned about the subject.
*not to say that either of these is necessarily bad, just that tribal politics will tempt Red and Blue doctors respectively to push them regardless of whether or not they make sense.
It is also worth noting that expertise tends to be quite narrow, and a person can be genuinely excellent in one area and clueless in another
What are the chances the first AGI created suffers a similar issue, allowing us to defeat it by exploiting that weakness? I predict that if we experience one obvious, high-profile, and terrifying near-miss with a potentially x-class AGI, governance of compute will become trivial after that, and we’ll be safe for a while.
The first AGI? Very high. The first superintelligence? Not so much.
Sure. But that’s not what you said in that comment that we’re talking about.
If you had said “If you don’t have the time and skills and motivation to figure out what’s true, then a good rule-of-thumb is to defer to people who have relevant industry experience or academic qualifications,” then I would have happily agreed. But that’s not what you said. Or at least, that’s not how I read your original comment.
Then you should probably (no pun intended) have mentioned that. Your original comment had quite a certain vibe.
Tetlock’s work does suggest that superforecasters can outperform people with domain expertise. The ability to synthesize existing information to make predictions about the future is not something that domain experts necessarily have in a way that makes them better than people who are skilled at forecasting.