Figuring out whether someone has good epistemology, from first principles, is much harder than looking at obvious data like qualifications and experience. Few people have the time to do it in more than a few select cases, and no one has the ability to do it in every case. For practical purposes, you need to go by qualifications and experience most of the time, and you do.
How correlated are qualifications and good epistemology? Some qualifications are correlated enough that it’s reasonable to trust them. As you point out, if a doctor says I have strep throat, I trust that I have strep, and I trust the doctor’s recommendations on how to cure it. Typically, someone with an M.D. knows enough about such matters to tell me honestly and accurately what’s going on. But if a doctor starts trying to push Ivermectin/Moderna*, I know that could easily be the result of politics, rather than sensible medical judgement, and having an M.D. hardly immunizes one against political mind-killing.
I am not objecting, and I doubt anyone who downvoted you was objecting, to the practice of recognizing that some qualifications correlate strongly with certain types of expertise, and trusting accordingly. However, it is an empirical fact that many scientific claims from highly credentialed scientists did not replicate. In some fields, this was a majority of their supposed contributions. The world is teeming with credentials that don’t, actually, provide evidence that their bearer knows anything at all. In such cases, looking to a meaningless resume because it’s easier than checking their actual understanding is the Streetlight Fallacy. It is also worth noting that expertise tends to be quite narrow, and a person can be genuinely excellent in one area and clueless in another. My favorite example of this is Dr. Hayflick, discoverer of the Hayflick Limit, attempting to argue that anti-aging is incoherent. Dr. Hayflick is one of the finest biologists in the world, and his discovery was truly brilliant. Yet his arguments against anti-aging were utterly riddled with logical fallacies. Or Dr. Aumann, who is both a world-class game theorist and an Orthodox Jew.
If we trust academic qualifications without considering how anchored a field or institution is to reality, we risk ruling in both charlatans and genuinely capable people outside the area where they are capable. And if we only trust those credentials, we rule out anyone else who has actually learned about the subject.
*not to say that either of these is necessarily bad, just that tribal politics will tempt Red and Blue doctors respectively to push them regardless of whether or not they make sense.
It is also worth noting that expertise tends to be quite narrow, and a person can be genuinely excellent in one area and clueless in another.
What are the chances the first AGI created suffers a similar issue, allowing us to defeat it by exploiting that weakness? I predict if we experience one obvious, high-profile, and terrifying near-miss with a potentially x-class AGI, governance of compute becomes trivial after that, and we’ll be safe for a while.
If you had said “If you don’t have the time and skills and motivation to figure out what’s true, then a good rule-of-thumb is to defer to people who have relevant industry experience or academic qualifications,” then I would have happily agreed. But that’s not what you said. Or at least, that’s not how I read your original comment.
The first AGI? Very high. The first superintelligence? Not so much.
Sure. But that’s not what you said in that comment that we’re talking about.