Yeah, credentials are a poor way of judging things. But that first paragraph doesn’t show remotely what you think it does.
Some of David Deutsch’s credentials that establish him as a credible authority on quantum mechanics: He is a physics professor at a leading university, a Fellow of the Royal Society, is widely recognized as a founder of the field of quantum computation, and has won some big-name prizes awarded to eminent scientists.
Your credentials as a credible authority on quantum mechanics: You assure us that you’ve talked a lot with David Deutsch and learned a lot from him about quantum mechanics.
This is not how credentials work. Leaving aside what useful information (if any) they impart: when it comes to quantum mechanics, David Deutsch has credentials and you don’t.
It’s not clear to me what argument you’re actually making in that first paragraph. But it seems to begin with the claim that you have good credentials when it comes to quantum mechanics for the reasons you recite there, and that’s flatly untrue.
They are not, though. This is standard “what LW calls ‘Bayes’ and what I call ‘reasoning under uncertainty’”: you condition on things associated with the outcome, because those things carry information. Let the outcome O be “has a clue” and the thing C be “has a credential.” Since p(O | C) > p(O), your credence in O should be computed after conditioning on C, on pain of irrationality. Specifically, the kind of irrationality where you leave information on the table.
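The conditioning claim can be sketched numerically. This is a toy model; all the probabilities are made-up numbers for illustration, not estimates of anything:

```python
# Toy Bayes update: O = "has a clue about QM", C = "has credentials".
# All numbers below are invented for illustration.
p_O = 0.05             # prior: base rate of having a clue
p_C_given_O = 0.60     # p(credentialed | clueful)
p_C_given_notO = 0.02  # p(credentialed | clueless)

# p(C) by total probability, then Bayes' rule for p(O | C)
p_C = p_C_given_O * p_O + p_C_given_notO * (1 - p_O)
p_O_given_C = p_C_given_O * p_O / p_C

print(f"p(O)     = {p_O:.3f}")          # 0.050
print(f"p(O | C) = {p_O_given_C:.3f}")  # ~0.612: ignoring C discards information
```

The exact numbers don’t matter; the point is only that whenever p(C | O) differs between clueful and clueless people, conditioning on C moves your credence, and refusing to condition throws that movement away.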
You might say “oh, I heard about how argument screens off authority.” That is actually not true, though, even by “LW Bayesian” lights, for two reasons: you can never be certain you evaluated the argument correctly (or that the presumed authority stated it correctly), and it assumes there are no paths from C to O except through the argument, which isn’t true.
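The screening-off point can also be checked with a toy model: add a variable A (“the argument is actually right”) and a noisy verdict E (“the argument looks right to you”). All the numbers below are made up; the only point is that because E is a noisy read on A, conditioning on E does not screen off C:

```python
# Toy model: does "argument screens off authority" survive a noisy
# reading of the argument? Variables (all 0/1):
#   O = has a clue, C = has credentials,
#   A = argument actually right, E = argument looks right to you.
# All probabilities below are invented for illustration.

def joint(O, C, A, E):
    p = 0.05 if O else 0.95                                     # p(O)
    p *= (0.60 if C else 0.40) if O else (0.02 if C else 0.98)  # p(C | O)
    p *= (0.90 if A else 0.10) if O else (0.30 if A else 0.70)  # p(A | O)
    p *= (0.80 if E else 0.20) if A else (0.20 if E else 0.80)  # p(E | A): noisy check
    return p

def p_O_given(e, c):
    """p(O = 1 | E = e, C = c), by enumerating the joint distribution."""
    num = sum(joint(1, c, a, e) for a in (0, 1))
    den = sum(joint(o, c, a, e) for o in (0, 1) for a in (0, 1))
    return num / den

# Even after conditioning on "the argument looks right", credentials
# still shift the posterior, because E is only evidence about A.
print(p_O_given(1, 1))  # ~0.75
print(p_O_given(1, 0))  # ~0.04
```

If the verdict E were a perfect copy of A, the two printed numbers would converge; with any noise in your evaluation, C keeps carrying information about O past E.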
Conditioning on everything that carries information is foundational to reasoning under uncertainty; the more informative the thing, the worse it is not to condition on it. This is not some novel, crazy thing I am proposing, it is bog-standard.
The way credentialism seems to be treated in practice on LW is a reflexive rejection of “experts” writ large, except for an explicitly enumerated subset (perhaps the ones EY or other “recognized community thought leaders” liked).
This is a part of community DNA, starting with EY’s stuff, and Luke’s “philosophy is a diseased discipline.”
That is crazy.
Actually, I somewhat agree, but being an agreeable sort of chap I’m willing to concede things arguendo when there’s no compelling reason to do otherwise :-), which is why I said “Yeah, credentials are a poor way of judging things” rather than hedging more.
More precisely: I think credentials very much can give you useful information, and I agree with you that argument does not perfectly screen off authority. On the other hand, I agree with prevailing LW culture (perhaps with you too) that credentials typically give you very imperfect information and that argument does somewhat screen off authority. And I suggest that how much credentials tell you may vary a great deal by discipline and by type of credentials. Example: the Pope has, by definition, excellent credentials of a certain kind. But I don’t consider him an authority on whether any sort of gods exist, because I think the process that gave him the credentials he has isn’t sufficiently responsive to that question. (On the other hand, that process is highly responsive to what Catholic doctrine is, and I would consider the Pope a very good authority on that topic even if he didn’t have the ability to control that doctrine as well as report it.)
It seems to me that e.g. physics has norms that tie its credentials pretty well (though not perfectly) to actual understanding and knowledge; that philosophy doesn’t do this so well; that theology does it worse; that homeopathy does it worse still. (This isn’t just about the moral or cognitive excellence of the disciplines in question; it’s also that it’s harder to tell whether someone’s any good or not in some fields than in others.)
I guess the way I would slice disciplines is like this:
(a) Makes empirical claims (credences change with evidence, or falsifiable, or [however you want to define this]), or has universally agreed rules for telling good from bad (mathematics, theoretical parts of fields, etc.)
(b) Does not make empirical claims, and has no universally agreed rules for telling good from bad.
Some philosophy is in (a) and some in (b). Most statistics is in (a), for example.
Re: (a), most folks would need a lot of study to evaluate claims, typically at the graduate level. So the best thing to do is get the lay of the land by asking experts. Experts may disagree, of course, which is valuable information.
Re: (b), why are we talking about (b) at all?