I think this paragraph reflects a very serious confusion that is seen on LW regularly:
How strongly should I believe P? How should I adjust my probability for P in the face of new evidence X? There is a single, exactly correct answer to each such question, and it is provided by Bayes’ Theorem. We may never know the correct answer, but we can plug estimated numbers into the equation and update our beliefs accordingly.
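(A minimal worked instance of the update being described, with numbers invented purely for illustration: if the prior is Pr(P) = 0.3 and the estimated likelihoods are Pr(X | P) = 0.8 and Pr(X | ¬P) = 0.2, then

\[
\Pr(P \mid X) = \frac{\Pr(X \mid P)\,\Pr(P)}{\Pr(X \mid P)\,\Pr(P) + \Pr(X \mid \neg P)\,\Pr(\neg P)} = \frac{0.8 \times 0.3}{0.8 \times 0.3 + 0.2 \times 0.7} \approx 0.63,
\]

so this particular piece of evidence would raise the credence in P from 0.3 to about 0.63.)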
Most of your beliefs are not produced by some process that you can break into its component parts and analyze mathematically so as to assign a numerical probability. Rather, they are produced by opaque black-box circuits in your brain, about whose internal functioning you know little or nothing. Often these circuits function very well and let you form very reliable judgments, but without the ability to reverse-engineer and analyze them in detail, which you presently don’t have, you cannot know what would be the correct probability (by any definition) assigned to their outputs, except for the vague feeling of certainty that they typically produce along with their results.
If instead of relying on your brain’s internal specialized black-box circuits you use some formal calculation procedure to produce probability estimates, then yes, these numbers can make sense. However, the important points are that: (1) the numbers produced this way do not pertain to the outputs of your brain’s opaque circuits, but only to the output of the formal procedure itself, and (2) these opaque circuits, as little as we know about how they actually work, very often produce much more reliable judgments than any formal models we have. Assigning probability numbers produced by explicit formal procedures to beliefs produced by opaque procedures in one’s head is a total fallacy, and discarding the latter in favor of the former makes it impossible to grapple with the real world at all.
I meant to capture some of what you’ve said here in the footnote included above, but let me see if I can get clear on the rest of what you’re saying...
I agree that beliefs are formed by a process that is currently almost entirely opaque to us. But I’m not sure what you mean when you say that “the numbers produced this way do not pertain to the outputs of your brain’s opaque circuits, but only to the output of the formal procedure itself.” Of course that’s true, but my point is that I can try to revise my belief strength to correspond to the outputs of the formal process. Or, less mysteriously, I can make choices on the basis of personal utility estimates and the probabilistic outputs of the formal epistemological process. (That is, I can make some decisions on the basis of a formal decision procedure.)
You write that “Assigning probability numbers produced by explicit formal procedures to beliefs produced by opaque procedures in one’s head is a total fallacy...” But again, I’m not trying to say that I take the output of a formal procedure and then “assign” that value to my beliefs. Rather, I try to adjust my beliefs to the output of the formal procedure.
Again, I’m not trying to say that I use Bayes’ Theorem when guessing which way Starbucks is on the basis of three people’s conflicting testimony. But Bayes’ Theorem can be useful in a great many applications where one has time to use it.
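For concreteness, here is a rough sketch, in Python, of what such a calculation could look like for the Starbucks case if one did take the time. The reliability numbers and the independence assumption are invented for the example, and the helper function is hypothetical; the point is only that once the estimates are supplied, the arithmetic is mechanical.

```python
# Hypothetical illustration: three conflicting reports about whether Starbucks
# is to the left or to the right. The reliabilities are invented for the
# example, and the witnesses are assumed to err independently.

def posterior_left(prior_left, reports, reliabilities):
    """Posterior probability that Starbucks is to the left.

    reports[i] is "left" or "right"; reliabilities[i] is the assumed
    probability that witness i reports the true direction.
    """
    likelihood_if_left = 1.0
    likelihood_if_right = 1.0
    for report, r in zip(reports, reliabilities):
        if report == "left":
            likelihood_if_left *= r        # correct report if the truth is "left"
            likelihood_if_right *= 1 - r   # mistaken report if the truth is "right"
        else:
            likelihood_if_left *= 1 - r
            likelihood_if_right *= r
    numerator = likelihood_if_left * prior_left
    return numerator / (numerator + likelihood_if_right * (1 - prior_left))

# Two witnesses say "left", one says "right"; assumed reliabilities 0.8, 0.7, 0.6.
print(posterior_left(0.5, ["left", "left", "right"], [0.8, 0.7, 0.6]))  # ≈ 0.86
```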
But before I continue, let me check… perhaps I’ve misunderstood you?
It seems like I misunderstood your claim as somewhat stronger than what you actually meant. (Perhaps partly because I missed your footnotes—you might consider making them more conspicuous.)
Still, even now that I (hopefully) understand your position better, I disagree with it. The overwhelming part of our beliefs is based on opaque processes in our heads, and even in cases where we have workable formal models, the ultimate justification for why the model is a reliably accurate description of reality is typically (and arguably always) based on an opaque intuitive judgment. This is why, despite the mathematical elegance of a Bayesian approach, epistemology remains messy and difficult in practice.
Now, you say:
Whenever you use words like “likely” and “probable”, you are doing math. So stop pretending you aren’t doing math, and do the math correctly, according to the proven theorem of how probable P given X is – even if we are always burdened by uncertainty.
But in reality, it isn’t really “you” who’s doing the math—it’s some black-box module in your brain, so that you have access only to the end-product of this procedure. Typically you have no way at all to “do the math correctly,” because the best available formal procedure is likely to be altogether inferior to the ill-understood and opaque but effective mechanisms in your head, and its results will buy you absolutely nothing.
To take a mundane but instructive example, your brain constantly produces beliefs based on its modules for physics calculations, whose internals are completely opaque to you, but whose results are nevertheless highly accurate on average, or otherwise you’d soon injure or kill yourself. (Sometimes of course they are inaccurate and people injure or kill themselves.) In the overwhelming majority of cases, trying to supplement the results of these opaque calculations with some formal procedure is useless, since the relevant physics and physiology are far too complex. Most beliefs of any consequence are analogous to these, and even those that involve a significant role of formal models must in turn involve beliefs about the connection between the models and reality, themselves a product of opaque intuitions.
With this situation in mind, I believe that reducing epistemology to Bayesianism is, at present, at best like reducing chemistry to physics: doable in principle, but altogether impractical.
I’m not sure how much we disagree. Obviously it all comes back to opaque brain processes in the end, and thus epistemology remains messy. I don’t think anything I said in my original post denies this.
As for a black-box module in my brain doing math, yes, that’s part of what I call “me.” What I’m doing there is responding to a common objection to Bayesianism—that it’s all “subjective.” Well yes, it requires subjective probability assessments. So does every method of epistemology. But at least with Bayesian methods you can mathematically model your uncertainty. That’s all I was trying to say there, and I find it hard to believe that you disagree with that point. As far as I can tell, you’re extrapolating what I said far beyond what I intended to communicate with it.
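As one textbook-style illustration of what “mathematically model your uncertainty” can mean in practice (nothing here is claimed in the discussion itself, and all the numbers are invented), one can put a distribution over the quantity one is unsure about and update it:

```python
# A minimal sketch (invented numbers) of modeling uncertainty about an unknown
# frequency p with a Beta distribution, rather than a single point estimate.

alpha, beta = 2.0, 2.0        # subjective prior: Beta(2, 2), "probably middling, but unsure"

successes, failures = 7, 3    # hypothetical observations bearing on p
alpha += successes            # standard conjugate update: Beta(alpha + s, beta + f)
beta += failures

mean = alpha / (alpha + beta)                                      # posterior mean ≈ 0.64
variance = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
print(f"posterior mean {mean:.2f}, posterior sd {variance ** 0.5:.2f}")
```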
As for reducing epistemology to Bayesianism, my footnote said it was impractical, and I also said it’s incomplete without cognitive science, which addresses the fact that, for example, our belief-forming processes remain mostly opaque to this day.
Fair enough. We don’t seem to disagree much then, if at all, when it comes to the correctness of what you wrote.
However, in that case, I would still object to your summary: given the realistic limitations of our current position, we have to use all sorts of messy and questionable procedures to force our opaque and unreliable brains to yield workable and useful knowledge. With this in mind, saying that epistemology is reducible to cognitive science and Bayesian probability, however true in principle, is definitely not true in any practically useful sense. (The situation is actually much worse than in the analogous example of our practical inability to reduce chemistry to physics, since the insight necessary to perform the complete and correct reduction of epistemology, if it ever comes, will itself have to be obtained using the tools of our present messy and unreliable epistemology.)
Therefore, what is missing from your summary is an account of the messy and unreliable parts currently incorporated into your epistemology. This is a supremely relevant issue precisely because those parts are so difficult to analyze and describe accurately: their imperfections interfere with the very process of analyzing them. Another important consideration is that a bold reductionist position may lead one to dismiss too quickly various ideas that, despite their metaphysical and other baggage, can offer a lot of useful insight given our present imperfect position.
The list of “what is missing from [my] summary” is indeed long! Hence, a “summary.”
I recently had an insight about this while taking a shower or something like that: the opaque circuits can get quite good at identifying the saliencies in a situation. For example, often the key to a solution simply pops into my awareness. Other times, the three or so keys or clues I need to arrive at a solution make themselves known to me through some process opaque to me.
These “saliency identification routines” are so reliable that in domains I am expert in, I can even arrive at a high degree of confidence that I have identified all the important considerations on which a decision turns without my having searched deliberately through even a small fraction of the factors and combinations of factors that impinge on the decision.
The observation I just made takes some of the sting out of Vladimir M’s pessimistic observations (most of the brain’s being opaque to introspection, the opaque parts’ not outputting numerical probabilities), because although a typical decision you or I face is impinged on by millions of factors, it usually turns on only two or three.
Of course, you still have to train the opaque circuits (and ensure feedback from reality during training).
I’d like to see a post on this, especially if you have any insights or knowledge on how we can make those black-box circuits work better, or how to best combine formal probability calculations with those black-box circuits.
Well, that would be a very ambitious idea for an article! One angle I think might be worth exploring would be a classification of problems with regard to how the outputs of the black-box circuits (i.e. our intuitions) perform compared to the formal models we have. Clearly, among the problems we face in practice, we can point out great extremes in all four directions: problems can be trivial for both intuition and formal models, or altogether intractable, or easily solvable with formal models but awfully counterintuitive (e.g. the Monty Hall problem), or easily handled by intuition but outside the reach of our present formal models (e.g. many AI-complete problems). I think a systematic classification along these lines might open the way for some general insight about how best to reconcile, and perhaps even combine productively, our intuitions with the best available formal calculations. But this is just a half-baked idea, which may or may not evolve into more systematic thoughts worth posting.
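Since the Monty Hall problem is offered as the paradigm case of “easy for formal models, hard for intuition,” here is a short, standard simulation (not specific to this discussion) that exhibits the formal answer directly:

```python
# A quick check of the Monty Hall example: simulate the game and compare the
# "stay" and "switch" policies.

import random

def win_rate(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)       # door hiding the car
        pick = random.randrange(3)      # contestant's initial pick
        # Host opens some door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += pick == car
    return wins / trials

print("stay:  ", win_rate(switch=False))   # ≈ 1/3
print("switch:", win_rate(switch=True))    # ≈ 2/3
```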