For your first half-question, “A Technical Explanation of Technical Explanation” [Edit: added link] sums up the big deal: Bayes’ Theorem is part of what actually underpins how to make a map reflect the territory (given infinite compute); it is a necessary component. Compared with the other necessary components (e.g. logic, arithmetic, the rest of basic probability), I would conjecture that Bayes is only special in that it is ‘often’ the last piece of the puzzle to be assembled in someone’s mind, and thus takes on psychological significance.
For the second half-question, I think using Bayes in life is more about understanding just how much priors matter than about actually crunching the numbers.
As an example, commenters on LessWrong will sometimes refer to ‘Outside View’ vs ‘Inside View’.
A quick example to summarise roughly what these mean: if predicting how long ‘Project X’ will take, the Outside View goes ‘well, project A took 3 weeks, project B took 4, and project C took 3, and this project is similar, so 3-4 weeks’, whereas the Inside View goes ‘well, to do Project X, I need to do subprojects Y1, Y2, Y3 and Y4, and these should take me 3, 4, 5 and 4 days respectively, so 16 days = 2 and a bit weeks’. The Inside View is susceptible to the planning fallacy, and so on.
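As a toy illustration of the two estimates, here is a short Python sketch; all numbers are simply the ones from the example above, nothing more:

```python
# Toy comparison of the two estimation styles from the example above.
# All numbers are illustrative, taken from the paragraph, not real data.

# Outside View: look at how long similar past projects took.
past_project_weeks = [3, 4, 3]          # projects A, B, C
outside_estimate = sum(past_project_weeks) / len(past_project_weeks)
print(f"Outside View: about {outside_estimate:.1f} weeks")      # ~3.3 weeks

# Inside View: break the project into subprojects and add them up.
subproject_days = [3, 4, 5, 4]          # Y1..Y4
inside_estimate_weeks = sum(subproject_days) / 7
print(f"Inside View: about {inside_estimate_weeks:.1f} weeks")  # ~2.3 weeks

# The gap between the two is where the planning fallacy lives:
# the Inside View quietly assumes nothing unexpected happens.
```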
General perspective: Outside View Good, Inside View Can Go Very Wrong.
You’ve probably guessed the punchline; this is really about Bayes’ Theorem. The Outside View goes “I’m going to use previous projects as my prior (technically this forms a prior distribution of estimated project lengths), and then just go with that and try to avoid updating very much, because I have a second prior that says ‘all the details of similar projects didn’t matter in the past, so I shouldn’t pay too much attention to them now’”, whereas the Inside View Going Very Wrong is what happens when you throw out your priors; you can end up badly calibrated very quickly.
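For concreteness, here is a minimal sketch of that Bayesian reading. The prior and likelihood numbers below are invented for illustration only: the prior encodes the Outside View (“similar projects took 3-4 weeks”), the likelihood stands in for how strongly the inside-view breakdown should be taken as evidence.

```python
# A minimal sketch of the Bayesian reading of Outside vs Inside View.
# The prior and likelihood values below are made up for illustration.

# Outside View as a prior: past similar projects suggest 3-4 weeks.
prior = {2: 0.1, 3: 0.4, 4: 0.4, 5: 0.1}         # P(project takes N weeks)

# Inside View as evidence: the subproject breakdown points at ~2.3 weeks,
# but such breakdowns are historically optimistic, so the likelihood of
# producing this breakdown is not sharply peaked at 2 weeks.
likelihood = {2: 0.35, 3: 0.35, 4: 0.2, 5: 0.1}  # P(breakdown | N weeks)

# Bayes' Theorem: posterior is proportional to prior * likelihood.
unnormalized = {n: prior[n] * likelihood[n] for n in prior}
total = sum(unnormalized.values())
posterior = {n: p / total for n, p in unnormalized.items()}

for n, p in sorted(posterior.items()):
    print(f"P(takes {n} weeks | breakdown) = {p:.2f}")

# The prior keeps the posterior anchored near 3-4 weeks; discarding the
# prior and trusting the breakdown alone is the "Inside View Going Very
# Wrong" failure mode.
```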
I’ve skimmed over A Technical Explanation of Technical Explanation (you can make links and do other stuff by selecting the text you want to edit, as if you want to copy it; if your browser is compatible, a toolbar should appear). I think that’s the first time in my life I’ve found that I need to know more math to understand a non-mathematical text. The text is not about Bayes’ Theorem, but it is about the application of probability theory to reasoning, which is relevant to my question. As far as I understand, Yudkowsky writes about the same algorithm that Vladimir_Nesov describes in his answer to my question. Some nice properties of the algorithm are proved, but not very rigorously. I don’t know how to fix that, which is not very surprising, since I know very little about statistics. In fact, I am now half-convinced to take a course or something like that. Thank you for that.
As for the other part of your answer, it actually makes me even more confused. You are saying “using Bayes in life is more about understanding just how much priors matter than about actually crunching the numbers”. To me this sounds similar to “using steel in life is more about understanding just how much the whole can be greater than the sum of its parts than about actually making things from some metal”. I mean, there is nothing inherently wrong with using a concept as a metaphor and/or inspiration. But it can sometimes cause miscommunication. And I am under the impression that some people here (not only me) talk about Bayes’ Theorem in a very literal sense.
There are lots of ways of making the same point without bringing in Bayes.