I’d parse the quote as meaning “Believing in something doesn’t make it true”, in which case it’s something that pretty much everyone on this site takes for granted, but that the average person hasn’t necessarily fully internalized. Yudkowsky felt the need to make a similar point near the end of this article, and philosophers as diverse as St. Anselm and William James have built entire epistemologies around the notion that faith is sufficient to justify belief, so obviously it’s a point that needs to be made.
I dunno about St. Anselm, but I found James’s “The Will to Believe” essay reasonable as a matter of practical rationality. The sort of Bayesian epistemology that is Eliezer’s hallmark isn’t exactly fundamental, and neither is the map-territory distinction, so I don’t find it too surprising that, e.g., Kantian epistemology looks a lot more like modern decision theory than like Bayesian probability theory. I suspect a lot of “faith”-like behaviors look much less insane from this deeper perspective. So on one level we have day-to-day instrumental rationality, where faith tends to make sense for the reasons James cites; on a much deeper level, there’s uncertainty about what beliefs really are, except as the parts of your utility function that are meant for cooperation with other agents (ETA: similar to Kant’s categorical imperative). On top of that, there are situations where you have to exercise something like faith: if you happen upon a Turing oracle, you can’t verify whether it’s telling you the truth, but you may still want to use it for hypercomputation. Things like this make me hesitant to judge the merits of epistemological ideas like faith, which I don’t yet understand very well.
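To make the oracle point concrete, here’s a toy Python sketch (every name in it is hypothetical, invented purely for illustration). The asymmetry it shows: a claimed halting oracle’s “halts” verdicts could in principle be confirmed by simulating the program, but no finite amount of simulation can ever confirm or refute a verdict about a program that never halts, so acting on the oracle’s answers requires trust you can’t cash out.

```python
# Illustrative sketch only; all names are made up for this example.
# Programs are modeled as generators that yield once per computation step.

def claimed_oracle(program, arg):
    """Stand-in for a black-box 'Turing oracle'. A real halting oracle
    can't be implemented on a Turing machine; this stub just asserts
    'halts' about everything."""
    return True  # claims: program(arg) halts

def verify_halts(program, arg, budget):
    """We can try to check a 'halts' verdict by simulating for finitely
    many steps. Returns True if the program halted within the budget,
    else None (inconclusive: maybe it halts later, maybe never)."""
    steps = 0
    for _ in program(arg):
        steps += 1
        if steps > budget:
            return None  # inconclusive, not a refutation
    return True

def looper(_):
    while True:
        yield  # a program that never halts

# The oracle claims looper halts. Every finite check comes back
# inconclusive, so there is no point at which we could catch it lying.
print(claimed_oracle(looper, None))        # True ("halts"), unverifiable
print(verify_halts(looper, None, 10_000))  # None: still inconclusive
```

The point isn’t the code itself but the shape of the situation: any finite verification procedure leaves the oracle’s negative (and false positive) verdicts forever unchecked, which is about as close to a formal model of “faith” as I can get.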
This sort of taxonomy seems to deserve a more thorough treatment in a separate post.