“A casual stroll through the lunatic asylum shows that faith does not prove anything.”
Friedrich Nietzsche
That would seem to be an odd notion of “faith”; is the translation untrue to the original or is Nietzsche just being typically provocative? (I also personally don’t see how the quote is at all profound or interesting but that’s a separate issue and more a matter of taste.)
I apologize for practicing inferior epistemic hygiene. Thank you for indirectly bringing this to my attention. I knew that the quote was commonly attributed to Nietzsche, but I had never seen the original source. It would seem to be a rephrasing of this quote from The Antichrist:
The fact that faith, under certain circumstances, may work for blessedness, but that this blessedness produced by an idée fixe by no means makes the idea itself true, and the fact that faith actually moves no mountains, but instead raises them up where there were none before: all this is made sufficiently clear by a walk through a lunatic asylum.
Ah, that sounds a bit more like the Nietzsche I know and kinda like! Thanks for digging up the more accurate quote.
I’d parse the quote as meaning “Believing in something doesn’t make it true”, in which case it’s something that pretty much everyone on this site takes for granted, but that the average person hasn’t necessarily fully internalized. Yudkowsky felt the need to make a similar point near the end of this article, and philosophers as diverse as St. Anselm and William James have built entire epistemologies around the notion that faith is sufficient to justify belief, so obviously it’s a point that needs to be made.
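That parse can be made concrete with a toy simulation (my own illustration, not anything from the thread): an agent’s confident prior about a coin’s bias has no effect on the observed frequencies, and honest Bayesian updating drags the belief toward the territory rather than the territory toward the belief. All the numbers and names here are invented for the sketch.

```python
import random

random.seed(0)

TRUE_P = 0.3      # actual bias of the coin (the territory)
believed_p = 0.9  # the agent's confident "faith" (the map)

# The coin does not consult the agent's belief.
flips = [random.random() < TRUE_P for _ in range(10_000)]
freq = sum(flips) / len(flips)  # empirical frequency tracks TRUE_P, not believed_p

# Beta(a, b) posterior after observing the flips, starting from a prior
# whose mean (a / (a + b) = 0.9) encodes the agent's strong initial faith.
a, b = 9.0, 1.0
a += sum(flips)
b += len(flips) - sum(flips)
posterior_mean = a / (a + b)  # pulled to roughly 0.3 by the evidence
```

With 10,000 flips the empirical frequency and the posterior mean both land near 0.3 regardless of the 0.9 prior: the belief didn’t move the mountain, the mountain moved the belief.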
I dunno about St. Anselm but I found James’s “The Will to Believe” essay reasonable as a matter of practical rationality. The sort of Bayesian epistemology that is Eliezer’s hallmark isn’t exactly fundamental, and the map-territory distinction isn’t either, so I don’t find it too surprising that e.g. Kantian epistemology looks a lot more like modern decision theory than it does Bayesian probability theory. I suspect a lot of “faith”-like behaviors don’t look nearly as insane when seen from this deeper perspective. So on one level we have day-to-day instrumental rationality where faith tends to make sense for the reasons James cites, and on a much deeper level there’s uncertainty about what beliefs really are except as the parts of your utility function that are meant for cooperation with other agents (ETA: similar to Kant’s categorical imperative). On top of that there are situations where you have to have something like faith, e.g. if you happen upon a Turing oracle and thus can’t verify if it’s telling you the truth or not but still want to do hypercomputation. Things like this make me hesitant to judge the merits of epistemological ideas like faith which I don’t yet understand very well.
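The Turing-oracle point rests on a concrete asymmetry: an oracle’s claim that a program halts can in principle be confirmed by running the program long enough, but a claim that it never halts can’t be settled by any finite run, so at some point you either trust the oracle or forgo the hypercomputation. A toy sketch of that asymmetry (the step-based program model and all names are my own invention, not a real hypercomputation API):

```python
class Countdown:
    """Toy 'program': halts after n steps; n=None models looping forever."""

    def __init__(self, n):
        self.n = n

    def step(self):
        """Advance one step; return True once the program has halted."""
        if self.n is None:
            return False
        self.n -= 1
        return self.n <= 0


def check_oracle_claim(program, claims_halts, budget):
    """Try to verify an oracle's claim about `program` within `budget` steps.

    Returns True (claim confirmed), False (claim refuted), or None
    (no finite run of this length settles it).
    """
    for _ in range(budget):
        if program.step():
            # The program halted: a "halts" claim is confirmed,
            # a "never halts" claim is refuted.
            return claims_halts
    # Budget exhausted: "halts" might still be true later, and
    # "never halts" can never be positively confirmed this way.
    return None
```

Positive claims are checkable, negative ones only ever refutable: `check_oracle_claim(Countdown(5), claims_halts=True, budget=100)` confirms, while a `claims_halts=False` claim about a non-halting program stays `None` for every budget, which is where the faith-like residue lives.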
This sort of taxonomy seems to deserve a more thorough treatment in a separate post.