Eliezer,
I also think that considering the particular topics is helpful here. In the math book case, you were pretty confident the statement was wrong once you discovered a clear formal proof of the error, because essentially there's nothing more to be said.
On the interpretation of quantum mechanics, since you believe we have almost all the relevant data we’ll ever have (save for observed superpositions of larger and larger objects) and the full criteria to decide between these hypotheses given that information, you again think that disagreement is unfounded.
(I suggest you make an exception in your analysis for Scott Aaronson et al., whose view, as I understand it, is that progress in his research matters more than holding the Best Justified Interpretation at all times, so long as the different interpretations have no consequences for that research; he therefore uses whichever one seems most helpful at the moment. This is more like asking a different valid question than getting the wrong answer to a question.)
But on the prospects for General AI in the next century, well, there are all sorts of data you don't yet have that would greatly help, and others might have them; and updating according to Bayes on that data is intractable without significant assumptions. I think that explains your willingness to hear out Daniel Dennett (albeit with some skepticism).
Finally, I think that when it comes to religion you may be implicitly using the same second-order evaluation I’ve come around to. I still ascribe a nonzero chance to my old religion being true—I didn’t find a knockdown logical flaw or something completely impossible in my experience of the world. I just came to the conclusion I didn’t have a specific reason to believe it above others.
However, I'd refuse to give any such religion serious consideration from now on unless it became more than 50% probable to my current self, because taking up a serious religion changes one's very practice of rationality by making doubt a disvalue. Spending too much thought on a religion can get you stuck there, and it was hard enough leaving the first time around. That's a second-order phenomenon different from the others: entertaining the Copenhagen interpretation as a hypothesis doesn't strongly prevent you from discarding it later.
My best chance of finding the truth lies in the space of nonreligious answers rather than within any particular religion, so I can't let myself get drawn in. So I do form an object-level bias against religion (akin to your outright dismissal of Aumann), but it's one I think is justified at the meta-level.