“If Lincoln were not assassinated, he would not have been impeached” is a probabilistic statement that is not at all about THE Lincoln. It’s a reference class analysis of leaders who did not succumb to premature death and whose leadership, economic, and other metrics were similar to Lincoln’s. There is no “counterfactual” there in any interesting sense. It is not about the minute details of avoiding the assassination. If you state the apparent counterfactual more precisely, it would be something like:
There is a 90% probability of a ruler with [list of characteristics matching Lincoln, according to some criteria] serving out his term.
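(For concreteness, a minimal sketch of what such a reference class computation might look like; the records and the matching criterion below are entirely made up:)

```python
# Reference class estimate: among leaders who match Lincoln on the
# chosen criteria and did not die prematurely, what fraction served
# out their term without being impeached? All records are invented.
leaders = [
    # (matches_criteria, died_prematurely, impeached)
    (True,  False, False),
    (True,  False, False),
    (True,  False, True),
    (False, False, False),
    # ... many more historical records
]

reference_class = [x for x in leaders if x[0] and not x[1]]
p = sum(1 for x in reference_class if not x[2]) / len(reference_class)
print(f"P(serves out term unimpeached | reference class) = {p:.2f}")
```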
So there is no issue with “If 0=1...” here, unlike with the other example, “If the modularity theorem were false”, which implies changes in the very foundations of mathematics, though one can also argue for the reference class approach there.
I feel like this is practically a frequentist/Bayesian disagreement :D It seems “obvious” to me that “If Lincoln were not assassinated, he would not have been impeached” can be about the real Lincoln as much as me saying “Lincoln had a beard” is, because both are statements made using my model of the world about this thing I label Lincoln. No reference class necessary.
I am not sure labels help here. I’m simply pointing out that logical counterfactuals applied to the “real Lincoln” lead to the sort of issues MIRI is facing right now when trying to make progress on theoretical AI alignment. The reference class approach removes those difficulties, but it is then hard to apply to “mathematical facts”, like the probability of the 100...0th digit of pi being 0 or, to quote the OP, “If the Modularity Theorem were false...”, and the prevailing MIRI philosophy does not allow treating logical uncertainty as environmental.
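(To make the pi example concrete: treating the logical uncertainty as environmental would mean something like estimating the digit from a reference class of pi’s already-known digits; a quick sketch using the mpmath library:)

```python
# Estimate the chance that a far-out, uncomputed digit of pi is 0
# from the empirical frequency of 0 among its known digits, i.e.
# a reference class of digits we can actually compute.
from mpmath import mp, nstr

mp.dps = 10000                    # work with 10,000 decimal places
pi = +mp.pi                       # force evaluation at this precision
digits = nstr(pi, 10000).replace(".", "")
p_zero = digits.count("0") / len(digits)
print(f"Empirical P(digit of pi = 0) = {p_zero:.3f}")  # close to 0.1
```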
Sure. In the case of Lincoln, I would say the problem is solved by models even as clean as Pearlian causal networks. But in math, there’s no principled causal-network model of theorems to support counterfactual reasoning via the causal calculus.
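(For readers who don’t know Pearl: his counterfactuals are evaluated in three steps, abduction, action, and prediction, over a structural causal model. Here is a toy sketch for the Lincoln case, with entirely invented variables and mechanisms:)

```python
# Toy structural causal model for "If Lincoln were not assassinated,
# he would not have been impeached". The variables and equations are
# invented purely to illustrate Pearl's three-step procedure.

def assassinated(u_plot):
    return u_plot                      # structural equation for A

def impeached(a, u_congress):
    # A dead president cannot be impeached; otherwise impeachment
    # depends on a hypothetical hostile-Congress background factor.
    return (not a) and u_congress

# Step 1, abduction: infer exogenous settings consistent with the
# actual world (assassinated, not impeached). In general this gives
# a posterior over U; here we simply fix a consistent assignment.
u_plot, u_congress = True, False

# Step 2, action: intervene with do(A = False), overriding the
# structural equation for assassination.
a_cf = False

# Step 3, prediction: propagate through the remaining equations.
print(impeached(a_cf, u_congress))     # False: he is not impeached
```

The counterfactual verdict hinges on the abducted value of u_congress, which is exactly where the real historical judgment would enter.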
Of course, I more or less just think that we take an unprincipled, causality-like view of math when we reason about mathematical counterfactuals, but it’s not clear that this is any help to MIRI in understanding proof-based AI.
I don’t think I am following your argument. I am not sure what Pearl’s causal networks are and how they help here, so maybe I need to read up on it.