About three quarters of academic decision theorists two-box on Newcomb’s problem, and only about 20% one-box, so this standard seems nuts: https://survey2020.philpeople.org/survey/results/4886?aos=1399
That’s irrelevant. To see why one-boxing is important, we need to grasp the general principle: we can only impose a boundary condition on all computations-which-are-us at once (i.e. we choose how both we and every perfect prediction of us decide, and we and all those predictions necessarily make the same choice). We cannot impose a boundary condition on our brain alone (i.e. we cannot choose only how our brain decides while holding everything else fixed). This is necessarily true.
Without seeing this (and therefore knowing we should one-box), or while remaining unaware of the principle altogether, there is no point in trying to have a “debate” about it.
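To make the constraint concrete, here is a toy sketch in Python (hypothetical payoff numbers, with a `decide` function standing in for “the computation which is us”): the predictor fills the opaque box by running the same decision function we run, so we cannot set our choice independently of the prediction.

```python
def newcomb_payout(decide):
    """decide() returns 'one-box' or 'two-box'.
    The predictor fills the opaque box by running the very same function,
    so the prediction and the actual choice cannot come apart."""
    prediction = decide()                      # the predictor's perfect simulation of us
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    transparent_box = 1_000
    choice = decide()                          # our "actual" decision: the same computation
    return opaque_box if choice == "one-box" else opaque_box + transparent_box

print(newcomb_payout(lambda: "one-box"))   # 1000000
print(newcomb_payout(lambda: "two-box"))   # 1000
```

Under these assumptions the one-boxing policy nets $1,000,000 and the two-boxing policy nets $1,000; there is no way to “choose both boxes while the prediction stays one-box,” because only one computation is ever being run.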