omnizoid’s post was a poor choice of example for when not to take EY’s side. He two-boxes on Newcomb’s problem, and any confident statements he makes about rationality or decision theory should, for that reason, be ignored entirely.
Of course, you’re going meta without claiming that he’s object-level right, but I’m not sure that using an obviously wrong post to take his side at the meta level is a good idea.
Digging into local issues for their own sake keeps arguments sane. There are norms that oppose this, imposing preconditions of context. A norm is weakened when it is not fed by enactment of the pattern it endorses.
So it’s precisely in situations where contextualizing norms would oppose engagement with discussion of local validity that such discussion promotes its own kind. Framing this activity as “taking a side” seems like the opposite of what’s going on.
About three quarters of academic decision theorists two-box on Newcomb’s problem, so this standard seems nuts. Only 20% one-box. https://survey2020.philpeople.org/survey/results/4886?aos=1399
That’s irrelevant. To see why one-boxing is important, we need to realize the general principle—that we can only impose a boundary condition on all computations-which-are-us (i.e. we can choose how both us and all perfect predictions of us choose, and both us and all the predictions have to choose the same). We can’t impose a boundary condition only on our brain (i.e. we can’t only choose how our brain decides while keeping everything else the same). This is necessarily true.
Without seeing this (and therefore knowing we should one-box), or even while being unaware of this principle altogether, there is no point in trying to have a “debate” about it.
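The principle above can be made concrete with a toy simulation. In the sketch below (a hypothetical illustration, not anyone’s actual formalism), the predictor is assumed to be perfect by construction: it simply runs the same decision function the agent does. Because the agent and the prediction are the same computation, imposing a choice on one necessarily imposes it on both, and one-boxing comes out ahead.

```python
# Toy Newcomb's problem: the predictor runs the *same* decision
# function the agent does, so the agent's choice and the prediction
# cannot differ -- the "boundary condition" binds both at once.

def payoff(decision_fn):
    # Perfect prediction: the predictor just executes the agent's code.
    prediction = decision_fn()
    box_b = 1_000_000 if prediction == "one-box" else 0
    choice = decision_fn()  # the agent's actual choice (same computation)
    if choice == "one-box":
        return box_b
    return box_b + 1_000  # two-boxing adds the transparent $1,000

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

The two-boxer’s “dominance” reasoning assumes the box contents can be held fixed while only the choice varies; the sketch shows that when the prediction is the same computation as the choice, no such counterfactual exists.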