Is it really a deontological spin? Seems like the rationale is that we value long-term calibration, coherence, and winning on meaningful and important problems, and we don’t want to sacrifice that for short-term calibration/coherence/winning on less meaningful or important problems. Still seems consequentialist to me.