If the original person effectively assigns 0 or 1 “non-updateable probability” to some belief, or honestly doesn’t believe in objective reality, or believes in “subjective truth” of some kind, CEV is not necessarily going to “cure” them of it—especially not by force.
I think you’re skipping between levels hereabouts. CEV, the theoretical construct, might consider people so modified, even if a FAI based on CEV would not modify them. CEV is our values if we were better, but does not necessitate us actually getting better.
In this thread I have always used CEV in the sense of an AI implementing CEV. (In some places you'll see descriptions of what I don't believe is the standard interpretation of how such an AI would behave—those are where gRR suggests such behaviors and I reply.)