I’m thinking of this as “updating on whether I actually occupy the epistemic state that I think I occupy”, which one hopes would be less of a problem for a superintelligence than for a human.
It reminds me of Yvain’s Confidence Levels Inside and Outside an Argument.
I expect it to be a problem—probably as serious—for superintelligence. The universe will always be bigger and more complex than any model of it, and I’m pretty sure a mind can’t fully model itself.
Superintelligences will presumably have epistemic problems we can’t understand, and probably better tools for working on them, but unless I’m missing something, there’s no way to make the problem go away.
Yeah, but at least it shouldn’t have all the subconscious signaling problems that compromise conscious reasoning in humans. At least, I hope nobody would be dumb enough to build a superintelligence that deceives itself on account of social adaptations that don’t update when the context changes...