"…by the problem of induction, it cannot know with full certainty whether it has achieved its goal."
The problem of induction is not relevant here. The real issue is that finitely many bits of information cannot move a Bayesian reasoner from p ∈ (0, 1) to p ∈ {0, 1}.
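To spell the point out, here is a minimal sketch in standard odds-form notation (H and E are illustrative symbols for the hypothesis and the evidence; they do not appear in the original thread):

$$\frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(H)}{P(\lnot H)} \cdot \frac{P(E \mid H)}{P(E \mid \lnot H)}$$

Each observation multiplies the prior odds by a finite likelihood ratio, so after finitely many observations the posterior odds are still finite and nonzero; reaching p = 0 or p = 1 would require a zero or infinite likelihood ratio, which no finite amount of evidence supplies.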
Strictly speaking, the problem of induction is a deeper question concerning the justification of inductive methods; for the sake of clarity I've edited it to "due to the limits of induction", though I find this borders on semantic pedantry...
But the issue applies even to non-inductive knowledge. An AGI tasked with calculating pi to ten decimal places will still eat up the lightcone to check, since it can never be fully certain its computation ran without error: deductive knowledge has limits of its own.
Fair point; I overlooked that aspect. In any case, I’ve removed the (redundant) sentence altogether.