Would your argument still hold if the world were partially predictable from the inside at a coarse-grained level?
I’m specifically criticizing something I’ve seen in formal alignment formalisms. Coarse-graining is not relevant here; only formalisms that need to find the AI in a world sim from scratch have the problem I’m describing. If you can find the AI in a finite sim that just stores 4D boundary conditions at the edges in all directions, then you don’t have this problem.
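To make the contrast concrete, here is a minimal toy sketch (my illustration, not part of the original argument): a 1-D cellular automaton stands in for the 4-D sim, and the names `WorldSpec`, `agent_span`, and `locate_from_scratch` are hypothetical. The point is only that a spec which stores the agent’s location alongside the boundary data needs no search, while a formalism that must find the agent from scratch is reduced to ambiguous predicate-matching.

```python
# Toy sketch (illustrative assumptions throughout): a 1-D cellular automaton
# stands in for the 4-D sim, and WorldSpec / locate_from_scratch are
# hypothetical names, not taken from any actual alignment formalism.
from dataclasses import dataclass


def step(cells: list[int]) -> list[int]:
    """One update of rule 150 (each cell XORed with its neighbours),
    with fixed zero boundary conditions at both edges."""
    n = len(cells)
    return [
        (cells[i - 1] if i > 0 else 0)        # left boundary condition
        ^ cells[i]
        ^ (cells[i + 1] if i < n - 1 else 0)  # right boundary condition
        for i in range(n)
    ]


@dataclass
class WorldSpec:
    initial: list[int]           # boundary data: the state at the time edge
    agent_span: tuple[int, int]  # the agent's location, stored in the spec


def locate_from_scratch(cells, looks_like_agent):
    """The problematic move: scan every subregion against a predicate.
    Many spans can match, so 'which one is the AI?' is underdetermined."""
    n = len(cells)
    return [(i, j) for i in range(n) for j in range(i + 1, n + 1)
            if looks_like_agent(cells[i:j])]


world = WorldSpec(initial=[0, 1, 1, 0, 1, 0, 0, 1], agent_span=(2, 5))

state = world.initial
for _ in range(3):
    state = step(state)

# With the span stored in the spec, reading the agent out is trivial:
print("agent cells:", state[world.agent_span[0]:world.agent_span[1]])

# Without it, you must pick a predicate and live with the ambiguity:
print("spans found from scratch:",
      locate_from_scratch(state, lambda s: sum(s) >= 2))
```

The design choice the sketch isolates: the second print typically returns many candidate spans, which is exactly the from-scratch localization problem; the first needs no predicate at all because the spec carries the answer.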