I’m not sure what (2) is getting at here. If a simulator noticed that it was being asked to simulate an (equally smart or smarter) simulator, then “simulate even better” seems like a fixed point. In order for it to begin behaving like an unaligned agentic AGI (without e.g. being prompted to take optimal actions a la “Optimality is the Tiger and Agents are its Teeth”), it first needs to believe that lim_{n→∞} GPT-n is an agent, doesn’t it? Otherwise this simulating-fixed-point seems like it might render this self-awareness benign.