You can also model the agent as failing to learn that its “unpredictable randomness” isn’t actually unpredictable. Even so, the simple analysis holds: agents that can’t learn a true fact will fail in cases where that fact matters.