The connection to omega isn’t so clear. It looks like it could just be dressing up some basic intuitions about computability and approximation. It reads like a way of smuggling in mysticism, and that is misleading not because it is incoherent but because it is superfluous.
I thought about it some more and remembered one connection. I’ll post it to the discussion section if it still makes sense on reflection. The basic idea is that Agent X can manipulate Agent Y’s prior but not its preferences, so X gives Y a perverse prior that forces Y to optimize for X’s preferences instead of its own. Running the construction in reverse (holding the prior fixed and perverting the preferences instead) gives us a notion of an objectively false preference.
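To make the mechanism concrete, here is a minimal sketch, assuming a toy setup in which Agent Y is an expected-utility maximizer with a fixed utility table and Agent X controls only Y’s prior over world states. Everything in it (the names `U`, `choose`, `perverse_prior`, and the crude random search) is my own illustration, not anything from the original post:

```python
import numpy as np

# Toy setup (hypothetical, for illustration only): Agent Y maximizes
# expected utility under a prior it does not control; Agent X controls
# only that prior, never Y's utility table.

U = np.array([      # Y's fixed utilities: rows = Y's actions, cols = world states
    [1.0, 0.0],     # action 0 pays off only in state 0
    [0.0, 1.0],     # action 1 pays off only in state 1
])

def choose(utilities, prior):
    """Y's decision rule: pick the action with highest expected utility."""
    return int(np.argmax(utilities @ prior))

def perverse_prior(utilities, target_action, n_samples=10_000, seed=0):
    """Crude search for a prior under which Y's choice is X's target action.

    Y's preferences (the utility table) are never touched; only the
    belief distribution is picked adversarially.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        p = rng.dirichlet(np.ones(utilities.shape[1]))  # random candidate prior
        if choose(utilities, p) == target_action:
            return p
    return None  # no sampled prior makes the target action optimal

honest = np.array([0.5, 0.5])
print(choose(U, honest))                # 0: under an even prior, Y picks action 0 (ties break low)
p = perverse_prior(U, target_action=1)
print(p, choose(U, p))                  # a skewed prior under which Y now serves X's goal
```

The point of the toy example: with the preference table held fixed, Y’s choice is purely a function of the prior, so whoever sets the prior sets the behavior.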