Wow, OK. Is it possible to rig the decision theory to rule out acausal trade?
IIRC you can make it significantly more difficult with certain approaches; e.g., there’s an Oracle AI (OAI) approach that uses zero-knowledge proofs, and it seemed pretty sound on first inspection. As far as I know, though, the current best answer is no. You might want to try to answer the question yourself; IMO it’s fun to think about from a cryptographic perspective.
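For anyone who wants the flavor of the primitive being gestured at: below is a minimal Schnorr-style zero-knowledge proof of knowledge, made non-interactive with the Fiat–Shamir heuristic. To be clear, this is a toy sketch of what a ZK proof is, not the OAI construction mentioned above; the group parameters, and the `prove`/`verify`/`H` helpers, are my own illustration, and the parameters are deliberately tiny and totally insecure.

```python
import hashlib
import secrets

# Toy group parameters (INSECURE, illustration only): p and q are prime,
# q divides p - 1, and g generates the order-q subgroup of Z_p^*.
p, q, g = 607, 101, 64

def H(*vals):
    """Fiat-Shamir challenge: hash the transcript down to Z_q."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)   # one-time random nonce
    t = pow(g, r, p)           # commitment
    c = H(g, y, t)             # challenge derived by hashing (non-interactive)
    s = (r + c * x) % q        # response; the secret x is masked by r
    return y, (t, s)

def verify(y, proof):
    """Accept iff g^s == t * y^c (mod p); the verifier never learns x."""
    t, s = proof
    c = H(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q)       # the prover's secret
y, proof = prove(x)
assert verify(y, proof)                                # valid proof accepted
assert not verify(y, (proof[0], (proof[1] + 1) % q))   # forged proof rejected
```

The zero-knowledge property comes from the random nonce r masking the secret x inside the response s: the verifier learns that the prover could answer the hash-derived challenge, and nothing else.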
Probably (in practice; in theory it looks like a natural aspect of decision-making), though this is too poorly understood to say what specifically is necessary. I expect that if we could safely run experiments, it’d be relatively easy to find a well-behaved setup (in the sense of not generating predictions that are self-fulfilling to any significant extent; generating good/useful predictions is another matter), but that strategy isn’t helpful when a failed experiment destroys the world.
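To make “self-fulfilling prediction” concrete, here’s a toy model of my own (not something from the thread, and `world`, `base`, and `influence` are made-up names/parameters): the event’s probability depends partly on what the oracle announces, so a predictor that keeps re-calibrating against the world it influences converges to a fixed point of the feedback loop rather than to the no-announcement baseline.

```python
# Toy model: the true probability of some event, once the oracle publicly
# announces probability a, is  P(event | announcement a) = base + influence * a.
base, influence = 0.10, 0.80   # hypothetical parameters, |influence| < 1

def world(announcement):
    """Probability of the event after the announcement is made public."""
    return base + influence * announcement

# Naive oracle: announce, observe the induced probability, re-announce...
# This converges to the fixed point a = base / (1 - influence), a prediction
# that is accurate mostly *because* it was announced.
a = 0.0
for _ in range(50):
    a = world(a)
print(f"self-fulfilling fixed point: {a:.3f}")   # -> 0.500

# "Counterfactual"-style oracle: report the probability as if the announcement
# were never read (in this toy model, treated as announcing a = 0), so the
# report exerts no causal influence to amplify.
print(f"counterfactual prediction:   {world(0.0):.3f}")  # -> 0.100
```

The gap between 0.500 and 0.100 is the self-fulfilling component; a “well-behaved setup” in the sense above is one where that gap is negligible.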