You won’t get it in this echo chamber, and I hate to spend my own scarce karma to tell you this, but you are definitely on to something. Building FAIs so that they get right answers to problems which can never occur in the real world IS a problem. It seems clear enough that the harder problem than getting an FAI to one-box on Newcomb’s problem is getting an FAI to correctly determine when it has enough evidence to believe that a particular claimant to being Omega is legit. In the absence of a proper Omega detector (or Omega-conman rejector), the FAI will be scammed by entities, some of whom may be unfriendly.
I’m not sure you understand my position correctly, but I definitely don’t care about FAI or UFAI at this point. This is a mathematical problem with a correct solution that depends on whether you consider backwards causality or not; what implications this solution has is of no interest to me.
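For what it’s worth, the way the answer depends on backwards causality can be made concrete with a quick expected-value comparison. Here is a minimal sketch, assuming the standard $1,000,000 / $1,000 payoffs; the accuracy parameter p and the function names are my own illustration, not anything from the discussion above:

```python
# Newcomb expected-value comparison under two causal assumptions.
# Opaque box: $1,000,000 if Omega predicted one-boxing, else $0.
# Transparent box: always $1,000.

def ev_correlated(choice: str, p: float) -> float:
    """Expected payoff when Omega's prediction is treated as correlated
    with the actual choice (the 'backwards causality' reading).
    p = probability the prediction matches the choice."""
    if choice == "one-box":
        # With probability p, Omega predicted one-boxing and filled the box.
        return p * 1_000_000
    # With probability (1 - p), Omega wrongly predicted one-boxing.
    return (1 - p) * 1_000_000 + 1_000

def ev_fixed_contents(choice: str, q: float) -> float:
    """Expected payoff when the box contents are already fixed,
    independent of the choice (no backwards causality).
    q = prior probability the opaque box is full."""
    base = q * 1_000_000
    return base if choice == "one-box" else base + 1_000

if __name__ == "__main__":
    p = 0.99
    print(ev_correlated("one-box", p))   # 990000.0 -> one-boxing wins
    print(ev_correlated("two-box", p))   # 11000.0
    q = 0.5
    print(ev_fixed_contents("one-box", q))   # 500000.0
    print(ev_fixed_contents("two-box", q))   # 501000.0 -> two-boxing wins
```

Under the correlated reading, one-boxing comes out ahead for any p above roughly 0.5005; with the contents fixed independently of the choice, two-boxing dominates for every value of q. That is the sense in which the "correct solution" flips with the causal assumption.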