It looks like the issue here is that while Omega is ostensibly not taking into account your decision theory, it implicitly is by simulating an XDT agent. So a first patch would be to define simulations of a specific decision theory (as opposed to simulations of a given agent) as “unfair”.
On the other hand, we can’t necessarily know if a given computation is effectively equivalent to simulating a given decision theory. Even if the string “TDT” is never encoded anywhere in Omega’s super-neurons, it might still be simulating a TDT agent, for example.
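To make that worry concrete, here is a toy sketch in Python (the decision rule is a made-up stand-in, not a faithful rendering of TDT): two computations that agree on every input, only one of which mentions "TDT" anywhere in its source. Grepping Omega's "code" for the name of a decision theory tells you nothing about whether it is extensionally simulating an agent running it.

```python
import inspect

def simulate_tdt_agent(problem: dict) -> str:
    """Hypothetical stand-in for a TDT agent's choice on a Newcomb-like problem."""
    # Toy rule: one-box whenever the predictor's accuracy exceeds the break-even point.
    return "one-box" if problem["predictor_accuracy"] > 0.5005 else "two-box"

def omega_inner_computation(problem: dict) -> str:
    """No mention of any decision theory, yet extensionally identical to the function above."""
    threshold = 1001 / 2000  # same 0.5005 cutoff, written differently
    return ("two-box", "one-box")[problem["predictor_accuracy"] > threshold]

# Syntactic check: only the first function's source contains the string "TDT".
print("TDT" in inspect.getsource(simulate_tdt_agent))       # True
print("TDT" in inspect.getsource(omega_inner_computation))   # False

# Behavioural check: the two computations agree on these test problems,
# even though no finite battery of tests can prove equivalence in general.
tests = [{"predictor_accuracy": p} for p in (0.3, 0.5, 0.6, 0.99)]
print(all(simulate_tdt_agent(t) == omega_inner_computation(t) for t in tests))  # True
```

The general point is just that equivalence is a property of behaviour, not of source text, so a "no simulating specific decision theories" clause is hard to enforce by inspection.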
On the first hand again, for most problems it might be easy to figure out whether the setup implicitly favours one DT over another, and thus whether the problem is “fair”.