Simulated humans are not arbitrary Turing machines.
An arbitrary Turing machine can encode an arbitrary simulated human. If you want to cut the knot with a ‘human’ predicate, that predicate is just as undecidable.
Which also means that if you could prove that detecting human suffering is non-computable, you would essentially prove that FAI is impossible.
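The undecidability claim here is the standard reduction from the halting problem: if you had a decider for “does this machine ever suffer”, you could use it to decide halting. A minimal sketch, where programs are modeled as Python generators yielding events, and the “oracle” is a step-bounded stand-in (a true unbounded decider cannot exist, which is the point; all names here are hypothetical illustrations):

```python
def cause_suffering():
    yield "suffer"

def make_wrapper(program):
    # Reduction: the wrapper suffers if and only if `program` halts.
    def wrapper():
        yield from program()          # may loop forever
        yield from cause_suffering()  # reached only if program halts
    return wrapper

def bounded_suffering_oracle(program, budget=1000):
    # Stand-in for the impossible decider: only correct when the
    # answer is settled within `budget` steps.
    for i, event in enumerate(program()):
        if event == "suffer":
            return True
        if i >= budget:
            return False
    return False  # program finished without suffering

def halts(program, budget=1000):
    # If the suffering-decider existed, this would decide halting.
    return bounded_suffering_oracle(make_wrapper(program), budget)

def halting_prog():
    yield "step"

def looping_prog():
    while True:
        yield "step"
```

Since halting is undecidable, no such unbounded oracle can exist; the bounded version only illustrates the shape of the reduction.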
In that case we have more strategies available. For example: ‘prevent any currently existing human from suffering, or from creating another human who might then suffer’.
Analogous to pain asymbolia, it should be possible to modify the simulated human to report (and possibly block) potential “suffering” without feeling it.
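The asymbolia-style modification above can be pictured as an interception layer: a toy sketch, assuming (hypothetically) that the simulated agent emits a stream of events and that “suffering” is an identifiable event in that stream, both of which are exactly the assumptions in question:

```python
def asymbolic(program):
    # Wrap a simulated agent so that 'suffer' events are reported
    # (and blocked) rather than passed downstream unchanged.
    def wrapped():
        for event in program():
            if event == "suffer":
                # Report the would-be suffering instead of emitting it.
                yield "report:would-have-suffered"
            else:
                yield event
    return wrapped

def agent():
    yield "step"
    yield "suffer"
    yield "step"
```

The hard part, of course, is not the interception but deciding which internal states count as ‘suffer’ in the first place, which is the undecidability worry again.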
Is there a way to do this perfectly without running into undecidability? Even if you had the method, how would you know when to apply it...