> Simulated humans are not arbitrary Turing machines.
We still don’t have guaranteed decidability for properties of simulations. Restricting the class of programs from arbitrary Turing machines to “human simulations” does not automatically make their properties decidable.
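To make that concrete, here is a minimal sketch (every name in it is hypothetical, purely for illustration) of why a general “does this simulation ever enter a suffering state?” decider would also decide the halting problem:

```python
# Illustrative reduction: a general suffering-detector for simulated agents
# would decide the halting problem. All names here are hypothetical.

def halts_within(program, data, t):
    """Step-bounded check: does program(data) (a generator) finish within
    t steps? This bounded version is computable; 'halts at all' is not."""
    g = program(data)
    for _ in range(t):
        try:
            next(g)
        except StopIteration:
            return True
    return False

def make_agent(program, data):
    """Wrap an arbitrary program as a 'simulated agent' whose state at
    time t is SUFFERING iff the program has halted by step t."""
    def agent_state(t):
        return "SUFFERING" if halts_within(program, data, t) else "CONTENT"
    return agent_state

def collatz(n):
    """Stand-in for 'a program whose halting behavior is hard to predict'."""
    while n != 1:
        yield n
        n = 3 * n + 1 if n % 2 else n // 2

agent = make_agent(collatz, 27)
print(agent(10), agent(200))  # CONTENT SUFFERING (collatz(27) halts in 111 steps)
```

Any total procedure `ever_suffers(agent)` that answered correctly for all such agents would decide halting, which Turing ruled out. Whatever extra structure “human simulations” have, it has to be enough to break this embedding before decidability is even on the table.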
> To make any progress toward FAI, one has to figure out how to define human suffering,
There are so many problems in FAI that have nothing to do with defining human suffering or any other object-level moral terms: metaethics; goal-invariant self-modification; value learning and extrapolation; avoiding wireheading, self-deception, blackmail, and self-fulfilling prophecies; representing logical uncertainty correctly; finding a satisfactory notion of truth; and many more.
> Which also means that if you can prove that human suffering is non-computable, you basically prove that FAI is impossible.
This sounds like an appeal to consequences, but putting that aside: undecidability is a limitation of minds in general, not just of FAI, and yet, behold, quite productive non-oracular AI researchers exist. Did you know that we can compute uncomputable information? Don’t declare things impossible so quickly. We know that a friendlier-than-torturing-everyone AI is possible. No dream of FAI should fall short of that, even if FAI is “impossible”.
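As a concrete instance of computing uncomputable information (a minimal sketch, not a claim about how any FAI would work): Kolmogorov complexity K(x) is uncomputable, yet any off-the-shelf compressor yields a certified upper bound on it, and better compressors keep tightening the bound.

```python
# K(x), the length of the shortest program printing x, is uncomputable.
# But every successful compression certifies an upper bound on it (up to
# the O(1) size of the fixed decompressor), so we can compute true,
# ever-improving facts about an uncomputable quantity.
import bz2
import lzma
import zlib

def k_upper_bound(x: bytes) -> int:
    """A computable upper bound on K(x): the best compressed length found."""
    return min(len(zlib.compress(x, 9)),
               len(bz2.compress(x, 9)),
               len(lzma.compress(x)))

x = b"ab" * 10_000                 # highly regular, so K(x) is tiny
print(len(x), k_upper_bound(x))    # 20000 vs. a few dozen bytes
```

The same one-sided pattern, semi-decidable and approximable from one direction, is how a resource-bounded agent can act sensibly about formally undecidable questions without ever deciding them outright.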
> Real humans don’t take a lot of CPU cycles to identify and report suffering, so neither should simulated humans.
Even restricting simulated minds to things that look like present-day humans, what makes you think that humans have any general capacity to recognize their own suffering? Most mental activity is not consciously perceived.