Simulated humans are not arbitrary Turing machines.
To make any progress toward FAI, one has to figure out how to define human suffering, including simulated human suffering. It might not be easy, but I see it as an unavoidable step. (Which also means that if you can prove that human suffering is non-computable, you basically prove that FAI is impossible.)
Analogous to pain asymbolia, it should be possible to modify the simulated human to report (and possibly block) potential “suffering” without feeling it.
Real humans don’t take a lot of CPU cycles to identify and report suffering, so neither should simulated humans.
A non-suffering agent might not be as good as one which had loved and lost, but allowing such agents is certainly much more useful than the blanket prohibition against simulating humans proposed in the OP.
Simulated humans are not arbitrary Turing machines.
Arbitrary Turing machines are arbitrary simulated humans. If you want to cut the knot with a ‘human’ predicate, that’s just as undecidable.
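For concreteness, here is a minimal sketch of the standard halting-problem reduction behind that undecidability claim. Every name in it (is_simulated_human, run, KNOWN_HUMAN_SIM) is a hypothetical placeholder rather than a real API, and the argument assumes a program that does nothing does not count as a simulated human.

```python
# Minimal sketch: if a total, always-correct decider for the 'human'
# predicate existed, it would let us decide the halting problem.
# All names below are invented for illustration.

def is_simulated_human(source: str) -> bool:
    """Hypothetical total decider for the 'human' predicate."""
    raise NotImplementedError("cannot exist as an always-correct decider")

KNOWN_HUMAN_SIM = "..."  # placeholder: source of some program everyone agrees simulates a human

def would_halt(program: str, program_input: str) -> bool:
    """If is_simulated_human existed, this would solve the halting problem."""
    # Build a program that first runs `program` on `program_input` (possibly
    # forever) and only then behaves exactly like the known human simulation.
    combined = (
        f"run({program!r}, {program_input!r})\n"     # may never return
        f"run({KNOWN_HUMAN_SIM!r}, actual_input)\n"  # reached only if the first call halts
    )
    # `combined` simulates a human iff `program` halts on `program_input`,
    # so the 'human' predicate would answer the halting question -- contradiction.
    return is_simulated_human(combined)
```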
Which also means that if you can prove that human suffering is non-computable, you basically prove that FAI is impossible.
In that case we have more strategies available. For example, ‘prevent any current human from suffering, or from creating another human which might then suffer’.
Analogous to pain asymbolia, it should be possible to modify the simulated human to report (and possibly block) potential “suffering” without feeling it.
Is there a way to do this perfectly without running into undecidability? Even if you had the method, how would you know when to apply it...
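For concreteness, here is a toy sketch (all names invented) of the kind of architecture being proposed: a wrapper that reports candidate ‘suffering’ signals and withholds them from the rest of the simulated mind. As the comment above notes, the hard and possibly undecidable part is the classification step itself.

```python
# Toy sketch of an asymbolia-like modification, assuming the simulation
# exposes candidate "suffering" signals on a distinct channel -- which is
# exactly the step whose decidability is in question.  All names invented.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AsymboliaWrapper:
    """Reports candidate suffering signals without routing them onward."""
    reported: List[str] = field(default_factory=list)

    def filter_signals(self, signals: List[Dict[str, str]]) -> List[Dict[str, str]]:
        passed = []
        for signal in signals:
            if signal.get("kind") == "suffering":       # the hard, possibly undecidable classification
                self.reported.append(signal["detail"])  # report it...
            else:
                passed.append(signal)                   # ...but deliver everything else untouched
        return passed

# Usage: the wrapper sits between the simulated body/world and the simulated
# mind, so flagged signals reach only the log, never the mind itself.
wrapper = AsymboliaWrapper()
delivered = wrapper.filter_signals([
    {"kind": "perception", "detail": "red light"},
    {"kind": "suffering", "detail": "nociceptive burst"},
])
# delivered contains only the perception event; wrapper.reported holds the blocked one.
```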
Simulated humans are not arbitrary Turing machines.
We still don’t have guaranteed decidability for properties of simulations.
To make any progress toward FAI, one has to figure out how to define human suffering,
There are so many problems in FAI that have nothing to do with defining human suffering or any other object-level moral terms: metaethics, goal-invariant self-modification, value learning and extrapolation, avoiding wireheading, self-deception, blackmail, self-fulfilling prophecies, representing logical uncertainty correctly, finding a satisfactory notion of truth, and many more.
Which also means that if you can prove that human suffering is non-computable, you basically prove that FAI is impossible
This sounds like an appeal to consequences, but putting that aside: undecidability is a limitation of minds in general, not just FAI, and yet, behold, quite productive, non-oracular AI researchers exist. Did you know that we can compute uncomputable information? Don’t declare things impossible so quickly. We know that friendlier-than-torturing-everyone AI is possible. No dream of FAI should fall short of that, even if FAI is “impossible”.
Real humans don’t take a lot of CPU cycles to identify and report suffering, so neither should simulated humans.
Even restricting simulated minds to things that look like present-day humans, what makes you think that humans have any general capacity to recognize their own suffering? Most mental activity is not consciously perceived.
I can’t help but think of TRON 2 when considering the ethics of creating simulated humans that are functionally identical to biological humans. For those unfamiliar with the film, a world composed of data is, by itself, sufficient to enable the spontaneous generation of human-like entities. The creator of the data world finds the entities too imperfect, and creates a data-world version of himself tasked with making the data world perfect according to an arcane definition of ‘perfection’ that the creator himself has not fully formed. The data-world version of the creator then begins a mass genocide of the entities, creating human-like programs that are merely perfect executions of crafted code to replace them; if the programs exhibit individuality, they are deleted. The movie asserts this genocide is wrong.
If an AI is powerful enough to mass-generate simulations that are functionally identical to a biological human, such that they are capable of original ideas, compassion, and suffering; if it can create simulated humans unique enough that their thoughts and actions over thousands of iterations of the same event are not predictable with 100% accuracy; then would it not be generating Homo sapiens sapiens en masse?
If indeed not, then I fail to see why mass creation and subsequent genocide over many iterations is the sort of behaviour that mitigators of computational hazards would wish to encourage.
Off topic, but the TRON sequel has at least two distinct friendly AI failures:
Flynn creates CLU and gives him simple-sounding goals, which ends badly.
Flynn’s original creation of the grid gives rise to unexpected and uncontrolled intelligence of at least human level.