I can’t help but think of TRON 2 (TRON: Legacy) when considering the ethics of creating simulated humans that are functionally identical to biological humans. For those unfamiliar with the film, a world composed of data turns out to be sufficient, on its own, for the spontaneous generation of human-like entities. The creator of the data world finds these entities too imperfect, and creates a data-world version of himself tasked with making the data world perfect, according to an arcane definition of ‘perfection’ that the creator himself has never fully articulated. The data-world version of the creator then begins a genocide of the entities, replacing them with human-like programs that are merely flawless executions of crafted code; any program that exhibits individuality is deleted. The film presents this genocide as wrong.
If an AI is powerful enough to mass-generate simulations that are functionally identical to biological humans, such that they are capable of original ideas, compassion, and suffering; if an AI can create simulated humans unique enough that their thoughts and actions over thousands of iterations of the same event cannot be predicted with 100% accuracy; then would it not be generating Homo sapiens sapiens en masse?
If indeed it is not, I still fail to see why mass creation and subsequent genocide over many iterations is the sort of behaviour that mitigators of computational hazards wish to encourage.
Off topic, but the TRON sequel has at least two distinct friendly AI failures.
Flynn creates CLU and gives him simple-sounding goals, which ends badly.
Flynn’s original creation of the grid gives rise to unexpected and uncontrolled intelligence of at least human level.