The ideal FAI wouldn’t care about its personal identity over time;
Why not? If it’s trying to maximize human values, and humans consider death to have negative value and consider the FAI to be alive, then the FAI would try to die as little as possible, presumably by cloning itself less, since every clone that eventually gets shut down would count as a death. It might clone itself a bunch early on so that it can prevent other people from dying and otherwise do enough good to make the sacrifice worth it, but it would still care about its personal identity over time.
You’re equivocating. Humans consider the death of humans to have negative value. If the humans that create the FAI don’t assign negative value to AI death, then the FAI won’t either.
It’s not clear that humans wouldn’t assign negative value to AI death. The FAI is certainly intelligent. It’s not entirely clear what other requirements there are for something to count as alive, and it’s not clear which of those an AI would fulfill.