CEV is supposed to aim for the optimal future, not a satisficing future. My guess is that there is only one possible optimal future for any given individual, unless there is a theoretical upper limit to individual utility and the FAI has resources vast enough to reach it, in which case many different futures could tie at that maximum.
Also, if the terminal goal for both humans and dogs is simply to experience maximum subjective well-being for as long as possible, then their personal CEVs, at least, will be identical. However, since individuals are selfish, there's no reason to expect that the ideal future for one individual will, if enacted by a FAI, lead to ideal futures for the other individuals who are not being extrapolated.