“by the time the AI is smart enough to do that, it will be smart enough not to”
I still don’t quite grasp why this isn’t an adequate answer. If an FAI shares our CEV, it won’t want to simulate zillions of conscious people in order to put them through great torture, and it will figure out how to avoid doing so. Is the worry simply that it may take the simulated torture of zillions before the FAI figures this out? I don’t see any reason to think that we will find this problem much easier to solve than a massively powerful AI would.
I’m also not wholly convinced that the only ethical way to treat simulacra is never to create them, but I need to think about that one further.