So there are two different facets of the hypothetical ancestor simulation response I came up with:
A) deliberately not being a paradise
B) not connecting it to some broader network of simulation paradises.
I can totally buy coming to believe the first part is pointlessly cruel. The second part feels more like it’s… actually enforcing boundaries for the safety of others.
The ‘infinite energy’ clause is a bit weird here. If ‘you’ have total control over not just infinite energy but also the entire posthuman world, then yeah, you can do things like let Hitler wander around making new allies and… somehow intervene if this starts to go awry. But I have an easier time imagining being confident in ‘not letting Hitler out of the box until he’s trustworthy’ than in the latter. (I.e. there can be infinite energy ‘around’ without it actually being under uniform control.)
Also, it’s not obvious to me which is more cruel. (I think it depends on Hitler’s own values.)
Also, while I said ‘infinite energy’ in the hypothetical, I do think in most optimistic worlds we still end up with only ‘very large finite energy’, and I don’t even know that I’d get around to running any kind of ancestor sim for him at all, let alone fully optimizing it for him. I think I love Hitler, but I also think I love everyone else, and it just seems reasonable to prioritize both the safety and the well-being of people who didn’t go out of their way to create horrific death camps and manipulate their way into power.
You raise two very valid concerns: that Hitler might hurt others if you allow him to interact with them, and that he might find a way to escape the box.
Even if Hitler were willing to reflect on his actions and change, his presence in the network (B) would likely make other people unhappy.
So while I think (A) is ethically mandatory if you can contain him, (B) comes with a lot of complex problems that might not be solvable.