In this blog context, I suppose one could argue, it is a suggestion that it would not necessarily be a bad thing for a superintelligence to simulate humans/sentient beings in unpleasant or unhappy situations, provided the dis-utilities of the simulated beings are very much smaller than the utilities the superintelligence gains from the results of the simulation.
Shard: it’s a theodicy; specifically, I think it’s a divine plan theodicy ( https://secure.wikimedia.org/wikipedia/en/wiki/Theodicy#God.27s_divine_plan_is_good_.E2.80.94_no_theodicy_is_needed ). (This isn’t surprising, given Wolfe’s intellectual interests.)