I think the answer to this question turns on the Kolmogorov complexity of various things, and on the utility function as well. What is the Kolmogorov complexity of 3^^^3 simulated people? What is the complexity of the program that generates the simulated people? What is the complexity of the threat itself: that this particular man is capable of killing each of these 3^^^3 people? What prior probability do we assign to “this man is capable of simulating 3^^^3 people, is capable of killing each of them, and is willing to do so for $5”?
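To make the prior-probability question concrete: under a Solomonoff-style prior, a hypothesis h is weighted roughly by its description length. As a sketch (glossing over the choice of universal machine and constant factors):

$$P(h) \approx 2^{-K(h)}$$

The awkward part is that 3^^^3 itself has tiny Kolmogorov complexity, since up-arrow notation is a very short program, so any complexity penalty on the mugger’s claim has to come from the claimed power to simulate and kill that many people, not from the size of the number.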
Similarly, the utility function for this calculation needs to be defined. Human utility typically shows decreasing marginal returns, which is why we are usually described as having scope insensitivity. Likewise, we attach disproportionately lower utility to outcomes with very small chances. We’d probably also assign lower utility to these 3^^^3 simulated people on account of their being simulated, being generated by a program with very low Kolmogorov complexity (i.e., lacking individuality and “realness”), and existing at the whim of a seemingly cruel and crazy person (meaning they’re probably doomed anyway).
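One way to formalize decreasing marginal returns strongly enough to matter here is a bounded utility function. This is only an illustrative sketch; the functional form and the constants U_max and c are my assumptions, not part of the original problem:

$$U(n) = U_{\max}\cdot\frac{n}{n+c}, \qquad U(n) < U_{\max} \text{ for all } n$$

With any bound of this kind, the expected benefit of paying is at most p · U_max, where p is the prior probability that the threat is genuine, so a sufficiently small p makes keeping the $5 the better bet no matter how many carets the mugger adds.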
There are a few other things to consider, such as the probability that one of the 3^^^3 simulated people will be able to rescue the rest, and the probability that someone else will come along, threaten to kill 4^^^^4 people, and demand enough money that I can no longer afford to part with $5 for a mere 3^^^3 people.
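The 4^^^^4 worry can be made precise under the complexity-prior framing above: both numbers have short descriptions, so their priors are comparable, while the utilities (if utility is roughly linear in people saved, which is itself an assumption) differ unimaginably:

$$K(4\uparrow\uparrow\uparrow\uparrow 4) \approx K(3\uparrow\uparrow\uparrow 3) \;\Rightarrow\; p_{4} \approx p_{3}, \qquad \text{while } 4\uparrow\uparrow\uparrow\uparrow 4 \gg 3\uparrow\uparrow\uparrow 3$$

So whoever names the bigger number dominates the expected-value calculation, which suggests the unbounded-utility version of the problem has no stable answer.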
Overall, I think the problem is poorly defined for the above reasons, but perhaps my main objection is to the attempt to use a formalism intended to reduce unnecessary complexity in our hypotheses as a guide to whether something is true.