Deutsch poses the dilemma as though randomising the memory would necessarily be either evil or trivial; he omits the possibility that such an action might create a net positive utility for humans. That seems worth pointing out as a potential source of bias.
Anyhow, I suppose we have to take into account what is in the computer’s memory in the first place. If it is running one or more sentient simulations of a human being tortured, then it is good to randomise it. If it is running a sentient simulation of a happy human, then it would be evil to randomise it. If the memory is already random, with equal measures of 0s and 1s, then randomising it is neutral (ignoring the cost of performing the randomisation itself).
Given these results, it seems that the implicit assumption in this dilemma is that the computer’s memory is initially set to all 0s or all 1s. It could be that the memory was already random with a different proportion of 0s and 1s, but as far as I can see that has no substantive bearing on the fundamental problem (are Boltzmann brain simulations more likely with a 55/45 ratio of 1s and 0s rather than 50/50?) – what the dilemma is really probing is whether it is good to increase the probability of (simulated) Boltzmann brains coming into existence.
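To make the parenthetical question concrete: the probability that n independently random bits spell out one specific “brain-encoding” string depends on the bias only through the string’s own composition of 1s and 0s. Here is a minimal sketch in Python; the million-bit length and the assumption that the target string is half 1s are purely illustrative choices of mine, not anything specified in the dilemma.

```python
import math

def log2_prob_of_string(n_bits: int, ones_fraction: float, p_one: float) -> float:
    """Log2-probability of one specific n_bits-long string whose fraction of
    1-bits is ones_fraction, when each bit is independently 1 with
    probability p_one."""
    k = ones_fraction * n_bits  # number of 1-bits in the target string
    return k * math.log2(p_one) + (n_bits - k) * math.log2(1.0 - p_one)

# One specific million-bit string with an even mix of 1s and 0s:
n = 1_000_000
for p in (0.50, 0.55):
    print(f"p(1) = {p:.2f}: log2 P = {log2_prob_of_string(n, 0.5, p):,.0f}")
# p(1) = 0.50: log2 P = -1,000,000
# p(1) = 0.55: log2 P = -1,007,250
```

Both probabilities are of order 2^(−10^6); the 55/45 bias shifts the exponent by well under 1%, which is why the initial ratio seems to have no substantive bearing on the dilemma.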
I would answer: humans primarily attach value to the experiences of qualia-rich – or sentient, as some would have it – beings (including themselves). We understand very little of the “rules of qualia” – what the maximum pain and maximum pleasure are, how qualia come about, how they relate to everything else in the universe – but we have to form our best judgement given what little we know.
Based on introspection and on what others have said about their qualia (or subjective experiences), negative-valued qualia (i.e. various kinds of physical and emotional pain) generally feel more intense than positive-valued qualia, and negative qualia appear to be more motivating to humans than positive qualia. Humans experience life-long trauma after intense negative qualia, and the memories can be very painful; positive qualia are comparatively feeble. In the immortal words of Rush, “They shout about love but when push comes to shove, They live for the things they’re afraid of”.
Therefore I would assign a somewhat greater moral weight to the negative qualia experienced by (simulated) Boltzmann brains, assuming that negative and positive qualia are otherwise equally likely to be generated in random computations (I don’t see any basis for assuming otherwise).
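The shape of this argument can be written down as a one-line expected-utility calculation. A minimal sketch in Python, where every number is an illustrative assumption of mine rather than anything claimed above:

```python
# Equal chance that random bits generate a pleasant or a painful Boltzmann
# brain (an assumption, per the text above), with pain weighted more heavily.
p_pleasant = p_painful = 1e-30  # illustrative, arbitrarily tiny probabilities
u_pleasant = +1.0               # moral value of a pleasant simulated experience
u_painful = -1.5                # painful experience given greater moral weight

expected_utility = p_pleasant * u_pleasant + p_painful * u_painful
print(expected_utility)  # negative: randomising is net bad in expectation
```

With symmetric probabilities, any weighting of pain above pleasure makes the expected utility of randomisation negative, however tiny the probabilities are.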
Of course, this is a simplified view of human values; most of us don’t consider orgasmium, a pleasure centre containing an extremely large integer, to be the most desirable form of being to bring into existence. But if we try to include that moral complexity in our decision, it would seem to reduce the expected utility of randomising the computer memory still further. The vast majority of sentient Boltzmann brains, regardless of whether they are experiencing anything we would recognise as “pleasure” or “pain”, are chaotic beings with chaotic experiences. The vast majority are also primitive – just complex enough to possess qualia. Anything we would consider to be more or less a simulated human is likely to be torn apart in a fraction of a second by its hostile environment, and (setting aside pleasure and pain) even if he miraculously persists, he will almost certainly live an aesthetically displeasing life from our perspective.
I conclude that, ceteris paribus, the computer memory should not be randomised.
I agree with those who have pointed out that, in reality, bounded rationality and the tangible expected costs and benefits of randomising a computer memory (e.g. time wasted, electricity used) render this problem insignificant – although the unresolved problem of tiny probabilities of vast utilities is not to be taken lightly. Nonetheless, I would point out that this is a thought experiment, intended to illuminate a point of philosophical interest rather than to pose a problem of direct practical importance.