If I understand you correctly, your scenario is different from the one I had in mind in that I’d have both computers instantiated at the same time (I’ve clarified that in the post), and then consider the relative probability of experiencing what the 1 kg computer experiences vs. experiencing what the 2 kg computer experiences. It seems like one could adapt your scenario by creating a 1 kg and a 2 kg computer at the same time, offering both of them a choice between A and B, and then generating 5 paperclips if the 1 kg computer chooses A and (additionally) 4 paperclips if the 2 kg computer chooses B. Then, the right choice for both systems (which still can’t distinguish themselves from each other) would still be A, but I don’t see how this is related to the relative weight of the two maximizers’ experiences; after all, how much value to give each of the computers’ votes is decided by the operators of the experiment, not the computers. On the contrary, if the maximizer cares about the experienced number of paperclips, and each of the maximizers only learns about the paperclips generated by its own choice regarding the given options, I’d still say that the maximizer should choose B.
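For concreteness, the payoff structure described above can be sketched in code (a toy model of my own; the function name and the assumption that identical programs make identical choices are mine, not part of the original setup):

```python
# Toy sketch of the adapted scenario: 5 paperclips if the 1 kg computer
# chooses A, plus 4 more if the 2 kg computer chooses B.

def total_paperclips(choice_1kg: str, choice_2kg: str) -> int:
    clips = 0
    if choice_1kg == "A":
        clips += 5  # reward tied to the 1 kg computer's choice
    if choice_2kg == "B":
        clips += 4  # additional reward tied to the 2 kg computer's choice
    return clips

# The two computers can't distinguish themselves, so they run the same
# program and make the same choice:
print(total_paperclips("A", "A"))  # -> 5
print(total_paperclips("B", "B"))  # -> 4
```

Under this payoff table, jointly choosing A yields 5 paperclips versus 4 for jointly choosing B, which is why A is the right choice for both systems in the total-paperclip framing.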
On the contrary, if the maximizer cares about the experienced number of paperclips, and each of the maximizers only learns about the paperclips generated by its own choice regarding the given options
Right, that’s why I split them up into different worlds, so that they don’t get any utility from paperclips created by the other paperclip maximizer.
how much value to give each of the computers’ votes is decided by the operators of the experiment, not the computers

Not true—see the Sleeping Beauty problem.
I still think that the scenario you describe is not obviously, and not according to all philosophical intuitions, the same as one where both minds exist in parallel.
Also, the expected number of paperclips (what you describe) is not equal to the expected experienced number of paperclips (which would be the relevant weighting for my post). After all, if A involves killing the maximizer before generating any paperclips, the paperclip-maximizer would choose A, while the experienced-paperclip-maximizer would choose B. When choosing A, the probability of experiencing paperclips would obviously differ from the probability of paperclips existing.
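The distinction can be made concrete with hypothetical numbers (the function names and the specific payoffs are my own illustration, not from the original discussion):

```python
# Option A creates 5 paperclips but kills the maximizer first, so it
# never observes them; option B creates 4 that the maximizer observes.

def expected_paperclips(option: str) -> int:
    return 5 if option == "A" else 4

def expected_experienced_paperclips(option: str) -> int:
    alive_to_observe = (option != "A")  # A kills the maximizer first
    return expected_paperclips(option) if alive_to_observe else 0

print(max(["A", "B"], key=expected_paperclips))              # -> A
print(max(["A", "B"], key=expected_experienced_paperclips))  # -> B
```

The two objectives recommend different options, which is the point: maximizing paperclips that exist and maximizing paperclips that are experienced come apart as soon as an option affects whether the maximizer survives to observe the outcome.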
Also, the expected number of paperclips (what you describe) is not equal to the expected experienced number of paperclips (which would be the relevant weighting for my post).
If you make robots that maximize your proposed “subjective experience” (proportional to mass) and I make robots that maximize some totally different “subjective experience” (how about proportional to mass squared!), all of those robots will act exactly as one would expect—the linear-experience maximizers would maximize linear experience, the squared-experience maximizers would maximize squared experience.
Because anything can be put into a utility function, it’s very hard to talk about subjective experience by referencing utility functions. We want to reduce “subjective experience” to some kind of behavior that we don’t have to put into the utility function by hand.
In the Sleeping Beauty problem, we can start with an agent that selfishly values some payoff (say, candy bars), with no specific weighting on the number of copies, and no explicit terms for “subjective experience.” But then we put it in an unusual situation, and it turns out that the optimal betting strategy is the one where it gives more weight to worlds where there are more copies of it. That kind of behavior is what indicates to me that there’s something going on with subjective experience.
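A minimal betting sketch can illustrate this (my own toy model, with assumed payoffs: heads means one awakening, tails means two, and on each awakening the agent may pay 0.6 candy bars for a bet returning 1 candy bar if the coin landed tails):

```python
import random

# Per-awakening bet: pay 0.6 candy bars, receive 1 if the coin was tails.
# Heads -> 1 awakening, tails -> 2 awakenings (two copies bet).

def total_candy(take_bet: bool, trials: int = 100_000) -> float:
    random.seed(0)
    candy = 0.0
    for _ in range(trials):
        tails = random.random() < 0.5
        awakenings = 2 if tails else 1
        if take_bet:
            candy += awakenings * ((1.0 if tails else 0.0) - 0.6)
    return candy

# Per flip: 0.5 * 2 * 0.4 + 0.5 * 1 * (-0.6) = +0.1 candy bars, so the
# bet is profitable even though tails only has probability 1/2 -- the
# tails world is weighted double because two copies are awake to bet.
print(total_candy(True) > total_candy(False))  # -> True
```

The agent's utility function mentions only candy bars, yet the winning strategy effectively weights the tails world by its two copies, which is the behavioral signature described above.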