I still think that the scenario you describe is not obviously, or according to all philosophical intuitions, the same as one where both minds exist in parallel.
Also, the expected number of paperclips (what you describe) is not equal to the expected experienced number of paperclips (which would be the relevant weighting for my post). After all, if A involves killing the maximizer before it generates any paperclips, the paperclip-maximizer would choose A, while the experienced-paperclip-maximizer would choose B. When choosing A, the probability of experiencing paperclips is obviously different from the probability of paperclips existing.
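To make the divergence concrete, here is a minimal sketch with made-up numbers (the survival probabilities and paperclip counts for A and B are hypothetical, chosen only so the two weightings disagree):

```python
# Hypothetical options: A kills the maximizer before it experiences anything,
# but the plan still yields more paperclips overall; B lets it survive and
# produces fewer paperclips.
options = {
    "A": {"p_survive": 0.0, "paperclips": 100},
    "B": {"p_survive": 1.0, "paperclips": 60},
}

for name, o in options.items():
    expected_paperclips = o["paperclips"]                    # paperclip-maximizer's score
    expected_experienced = o["p_survive"] * o["paperclips"]  # experienced-paperclip-maximizer's score
    print(name, expected_paperclips, expected_experienced)
```

Under these numbers the paperclip-maximizer prefers A (100 > 60), while the experienced-paperclip-maximizer prefers B (60 > 0), which is exactly the split described above.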
If you make robots that maximize your proposed “subjective experience” (proportional to mass) and I make robots that maximize some totally different “subjective experience” (how about proportional to mass squared!), all of those robots will act exactly as one would expect: the linear-experience maximizers would maximize linear experience, and the squared-experience maximizers would maximize squared experience.
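As a toy illustration (the option names and masses below are invented), each robot just maximizes whatever “experience” measure was written into it, so watching its choices only tells you which function was installed:

```python
# Hypothetical choice: build one large body or two small ones.
options = {
    "one_large_body": [2.5],        # body masses (made-up units)
    "two_small_bodies": [1.5, 1.5],
}

# Experience proportional to mass vs. proportional to mass squared.
linear_choice = max(options, key=lambda o: sum(m for m in options[o]))
squared_choice = max(options, key=lambda o: sum(m ** 2 for m in options[o]))

print(linear_choice)   # "two_small_bodies" (3.0 > 2.5)
print(squared_choice)  # "one_large_body"   (6.25 > 4.5)
```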
Because anything can be put into a utility function, it’s very hard to talk about subjective experience by referencing utility functions. We want to reduce “subjective experience” to some kind of behavior that we don’t have to put into the utility function by hand.
In the Sleeping Beauty problem, we can start with an agent that selfishly values some payoff (say, candy bars), with no specific weighting on the number of copies, and no explicit terms for “subjective experience.” But then we put it in an unusual situation, and it turns out that the optimal betting strategy is the one where it gives more weight to worlds where there are more copies of it. That kind of behavior is what indicates to me that there’s something going on with subjective experience.
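For concreteness, a minimal sketch of that betting calculation (assuming the standard setup: a fair coin, one awakening on heads, two on tails, and an even-odds bet of one candy bar at each awakening):

```python
# Expected candy bars for the policy "always bet on `bet_on` at every awakening".
p_heads = 0.5
awakenings = {"heads": 1, "tails": 2}  # number of copies/awakenings per outcome

def expected_candy(bet_on):
    total = 0.0
    for outcome, n in awakenings.items():
        p = p_heads if outcome == "heads" else 1 - p_heads
        payoff_per_awakening = 1 if outcome == bet_on else -1
        total += p * n * payoff_per_awakening  # each awakening wins or loses one bar
    return total

print(expected_candy("heads"))  # -0.5
print(expected_candy("tails"))  # +0.5
```

No term for experience or copy-counting was written into the utility function; the per-awakening payoff alone makes the tails world count twice, which is the 2:1 weighting that shows up in the agent’s optimal bets.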