Are you convinced yet there is something wrong with this whole business of subjective anticipation?
I’m not sure what this “whole business of … anticipation” has to do with subjective experience.
Suppose that, a la Jaynes, we programmed a robot with the rules of probability, the flexibility of recognizing various predicates about reality, and the means to apply the rules of probability when choosing between courses of action to maximize a utility function. Let’s assume this utility function is implemented as an internal register H which is incremented or decremented according to whether the various predicates are satisfied.
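To make the kind of machine I have in mind concrete, here is a minimal sketch in Python; the names and structure are mine, purely for illustration, not a specification of any particular design:

```python
# Minimal sketch of the robot described above; all names are illustrative.
from typing import Dict, List, Tuple

World = Dict[str, bool]          # which of the robot's predicates hold
Outcome = Tuple[float, World]    # (probability, resulting world)

class Robot:
    def __init__(self, h: float, rewards: Dict[str, float]):
        self.h = h                # the internal register H
        self.rewards = rewards    # predicate name -> increment/decrement to H

    def h_after(self, world: World) -> float:
        """Value of H once we know which predicates are satisfied."""
        return self.h + sum(delta for pred, delta in self.rewards.items()
                            if world.get(pred, False))

    def expected_h(self, outcomes: List[Outcome]) -> float:
        """Expected value of H over a probability distribution on worlds."""
        return sum(p * self.h_after(world) for p, world in outcomes)

    def choose(self, actions: Dict[str, List[Outcome]]) -> str:
        """Pick the course of action that maximizes the expected value of H."""
        return max(actions, key=lambda a: self.expected_h(actions[a]))
```

Note that nothing in this sketch says anything about copying yet; “the value of H” simply means the one register this one machine carries around.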
This robot could conceivably be equipped with predicates that allow for the contingency of having copies made of itself, copies which we’ll assume to include complete records of the robot’s internal state up to the moment of copying, including register H.
The question then becomes one of specifying what, precisely, is meant by maximizing the expected value of H, given the possibility of copying.
Suppose we want to know what the robot would decide given a copy-and-torture scenario as suggested by Wei Dai. The question of “what the robot would do” surely does not depend on whether the robot thinks of itself as rational, whether it can be said to have subjective anticipation, whether time consistency is important to it, and so on. These considerations are irrelevant to predicting the robot’s behaviour.
The question of “what the robot would do” depends solely on what it formally means to have the robot maximize the expected value of H, since “the value of H” becomes an ambiguous specification from the moment we allow for copying.
(On the other hand, was that specification ever unambiguous to begin with?)
If the robot is programmed to construe “the value of H” as meaning what we might call the “indexical value” of H, that is, the value-held-by-the-present-copy, then it (or rather its copy A) would presumably act in the torture scenario as Wei Dai claims most humans would act, and refuse to press the button. But since the “indexical value of H” is ill-defined, from the pre-copying robot’s perspective, with respect to the situation after copying, the robot would err when making this decision prior to copying, and would therefore predictably exhibit what we’d call a time inconsistency.
If the robot is programmed to construe “the value of H” as the sum (or the average) of the indexical values of H for all copies of its state which are descendants of the state which is making the decision, then—regardless of when it makes the choice and regardless of which copy is A or B—it would decide as I have claimed a one-boxer would decide. (Though, working out these implications of the “choice machine” frame, I’m less sure than before of the relation between Wei Dai’s scenario and Newcomb’s problem.)
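To make the two construals concrete, here is a sketch of the two scoring rules. The payoff numbers are made up for illustration and are not meant to reproduce Wei Dai’s scenario exactly:

```python
from statistics import mean
from typing import Dict, List

# Each action leads to a set of descendant copies, each holding its own
# (indexical) value of H. Numbers below are purely illustrative.
def indexical_score(h_by_copy: List[float], my_index: int) -> float:
    """'The value of H' = the value held by the present copy only."""
    return h_by_copy[my_index]

def descendant_score(h_by_copy: List[float], average: bool = False) -> float:
    """'The value of H' = the sum (or average) over all descendant copies."""
    return mean(h_by_copy) if average else sum(h_by_copy)

# Hypothetical post-copy values of H for copies A and B under each action.
outcomes: Dict[str, List[float]] = {
    "press":  [-10.0, 100.0],   # copy A bears a cost, copy B gains a lot
    "refuse": [0.0, 0.0],
}

# With these made-up numbers the two construals pick different actions; and
# only descendant_score, which does not depend on which copy is asking, gives
# the same answer whether it is evaluated before or after the copying.
print(max(outcomes, key=lambda a: indexical_score(outcomes[a], my_index=0)))  # refuse
print(max(outcomes, key=lambda a: descendant_score(outcomes[a])))             # press
```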
While writing the above, I realized—this is what I’m driving at with the parenthetical comment about ambiguity—that even in a world without copying you get to make plenty of non-trivial decisions about what it means, formally, to maximize the value of H. In particular, you could be faced with decisions you must make now but which will have an effect in the future, and whose effect on H may depend on the value of H at that time. (There are plenty of real-life examples, which I’ll leave as an exercise for the reader.) Just how you program the robot to deal with those seems (as far as I can tell) underspecified by the laws of probability alone.
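A toy illustration of the kind of decision I mean (the numbers are made up, and this is not meant to stand in for the real-life examples):

```python
# Toy illustration: two commitments the robot could make now, whose effect on H
# at a later time depends on what H will be at that time.

def fixed_bonus(h_then: float) -> float:
    return h_then + 5.0       # adds a constant, regardless of H at that time

def proportional_bonus(h_then: float) -> float:
    return h_then * 1.1       # adds 10% of whatever H happens to be then

for h_then in (10.0, 100.0):
    print(h_then, fixed_bonus(h_then), proportional_bonus(h_then))
# At H=10 the fixed bonus is worth more; at H=100 the proportional one is.
# Ranking the two commitments now therefore requires some model of H's future
# trajectory, and that is a programming choice, not a theorem of probability.
```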
A shorter way of saying all the above is that if we taboo “anticipation” when predicting what a certain class of agent will do, we don’t necessarily find anything particularly strange about a predicate saying “the present state of the robot is copy N of M”. What we find is that we might want to program the robot differently if we want it to deal in a certain way with the contingency of copying; that is unsurprising. We also find that subjective experience needn’t enter the picture at all.