“How do you propose to reliably put an agent into the described situation?”—Why do we have to be able to reliably put an agent in that situation? Isn’t it enough that an agent may end up in that situation?
But in terms of how the agent can know the predictor is accurate, perhaps the agent gets to examine the predictor’s source code after it has run, and the predictor is implemented in hardware rather than software, so that the agent knows the code wasn’t modified?
But I don’t know why you’re asking, so I don’t know whether this answers the relevant difficulty.
(Also, just wanted to check whether you’ve read the formal problem description in “Logical Counterfactuals and the Co-operation Game”)
For example, we can describe how to put an agent into the counterfactual mugging scenario as normally described (where Omega asks for $10 and gives nothing in return), but critically for our analysis, one can only reliably do so by creating a significant chance that the agent instead ends up in the reward branch (where Omega gives the agent a large sum if and only if Omega would have received the asked-for $10 in the asking branch). If this were not the case, the argument for giving the $10 would seem weaker.
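To make that concrete, here’s a toy expected-value calculation. It’s only a sketch: the $10,000 reward and the branch probabilities are illustrative stand-ins (the scenario above just says “a large sum”).

```python
# Toy numbers for the counterfactual mugging described above. The $10,000 reward and
# the branch probability are illustrative assumptions, not part of the scenario text.

def policy_ev(pays_when_asked: bool, p_asking: float,
              reward: float = 10_000, cost: float = 10) -> float:
    """Expected value, before the branch is settled, of the policy 'pay when asked'."""
    asking_branch = -cost if pays_when_asked else 0.0
    reward_branch = reward if pays_when_asked else 0.0  # Omega pays out iff it predicts you'd pay
    return p_asking * asking_branch + (1 - p_asking) * reward_branch

# With an even chance of either branch, committing to pay looks good ex ante:
print(policy_ev(True, 0.5), policy_ev(False, 0.5))   # 4995.0 vs 0.0

# If the agent could be *reliably* placed in the asking branch (p_asking = 1),
# the paying policy just loses $10, which is why the argument would weaken:
print(policy_ev(True, 1.0), policy_ev(False, 1.0))   # -10.0 vs 0.0
```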
I’m asking for more detail about how the predictor is constructed such that the predictor can accurately point out that the agent has the same output as the box. Just as counterfactual mugging would be less compelling if we had to rely on the agent happening to have the stated subjunctive dependencies, rather than our being able to describe a scenario in which it seems very reasonable for the agent to have them, your example would be less compelling if the box just happened to contain a slip of paper with our exact actions, the predictor just happened to guess this correctly, and our trust in the predictor just happened to be warranted. In that case I would agree that something has gone wrong, but all that has gone wrong is that the agent had a poor picture of the world (one which is subjunctively incorrect from our perspective, even though it made correct predictions).
On the other hand, if the predictor runs a simulation of us and then purposefully chooses a box whose output is identical to ours, the situation seems perfectly sensible: “the box” that’s correlated with our output subjectively is a box which is chosen differently in cases where our output is different; and the choice-of-box contains a copy of us. So the example works: there is a copy of us somewhere in the computation which correlates with us.
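A minimal sketch of what I mean, under the assumption that the predictor literally runs a copy of the agent’s decision procedure (all names below are hypothetical):

```python
# Hypothetical toy model: the predictor simulates the agent, then selects whichever
# pre-existing box already matches the simulated output.

def agent(observation: str) -> str:
    # Stand-in for the agent's actual decision procedure.
    return "left" if "shiny" in observation else "right"

def predictor(boxes: dict[str, str], observation: str) -> str:
    """Return the label of a box whose contents match the agent's output."""
    simulated_output = agent(observation)      # the copy of us lives here
    for label, contents in boxes.items():
        if contents == simulated_output:
            return label                       # chosen *because* it matches
    raise ValueError("no matching box was prepared")

boxes = {"A": "left", "B": "right"}
print(predictor(boxes, "nothing here"))        # prints "B"
# The box's "correlation" with our output routes entirely through the simulation
# inside the choice-of-box, which is the point being made above.
```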
I’ve read it now. I think you could already have guessed that I agree with the ‘subjective’ point and disagree with the ‘meaningless to consider the case where you have full knowledge’ point.
“‘The box’ that’s correlated with our output subjectively is a box which is chosen differently in cases where our output is different; and, the choice-of-box contains a copy of us. So the example works”—that’s a good point, and if you examine the source code, you’ll know it was choosing between two boxes. Maybe we need an extra layer of indirection. There’s a Truth Tester who can verify that the Predictor is accurate by examining its source code, and you only get to examine the Truth Tester’s code, so you never end up seeing the code within the Predictor that handles the case where the box doesn’t have the same output as you. As far as you are subjectively concerned, that doesn’t happen.
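Here’s a rough sketch of the indirection I have in mind (purely illustrative; every name and detail below is a hypothetical stand-in):

```python
# Hypothetical layering: the agent may read the Truth Tester's code (the function
# below), but never the Predictor's own source, which stays behind an opaque handle.

PREDICTOR_SOURCE = "<opaque to the agent>"    # the agent never gets to inspect this

def truth_tester(predictor_source: str) -> bool:
    """Certify the Predictor's accuracy. The agent can audit THIS code and confirm
    it only vouches for accurate predictors; how the check works doesn't matter."""
    return accuracy_check(predictor_source)

def accuracy_check(source: str) -> bool:
    # Placeholder for whatever (trusted) analysis the Truth Tester performs.
    return True

# From the agent's perspective:
#   1. it audits truth_tester and decides to trust it;
#   2. truth_tester(PREDICTOR_SOURCE) returns True, vouching for the Predictor;
#   3. the Predictor then asserts "the box has the same output as you";
# and the agent never sees the Predictor's code for the non-matching case.
print(truth_tester(PREDICTOR_SOURCE))
```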
Ok, so you find yourself in this situation where the Truth Tester has verified that the Predictor is accurate, and you’ve verified that the Truth Tester is accurate, and the Predictor tells you that the direction you’re about to turn your head has a perfect correspondence to the orbit of some particular asteroid. Lacking the orbit information yourself, you now have a subjective link between your next action and the asteroid’s path.
This case does appear to present some difficulty for me.
I think this case isn’t actually so different from the previous case, because although you don’t know the source code of the Predictor, you might reasonably suspect that the Predictor picks out an asteroid after predicting you (or selects the equation relating your head movement to the asteroid orbit after picking out the asteroid). We might suspect this precisely because it is implausible that the asteroid is actually mirroring our computation in a more significant sense. So using a Truth Tester as intermediary increases the uncertainty of the situation, but increased uncertainty is compatible with the same resolution.
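As a toy illustration of that suspicion (hypothetical names throughout, and assuming the Predictor really does work by predicting first and fitting the correspondence afterwards):

```python
# Hypothetical mechanism: predict the head turn first, then hunt for an asteroid
# (or an encoding) whose orbit can be described as "corresponding" to that prediction.

def predict_head_turn(agent_model) -> str:
    return agent_model()                      # the copy of us, again, sits right here

def encodes(orbit_summary: str, prediction: str) -> bool:
    # Placeholder rule, chosen AFTER the prediction is known; any rule would do.
    return prediction in orbit_summary

def choose_correspondence(prediction: str, orbits: dict[str, str]) -> tuple[str, str]:
    """Pick an asteroid whose orbit 'matches' the already-made prediction."""
    for name, summary in orbits.items():
        if encodes(summary, prediction):
            return name, f"head turn <-> orbit of {name}"
    raise LookupError("no asteroid fits; the Predictor would pick another encoding")

agent_model = lambda: "left"
orbits = {"2019 XY": "drifts left of the ecliptic", "2007 AB": "drifts right of it"}
print(choose_correspondence(predict_head_turn(agent_model), orbits))
# The announced 'perfect correspondence' is downstream of a prediction of us,
# not evidence that the asteroid itself mirrors our computation.
```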
What your revision does do, though, is highlight how the counterfactual expectation has to differ from the evidential conditional. We may think “the Predictor would have selected a different asteroid (or different equation) if its computation of our action had turned out different”, but, we now know the asteroid (and the equation); so, our evidential expectation is clearly that the asteroid has a different orbit depending on our choice of action. Yet, it seems like the sensible counterfactual expectation given the situation is … hm.
Actually, now I don’t think it’s quite that the evidential and counterfactual expectation come apart. Since you don’t know what you actually do yet, there’s no reason for you to tie any particular asteroid to any particular action. So, it’s not that in your state of uncertainty choice of action covaries with choice of asteroid (via some particular mapping). Rather, you suspect that there is such a mapping, whatever that means.
In any case, this difficulty was already present without the Truth Tester serving as intermediary: the Predictor’s choice of box is already known, so even though it is sensible to think of the chosen box as what counterfactually varies based on the choice of action, on the spot what makes sense (evidentially) is to anticipate the same box having different contents.
So, the question is: what’s my naive functionalist position supposed to be? What sense of “varies with” is supposed to necessitate the presence of a copy of me in the (logico-)causal ancestry of an event?
It occurs to me that although I have made clear that I (1) favor naive functionalism and (2) am far from certain of it, I haven’t actually made clear that I further (3) know of no situation where I think the agent has a good picture of the world and where the agent’s picture leads it to conclude that there’s a logical correlation with its action which can’t be accounted for by a logical cause (i.e., something like a copy of the agent somewhere in the computation of the correlated thing). That is, if there are outright counterexamples to naive functionalism, I think they’re actually tricky to state, and I have at least considered a few cases—your attempted counterexample comes as no surprise to me, and I suspect you’ll have to try significantly harder.
My uncertainty is, instead, in the large ambiguity of concepts like “instance of an agent” and “logical cause”.