To me it feels like the natural place to draw the line is update-on-computations but updateless-on-observations. This is because (1) it never disincentivizes thinking clearly, so commitment races bottom out in a reasonable way, and (2) it allows cooperation on the Newcomblike problems that are common in the real world.
It doesn’t do well in worlds with a lot of logical counterfactual mugging, but I think I’m okay with this? I can’t see why this situation would be very common, and if it comes up it seems that an agent that updates on computations can use some precommitment mechanism to take advantage of it (e.g. making another agent).
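To make the trade-off I have in mind concrete, here is a toy sketch (the payoffs and the 50/50 prior are made up) of the ex-ante value at stake in a counterfactual mugging, and of what an agent that updates on computations gives up in the logical version, where it has already worked out the answer by the time it decides:

```python
# Toy numbers for the trade-off above (payoffs and the 50/50 chance are made up):
# paying costs 100, and the reward in the other branch is 10,000.
P_HEADS = 0.5          # prior on the coin flip, or on the unknown logical fact
COST, REWARD = 100, 10_000

def ex_ante_value(pays_when_asked: bool) -> float:
    """Expected value of a policy, taken from the prior, for either mugging."""
    asked_branch = -COST if pays_when_asked else 0      # you are asked to pay
    reward_branch = REWARD if pays_when_asked else 0    # rewarded iff predicted to pay
    return P_HEADS * asked_branch + (1 - P_HEADS) * reward_branch

# Updateless-on-observations: commits to paying in the observational mugging.
print(ex_ante_value(True))    # 4950.0
# Update-on-computations: in the logical version it has already worked out the
# answer when asked, sees only the -100, and refuses.
print(ex_ante_value(False))   # 0.0
```

The value foregone in the second case is exactly what I'm claiming should be rare in practice.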
Am I missing something about why logical counterfactual muggings are likely to be common?
Looking through your PIBBSS report (which is amazing, very helpful), I intuitively feel the pull of Desideratum 4 (No existential regret), and also the intuition of wanting to treat logical uncertainty and empirical uncertainty in a similar way. But ultimately I’m so horrified by the mess that comes from being updateless-on-logic that being completely updateful on logic is looking pretty good to me.

(Great post, thanks)
To me it feels like the natural place to draw the line is update-on-computations but updateless-on-observations.
A first problem with this is that there is no sharp distinction between purely computational (analytic) information/observations and purely empirical (synthetic) information/observations. This is a deep philosophical point, well-known in the analytic philosophy literature, and best represented by Quine’s “Two Dogmas of Empiricism” and his idea of the “Web of Belief”. (This is also related to Radical Probabilism.) But it’s unclear if this philosophical problem translates to a pragmatic one. So let’s just assume that the laws of physics are such that all superintelligences we care about converge on the same classification of computational vs empirical information.
A second and more worrying problem is that, even given such convergence, it’s not clear all other agents will decide to forego the possible apparent benefits of logical exploitation. It’s a kind of Nash equilibrium selection problem: if I were very sure all other agents forego them (and have robust cooperation mechanisms that deter exploitation), then I would just do the same. And indeed, it’s conceivable that our laws of physics (and algorithmics) are such that this is the case, and all superintelligences converge on the Schelling point of “never exploiting the learning of logical information”. But my probability of that is not very high, especially due to worries that different superintelligences might start with pretty different priors, and make commitments early on (before all posteriors have had time to converge). (That said, my probability is high that almost all deliberation is mostly safe, for more contingent reasons related to the heuristics they use and the values they have.)

You might also want to say something like “they should just use the correct decision theory to converge on the nicest Nash equilibrium!”. But that’s question-begging, because the worry is exactly that others might have different notions of this normative “nice” (indeed, there is no objective criterion for decision theory). The problem recurs: we can’t just invoke a decision theory to decide on the correct decision theory.
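As a toy illustration of the selection problem (the game and its payoffs are entirely made up), “forgo logical exploitation” vs “exploit” can look like a stag hunt: both all-forgo and all-exploit are equilibria, and nothing in the payoffs alone tells you which one other superintelligences will land on:

```python
# A made-up 2x2 "forgo vs exploit" game with a stag-hunt structure: mutual
# forgoing is best, but exploiting is safer if you're unsure what others do.
import itertools

ACTIONS = ("forgo", "exploit")
PAYOFF = {  # (row action, col action) -> (row's payoff, col's payoff)
    ("forgo", "forgo"):     (10, 10),
    ("forgo", "exploit"):   (0, 7),
    ("exploit", "forgo"):   (7, 0),
    ("exploit", "exploit"): (3, 3),
}

def is_nash(row, col):
    """Neither player gains by unilaterally deviating."""
    r_pay, c_pay = PAYOFF[(row, col)]
    best_r = max(PAYOFF[(a, col)][0] for a in ACTIONS)
    best_c = max(PAYOFF[(row, a)][1] for a in ACTIONS)
    return r_pay == best_r and c_pay == best_c

print([p for p in itertools.product(ACTIONS, ACTIONS) if is_nash(*p)])
# [('forgo', 'forgo'), ('exploit', 'exploit')] -- two equilibria, so the
# payoff structure alone doesn't settle which one everyone converges on.
```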
Am I missing something about why logical counterfactual muggings are likely to be common?
As mentioned in the post, Counterfactual Mugging as presented won’t be common, but equivalent situations in multi-agentic bargaining might, due to (the naive application of) some priors leading to commitment races. (And here “naive” doesn’t mean “shooting yourself in the foot”, but rather “doing what looks best from the prior”, even if unbeknownst to you it has dangerous consequences.)
if it comes up it seems that an agent that updates on computations can use some precommitment mechanism to take advantage of it
It doesn’t look like something as simple as that will solve it, because of the reasoning in this paragraph:
Unfortunately, it’s not that easy, and the problem recurs at a higher level: your procedure to decide which information to use will depend on all the information, and so you will already lose strategicness. Or, if it doesn’t depend, then you are just being updateless, not using the information in any way.
Or in other words, you need to decide on the precommitment ex ante, when you still haven’t thought much about anything, so your precommitment might be bad. (Although to be fair there are ongoing discussions about this.)
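To put the regress in schematic form (purely illustrative, hypothetical code): whatever “selector” you write to decide which information to use, either it reads the information, in which case you have already updated on it, or it ignores it, in which case it is just an updateless precommitment one level up:

```python
# Purely illustrative: the "selector" that decides which information to use.
def updateful_selector(all_info):
    # Reads all_info to decide what to keep -- but reading it is already updating.
    return {x for x in all_info if not x.startswith("logical:")}

def updateless_selector(_all_info):
    # Ignores its input entirely: a precommitment fixed ex ante, chosen before
    # having thought much about anything, so it may well be a bad one.
    return {"observations only"}

info = {"obs: the box is transparent", "logical: the hard computation outputs 7"}
print(updateful_selector(info))   # filtered using the information itself
print(updateless_selector(info))  # fixed answer; the information is never used
```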
A first problem with this is that there is no sharp distinction between purely computational (analytic) information/observations and purely empirical (synthetic) information/observations.
I don’t see the fuzziness here, even after reading the “Two Dogmas” Wikipedia page (but not really understanding it; it’s hidden behind a wall of jargon). If we have some prior over universes, and some observation channel, we can define an agent that is updateless with respect to that prior, and updateful with respect to any calculations it performs internally. Is there a section of Radical Probabilism that is particularly relevant? It’s been a while.

It’s not clear to me why all superintelligences having the same classification matters. They can communicate about edge cases and differences in their reasoning. Do you have an example here?
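For concreteness, here is the kind of toy construction I have in mind for the paragraph above (the universes, observation channel, and payoffs are all made up): score whole observation-to-action policies by prior expected utility, and let the agent run whatever internal computation it likes while doing the scoring:

```python
# Toy sketch of "updateless on observations, updateful on computations":
# commit ex ante to the obs->action policy that maximizes expected utility
# under the prior over universes.
import itertools

PRIOR = {"universe_A": 0.6, "universe_B": 0.4}   # made-up prior over universes
OBS = {"universe_A": "x", "universe_B": "y"}     # made-up observation channel
ACTIONS = ("pay", "refuse")

def utility(universe, action):                   # made-up payoffs
    table = {("universe_A", "pay"): 5, ("universe_A", "refuse"): 0,
             ("universe_B", "pay"): -1, ("universe_B", "refuse"): 2}
    return table[(universe, action)]

def choose_policy():
    # A policy maps each possible observation to an action.
    observations = sorted(set(OBS.values()))
    policies = [dict(zip(observations, acts))
                for acts in itertools.product(ACTIONS, repeat=len(observations))]
    # The agent may run any internal computation it likes while scoring; the
    # only constraint is that the score is always taken from the prior, never
    # from a posterior conditioned on the observation actually received.
    def score(policy):
        return sum(p * utility(u, policy[OBS[u]]) for u, p in PRIOR.items())
    return max(policies, key=score)

print(choose_policy())   # {'x': 'pay', 'y': 'refuse'} -- the ex-ante optimum
```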
A second and more worrying problem is that, even given such convergence, it’s not clear all other agents will decide to forego the possible apparent benefits of logical exploitation. It’s a kind of Nash equilibrium selection problem: if I were very sure all other agents forego them (and have robust cooperation mechanisms that deter exploitation), then I would just do the same.
I think I don’t understand why this is a problem. So what if there are some agents running around being updateless about logic? What’s the situation that we are talking about a Nash equilibrium for?
As mentioned in the post, Counterfactual Mugging as presented won’t be common, but equivalent situations in multi-agentic bargaining might, due to (the naive application of) some priors leading to commitment races.
Can you point me to an example in bargaining that motivates the usefulness of logical updatelessness? My impression of that section wasn’t “here is a realistic scenario that motivates the need for some amount of logical updatelessness”, it felt more like “logical bargaining is a situation where logical updatelessness plausibly leads to terrible and unwanted decisions”.
It doesn’t look like something as simple as that will solve it, because of the reasoning in this paragraph:
Unfortunately, it’s not that easy, and the problem recurs at a higher level: your procedure to decide which information to use will depend on all the information, and so you will already lose strategicness. Or, if it doesn’t depend, then you are just being updateless, not using the information in any way.
Or in other words, you need to decide on the precommitment ex ante, when you still haven’t thought much about anything, so your precommitment might be bad.
Yeah, I wasn’t thinking of that as a “solution”; I’m biting the bullet of losing some potential value and having a decision theory that doesn’t satisfy all the desiderata. I was just saying that in some situations, such an agent can patch the problem using other mechanisms, just as an EDT agent can try to implement some external commitment mechanism if it lives in a world full of transparent Newcomb problems.

(Sorry, short on time now, but we can discuss in-person and maybe I’ll come back here to write the take-away)
To make a bit of a point here, which might clarify the discussion:
A first problem with this is that there is no sharp distinction between purely computational (analytic) information/observations and purely empirical (synthetic) information/observations. This is a deep philosophical point, well-known in the analytic philosophy literature, and best represented by Quine’s “Two Dogmas of Empiricism” and his idea of the “Web of Belief”. (This is also related to Radical Probabilism.) But it’s unclear if this philosophical problem translates to a pragmatic one. So let’s just assume that the laws of physics are such that all superintelligences we care about converge on the same classification of computational vs empirical information.
I’d say the major distinction between logical/mathematical/computational uncertainty and empirical uncertainty, which Quine ignored, is that empirical uncertainty is the problem of starting from a prior and updating, where the worlds/hypotheses being updated on are all as self-consistent/real as each other. Thus even with infinite compute, observing empirical evidence actually gives us new information, since it reduces the number of possible states we could be in.
Meanwhile, logical/mathematical/computational uncertainty is a case where you know a priori that there is only one correct answer, and the reason you are uncertain is solely your own boundedness. If you had infinite compute, as in the model of computation linked below, you could in principle compute the correct answer, which applies everywhere. This is why logical uncertainty was so hard: since there is only one possible answer and settling it just requires computing time, it messes with standard update procedures; the theoretical solution is logical induction (also linked below).
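To put that contrast in code form (a toy sketch; the worlds, the observation, and the stand-in computation are all made up):

```python
# Toy contrast between the two kinds of uncertainty described above.
from fractions import Fraction

# Empirical: several worlds are all genuinely possible; an observation rules
# some of them out, so even an unbounded reasoner learns something new.
prior = {"world_1": Fraction(1, 3), "world_2": Fraction(1, 3), "world_3": Fraction(1, 3)}
consistent_with_obs = {"world_1", "world_3"}               # made-up observation
total = sum(p for w, p in prior.items() if w in consistent_with_obs)
posterior = {w: p / total for w, p in prior.items() if w in consistent_with_obs}
print(posterior)   # worlds 1 and 3 each end up with probability 1/2

# Logical: there is exactly one right answer, and the only reason to be unsure
# is bounded compute; spending more compute settles it completely.
def hard_logical_fact():
    return sum(range(1, 101)) % 2 == 0   # stand-in for any expensive computation

print(hard_logical_fact())   # True -- with enough compute the answer is just settled
```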
Note that I haven’t solved the other problems around updating on computations (where there is only one correct answer) versus being updateless on empirical uncertainty (where multiple correct answers are allowed).
Model of computation: https://arxiv.org/abs/1806.08747
Logical induction: https://arxiv.org/abs/1609.03543