The fifth fact is a consequence of the previous ones.
Um, no. Again, I think you may have misunderstood that point there. The point is not that all Countesses can inevitably and inescapably be blackmailed. It is just that a Countess designed in a particular way can be blackmailed. The notion of a superior epistemic vantage point is not that there is some way for the Baron to always get it, but that if the Baron happens to have it, the Baron wins.
Could the countess plausibly raise herself to a superior epistemic vantage over the baron, and get out from under his thumb? Alas no.
Again, this just wasn’t a conclusion of the workshop. A certain fixed equation occupies a lower epistemic vantage. Nothing was said about being unable to raise yourself up.
Alas no. Once the countess allows herself to use tactics conditional on the baron’s actions, the whole set-up falls apart: each starts modelling the other’s actions based on their own actions, which are in turn based on the other’s actions, and so on. The baron can no longer assume that the countess has no influence on his decision, as now she does, so the loop never terminates.
Or the Countess just decides not to pay, unconditional on anything the Baron does. Also, if the Baron ends up in an infinite loop or fails to resolve the way the Baron wants to, that is not really the Countess’s problem.
As I did say at the decision workshop, the resolution that seems most likely is “respond to offers, not to threats”.
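To make the non-terminating loop from the quoted passage concrete, here is a minimal sketch, using purely hypothetical baron() and countess() functions, of what happens when each side naively simulates the other’s source before deciding:

```python
# Purely illustrative: each agent decides only after simulating the other,
# so neither call can ever return.

def baron():
    countess_choice = countess()          # the baron models the countess...
    return "blackmail" if countess_choice == "pay" else "desist"

def countess():
    baron_choice = baron()                # ...who is now modelling the baron, and so on
    return "pay" if baron_choice == "blackmail" else "keep_money"

# baron()  # uncommenting this recurses forever (Python cuts it off with RecursionError)
```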
I haven’t misunderstood the points—though I have, I fear, over-simplified the presentation for illustrative purposes. The key missing ingredient is that when I wrote that:
The baron must model the countess as seeing his decision as a fixed fact over which she has no influence,
implicit in that was the assumption that the baron was rational, knew his own source and the countess’s, and would arrive at a decision in finite time—hence he must be correct in his assumption. I nearly wrote it that way, but thought this layout would be more intuitive.
It is just that a Countess designed in a particular way can be blackmailed.
Indeed. Those are conditions that allow the countess to be blackmailed.
Could the countess plausibly raise herself to a superior epistemic vantage over the baron, and get out from under his thumb? Alas no.
If the countess is already in an inferior epistemic vantage point, she can’t raise herself deterministically to a higher one—for instance, she cannot stop treating the baron’s actions as a fixed fact, as an entity capable of doing that is not genuinely treating them as fixed in the first place.
The rest of that section was a rather poorly phrased way of saying that two entities cannot each occupy a superior epistemic vantage over the other.
The fifth fact is a consequence of the previous ones.
It seems that by “consequence” you mean “logical consequence”: that is, if I, observing this scenario, note that the first 5 conditions hold, I can derive that the 6th condition holds as well.
There is another interpretation, though: that you mean a “causal consequence”, that the baron, by having a certain model of the countess, makes that model correct, because the baron is rational and therefore will produce a correct model. This interpretation, however, is wrong. (Eliezer, were you interpreting it this way when you said Stuart misunderstood your point?)
Yes, I’m eliding Gödelian arguments there… Consequences of anyone being rational and believing X have been removed.
Interestingly, in the model I produced below, both the countess and the baron produce correct models of each other. Furthermore, the countess knows she produces a correct model of the baron (as she runs his source successfully).
It also happens that the baron can check that he has the correct model of the countess, after making his decision, by running her code. Since the countess will stop running the baron’s code as soon as she knows his outcome, he can know that his model was accurate in finite time.
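As a minimal sketch only, with placeholder decisions standing in for whatever the model below actually prescribes, the asymmetry that makes everything halt looks roughly like this: the baron’s decision procedure never simulates the countess, so her run of his source terminates, and therefore so does his after-the-fact check of her.

```python
# Hypothetical sketch of the asymmetric simulation described above; the specific
# choices returned are placeholders, not the actual outcomes of the model.

def baron():
    # The baron treats the countess's response as a fixed fact and simply decides.
    return "blackmail"                      # placeholder decision

def countess():
    barons_choice = baron()                 # she runs his source; this terminates
    return "pay" if barons_choice == "blackmail" else "keep_money"  # placeholder policy

def baron_verifies_model(predicted="pay"):
    decision = baron()                      # the baron decides first...
    return countess() == predicted          # ...then runs her code to check his model of her

print(baron_verifies_model())               # prints True, and everything halts
```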
implicit in that was the assumption that the baron was rational, knew his own source and the countess’s, and would arrive at a decision in finite time—hence he must be correct in his assumption. I nearly wrote it that way, but thought this layout would be more intuitive.
You say here that the baron is rational and knows his own source and the countess’s. This being the case, the only way for the countess to be blackmailed is if she implements a defective decision algorithm. Yet you describe the difference between the two as an ‘inferior epistemic vantage point’. This does not seem like the right label. It seems to me that the advantage is instrumental and not epistemic.
We do not yet have a decision algorithm that reliably implements “respond to offers, not to threats”.
Therefore ‘defective decision algorithm’ must include everything we are capable of designing today :-)
We don’t have a decision theory that reliably responds to offers, not to threats. We do have an algorithm that responds to offers, not to threats. Approximately it goes: “when dealing with rational agents and there is full epistemic awareness thrown all over the place, respond to offers, not to threats, because that is what works best.” Unfortunately, integrating that into situations with epistemic uncertainty is all sorts of complex and probably beyond me. But that is a general problem that can be expected with any decision theory.
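Purely as an illustrative stand-in for that approximate rule (the classification of a proposal as an offer or a threat is taken as given here, which is exactly the hard part under epistemic uncertainty), it might look like:

```python
# Illustrative stand-in for the stated heuristic, not a worked-out decision theory.
# Deciding whether a proposal really is an "offer" or a "threat" is assumed away
# here; under real epistemic uncertainty that classification is the hard part.

def respond(proposal_kind: str, rational_counterparty: bool, full_awareness: bool) -> str:
    if not (rational_counterparty and full_awareness):
        return "undefined"   # the heuristic only claims to apply in this regime
    if proposal_kind == "threat":
        return "refuse"      # a policy of refusing all threats makes issuing them pointless
    if proposal_kind == "offer":
        return "evaluate"    # genuine trades are still considered on their merits
    return "undefined"

print(respond("threat", True, True))   # -> refuse
```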
Sorry, trapped by Gödel again. Consequences of anyone being rational and believing X have been removed.