I thought your post asked about the proposition “o=ASK ⇒ a=PAY”, and didn’t mention the other one at all. You asked this:
Omega asks you to pay him $100. Do you pay?
not this:
Do you precommit to pay?
So I just don’t use the naked proposition “a=PAY” anywhere. In fact I don’t even understand how to define its truth value for all agents, because it may so happen that the agent gets $1000 and walks away without being asked anything.

Seems to me that for all agents there is a fact of the matter about whether they would pay if asked, even for agents that never in fact are asked. So I do interpret a=PAY as “would pay”. But maybe there are other legitimate interpretations.

If both the agent and Omega are deterministic programs, and the agent is never in fact asked, that fact may be converted into a statement about natural numbers. So what you just said is equivalent to this:

Seems to me that for all agents there is a fact of the matter about whether they would pay if 1 were equal to 2.

I don’t know, this looks shady.
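To spell out the conversion (my gloss, not from the exchange, assuming W and the agent are modeled as Turing machines; “Asks” is a hypothetical predicate on execution traces): using Kleene’s T predicate, “the agent is asked” becomes an arithmetic sentence such as

$$o=\mathrm{ASK} \;\equiv\; \exists s\,\big(T(\ulcorner W\urcorner, 0, s) \wedge \mathrm{Asks}(s)\big),$$

and if that sentence is refutable, the conditional “o=ASK ⇒ a=PAY” has a provably false antecedent, formally on a par with “1=2 ⇒ a=PAY”.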
Why? Say the world program W includes a function f, and it’s provable that W could never call f with argument 1. That doesn’t mean there’s no fact of the matter about what happens when f(1) is computed (though of course it might not halt); after all, f doesn’t have to be called from W in order to be computed.
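A minimal runnable sketch of this point (the function bodies are illustrative stand-ins, not from the exchange):

```python
def f(x):
    # The "agent": what it returns when handed an argument.
    return "PAY" if x == 1 else "WALK"

def W():
    # The "world program": by inspection, the only call it ever
    # makes is f(2) -- it provably never calls f(1).
    return f(2)

print(W())   # WALK -- everything that actually happens inside W
print(f(1))  # PAY  -- a definite computation all the same,
             #        even though W never makes this call
```

Nothing about W’s structure stops us from computing f(1) directly; its value is fixed by f’s code alone.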
Even if f can be regarded as a rational agent who ‘knows’ the source code of W, the worst that could happen is that f ‘deduces’ a contradiction and goes insane. That’s different from the agent itself being in an inconsistent state.
Analogy: We can define the partial derivatives of a Lagrangian with respect to q and q-dot, even though it doesn’t make sense for q and q-dot to vary independently of each other.
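For concreteness, the standard textbook instance of the analogy (my example, not from the exchange): take a particle in a potential,

$$L(q,\dot q)=\tfrac{1}{2}m\dot q^{2}-V(q),\qquad \frac{\partial L}{\partial q}=-V'(q),\qquad \frac{\partial L}{\partial \dot q}=m\dot q.$$

The partials treat L as an ordinary function of two independent slots; the constraint that q-dot equals dq/dt along a physical trajectory never enters their definition, just as W’s calling pattern never enters the definition of f(1).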