For those who are:

- Mathematically literate, but
- Not familiar with this particular analogy (of proofs <-> agents):

Do you know of a good reference for how to interpret discussions like this?
For example: “A tries to prove that A→B, and B tries to prove A”. If A and B are propositions, what does it mean for a proposition to try to prove another proposition?
(There might be more language that needs interpreting, but I got stuck there.)
Perhaps the confusion is mostly me being idiosyncratic! I don’t have a good reference, but can attempt an explanation.
The propositions A and B are meant to model the behaviour of some agents, say Alice and Bob. The proposition A means “Alice cooperates”, likewise B means “Bob cooperates”.
I’m probably often switching viewpoints, talking about A as if it were Alice, when formally A is some statement we’re using to model Alice’s behaviour.
When I say “A tries to prove that A→B”, what I really mean is: “In this scenario, Alice is looking for a proof that if she cooperates, then Bob cooperates. We model this with A meaning ‘Alice cooperates’, and A follows from □(A→B).” Symmetrically, “B tries to prove A” means that B follows from □A.
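To see how these conditions play out, here is a sketch of my own (not something stated above): suppose we strengthen the two “follows from” conditions into the fixed points A ↔ □(A→B) for Alice and B ↔ □A for Bob, where the extra ↔ direction is my assumption. Mutual cooperation is then derivable in the modal logic K4 (and hence in GL):

```latex
% Assumed fixed points (my strengthening of the conditions above):
%   Alice: A <-> Box(A -> B)      Bob: B <-> Box A
\begin{align*}
&1.\ A \to \Box(A \to B)                  && \text{Alice's fixed point, left to right}\\
&2.\ \Box(A \to B) \to A                  && \text{Alice's fixed point, right to left}\\
&3.\ \Box\Box(A \to B) \to \Box A         && \text{necessitation and distribution on 2}\\
&4.\ \Box(A \to B) \to \Box\Box(A \to B)  && \text{axiom 4}\\
&5.\ \Box A \to B                         && \text{Bob's fixed point, left to right}\\
&6.\ A \to B                              && \text{chain 1, 4, 3, 5}\\
&7.\ \Box(A \to B)                        && \text{necessitation on 6}\\
&8.\ A                                    && \text{modus ponens, 7 and 2}\\
&9.\ \Box A \text{, hence } B             && \text{necessitation on 8, then 5}
\end{align*}
```

The crucial feature is that Alice’s condition is about provability: once A→B is proven as a theorem (step 7), everything else cascades.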
Note that every time we use □X we’re talking about proofs of X of any size. This makes our model less realistic, since Alice and Bob only have a limited amount of time in which to reason about each other and try to prove things. The next step would be to relax the assumptions to things like A←□ₖB, which says “Alice cooperates whenever it can be proven within k steps that Bob cooperates”.
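To illustrate the bounded picture, here is a toy sketch (entirely my own; the function name and the hard-coded rules are illustrative assumptions, not machinery from this discussion). It reads □ₖX as “X is derivable within k rounds of forward chaining” over the agents’ rules, rather than doing genuine bounded proof search:

```python
# Toy model: box_k X = "X is derivable within k rounds of forward chaining".
# The rules encode Alice (A <- box(A->B)) and Bob (B <- box A).

def bounded_provable(k):
    """Statements derivable in at most k rounds of forward chaining."""
    proven = set()
    for _ in range(k):
        new = set(proven)
        if "A->B" in proven:  # Alice's rule: A follows from box(A->B)
            new.add("A")
        if "A" in proven:     # Bob's rule: B follows from box A
            new.add("B")
        # "A->B" is provable outright: assuming A, Bob's rule yields B.
        # (Hard-coded here; a real prover would have to discover this.)
        new.add("A->B")
        proven = new
    return proven

for k in range(4):
    print(k, sorted(bounded_provable(k)))
# 0 []
# 1 ['A->B']
# 2 ['A', 'A->B']
# 3 ['A', 'A->B', 'B']   <- mutual cooperation once k is large enough
```

The bounded flavour shows up exactly as you’d hope: cooperation only materialises once k is large enough for the whole chain of reasoning to fit.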