Self-Referential Probabilistic Logic Admits the Payor’s Lemma
In summary: A probabilistic version of the Payor’s Lemma holds under the logic proposed in the Definability of Truth in Probabilistic Logic. This gives us modal fixed-point-esque group cooperation even under probabilistic guarantees.
EDIT 10/24: I think the way this post is framed is somewhat confused and should not be taken literally. However, I do stand by some of the core intuitions here.
Background
Payor’s Lemma: If ⊢ □(□x → x) → x, then ⊢ x.
We assume two rules of inference:
Necessitation: ⊢ x ⟹ ⊢ □x;
Distributivity: ⊢ □(x → y) ⟹ ⊢ □x → □y.
Proof:
1. ⊢ x → (□x → x), by tautology;
2. ⊢ □x → □(□x → x), by 1 via necessitation and distributivity;
3. ⊢ □(□x → x) → x, by assumption;
4. ⊢ □x → x, from 2 and 3 by modus ponens;
5. ⊢ □(□x → x), from 4 by necessitation;
6. ⊢ x, from 5 and 3 by modus ponens.
The Payor’s Lemma is provable in all normal modal logics (as it can be proved in K, the weakest, because it only uses necessitation and distributivity). Its proof sidesteps the assertion of an arbitrary modal fixedpoint, does not require internal necessitation (⊢ □x ⟹ ⊢ □□x), and provides the groundwork for Löbian-handshake-based cooperation without Löb’s theorem.
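Step 1 of the proof uses only propositional reasoning. As a quick sanity check (a hypothetical sketch, with made-up helper names, treating □x as an opaque propositional atom), it can be verified by truth table:

```python
from itertools import product

def implies(a, b):
    # Material implication: a → b.
    return (not a) or b

# Treat x and the modal atom "□x" as independent propositional variables
# and check that step 1, x → (□x → x), is a propositional tautology.
step1 = all(
    implies(x, implies(box_x, x))
    for x, box_x in product([False, True], repeat=2)
)
print(step1)  # True
```

The remaining steps are not propositional: they lean on the two modal rules of inference above.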
It is known that Löb’s theorem fails to hold in reflective theories of logical uncertainty. However, a proof of a probabilistic Payor’s lemma has been proposed, which modifies the necessary rules of inference to be:
Necessitation: ⊢ x ⟹ ⊢ □p x;
Weak Distributivity: ⊢ x → y ⟹ ⊢ □p x → □p y,
where here we take □p to be an operator which returns True if the internal credence of x is greater than p and False if not. (Formalisms incoming.)
The question is then: does there exist a consistent formalism under which these rules of inference hold? The answer is yes, and it is provided by Christiano 2012.
Setup
(Regurgitation and rewording of the relevant parts of the Definability of Truth)
Let L be some language and T be a theory over that language. Assume that L is powerful enough to admit a Gödel encoding and that it contains terms which correspond to the rational numbers ℚ. Let φ₁, φ₂, φ₃, … be some fixed enumeration of all sentences in L. Let ⌜φ⌝ represent the Gödel encoding of φ.
We are interested in the existence and behavior of a function P which assigns a real-valued probability in [0, 1] to each well-formed sentence of L. We are guaranteed the coherency of P with the following assumptions:
1. For all φ, ψ, we have that P(φ) = P(φ ∧ ψ) + P(φ ∧ ¬ψ).
2. For each tautology φ, we have P(φ) = 1.
3. For each contradiction φ, we have P(φ) = 0.
Note: I think that 2 & 3 are redundant (as John Baez observes), and that these axioms do not necessarily constrain P to [0, 1] in and of themselves (hence the extra restriction). However, neither concern is relevant to the result.
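To make the coherency conditions concrete, here is a minimal sketch (hypothetical helper names, dyadic weights chosen so floating-point sums are exact) in which P is induced by a distribution over truth assignments to two atoms; such a P satisfies all three assumptions:

```python
from itertools import product

# Models = truth assignments to two atoms (q, r); weights = a distribution.
models = list(product([False, True], repeat=2))
weights = [0.125, 0.25, 0.5, 0.125]  # dyadic, so the sums below are exact

def P(phi):
    # P(phi) = total weight of the models in which phi holds.
    return sum(w for m, w in zip(models, weights) if phi(*m))

q = lambda a, b: a

# Assumption 1: P(q) = P(q ∧ r) + P(q ∧ ¬r).
assert P(q) == P(lambda a, b: a and b) + P(lambda a, b: a and not b)
# Assumption 2: tautologies get probability 1.
assert P(lambda a, b: a or not a) == 1.0
# Assumption 3: contradictions get probability 0.
assert P(lambda a, b: a and not a) == 0.0
print("coherent")
```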
A coherent P corresponds to a distribution μ over models of L. A coherent P which gives probability 1 to a theory T corresponds to a distribution μ over models of T. We denote a function which generates a distribution over models of a given theory T as P_T.
Syntactic-Probabilistic Correspondence: Observe that T ⊢ φ ⟺ P_T(φ) = 1. This allows us to interchange the notions of syntactic consequence and probabilistic certainty.
Now, we want to give sane probabilities to sentences which talk about the probability P gives them. This means that we need some way of giving P the ability to talk about itself.
Consider the formula Bel. Bel takes as input the Gödel encodings of sentences, and Bel(⌜φ⌝) contains arbitrarily precise information about P(φ). In other words, if P(φ) = x, then the statement Bel(⌜φ⌝) > a is True for all a < x, and the statement Bel(⌜φ⌝) > b is False for all b > x. Bel is fundamentally a part of the system, as opposed to being some metalanguage concept.
(These are identical properties to those represented in Christiano 2012 by P. I simply choose to represent it with Bel as it (1) reduces notational uncertainty and (2) seems to be more in the spirit of Gödel’s Bew for provability logic.)
Let L′ denote the language created by affixing Bel to L. Then, there exists a coherent P_T for a given consistent theory T over L′ such that the following reflection principle is satisfied:

∀φ ∈ L′, ∀a, b ∈ ℚ: (a < P_T(φ) < b) ⟹ P_T(a < Bel(⌜φ⌝) < b) = 1.

In other words, a < P_T(φ) < b implies ⊢ a < Bel(⌜φ⌝) < b.
Proof
(From now on, for simplicity, we use P to refer to P_T and ⊢ to refer to T ⊢. You can think of this as fixing some theory T and operating within it.)
Let □p φ represent the sentence Bel(⌜φ⌝) > p, for some p ∈ ℚ; nested occurrences are abbreviated the same way, so □p□p φ stands for Bel(⌜Bel(⌜φ⌝) > p⌝) > p. Then, we have the following:
Probabilistic Payor’s Lemma: If ⊢ □p(□p x → x) → x, then ⊢ x.
Proof as per Demski:
1. ⊢ x → (□p x → x), by tautology;
2. ⊢ □p x → □p(□p x → x), by 1 via weak distributivity;
3. ⊢ □p(□p x → x) → x, by assumption;
4. ⊢ □p x → x, from 2 and 3 by modus ponens;
5. ⊢ □p(□p x → x), from 4 by necessitation;
6. ⊢ x, from 5 and 3 by modus ponens.
Rules of Inference
Necessitation: If ⊢ x, then P(x) = 1 by syntactic-probabilistic correspondence, so by the reflection principle we have P(□p x) = 1, and as such ⊢ □p x by syntactic-probabilistic correspondence.
Weak Distributivity: The proof of this is slightly more involved.
From ⊢ x → y we have (via correspondence) that P(x → y) = 1, so P(x ∧ ¬y) = 0. We want to prove that ⊢ □p x → □p y follows from this, or that P(□p x → □p y) = 1. We can do casework on P(x). If P(x) ≤ p, then □p x fails and weak distributivity follows from vacuousness. If P(x) > p, then as P(x ∧ ¬y) = 0 we have P(y) ≥ P(x ∧ y) = P(x), so P(y) > p. Then, □p x → □p y is True by reflection, so by correspondence it follows that ⊢ □p x → □p y.
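The non-vacuous branch of the casework can be spot-checked numerically. This is a toy sketch (made-up names) over random weights for the three truth assignments compatible with P(x ∧ ¬y) = 0:

```python
import random

# Spot-check: whenever P(x ∧ ¬y) = 0 (so P(x → y) = 1) and P(x) > p,
# coherency forces P(y) ≥ P(x ∧ y) = P(x) > p.
random.seed(0)
p = 0.5
violations = 0
for _ in range(1000):
    # Random weights for the assignments (x∧y), (¬x∧y), (¬x∧¬y).
    w_xy, w_nxy, w_nxny = (random.random() for _ in range(3))
    total = w_xy + w_nxy + w_nxny
    P_x = w_xy / total            # x only ever holds alongside y
    P_y = (w_xy + w_nxy) / total
    if P_x > p and not (P_y > p):
        violations += 1
print(violations)  # 0
```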
(I’m pretty sure this modal logic, with necessitation and weak distributivity, is not normal (it’s weaker than K). This may have some implications? But in the ‘agent’ context I don’t think that restricting ourselves to normal modal logics makes sense.)
Bots
Consider agents A, B, C which return True to signify cooperation in a multi-agent Prisoner’s Dilemma and False to signify defection. (Similar setup to Critch’s.) Each agent X has ‘beliefs’ P_X : L_X → [0, 1] representing their credences over all formal statements in their respective languages (we are assuming they share the same language: this is unnecessary).
Each agent has the ability to reason about their own ‘beliefs’ about the world arbitrarily precisely, and this allows them full knowledge of their utility function (if they are VNM agents, and up to the complexity of the world-states they can internally represent). Then, these agents can be modeled with Christiano’s probabilistic logic! And I would argue it is natural to do so (you could easily imagine an agent having access to its own beliefs with arbitrary precision by, say, repeatedly querying its own preferences).
Then, if each of A, B, C behaves in the following manner:

⊢ □a(□e E → E) → A
⊢ □b(□e E → E) → B
⊢ □c(□e E → E) → C

where E = A ∧ B ∧ C and e = max(a, b, c), they will cooperate by the probabilistic Payor’s lemma.
Proof:
1. ⊢ □a(□e E → E) ∧ □b(□e E → E) ∧ □c(□e E → E) → E, via conjunction;
2. ⊢ □e(□e E → E) → E, as if the e-threshold is satisfied all others are as well;
3. ⊢ E, by probabilistic Payor.
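As an illustration of step 2’s observation, here is a deliberately flattened sketch (hypothetical names and thresholds, with the self-reference removed): each agent cooperates exactly when a shared credence q that the group cooperates clears its own threshold, so any q above e = max(a, b, c) clears every threshold at once:

```python
# Thresholds a, b, c for agents A, B, C; e = max(a, b, c).
thresholds = {"A": 0.6, "B": 0.7, "C": 0.8}
e = max(thresholds.values())

def everyone_cooperates(q):
    # Each agent cooperates iff the shared credence q in E exceeds its threshold.
    return all(q > t for t in thresholds.values())

above_e = everyone_cooperates(0.9)    # q > e: all thresholds cleared
between = everyone_cooperates(0.75)   # clears A's and B's thresholds, not C's
print(above_e, between)
```

Of course, the actual result is stronger: the lemma derives the cooperation, rather than assuming a credence q handed down from outside.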
This can be extended to arbitrarily many agents. Moreover, the valuable insight here is that cooperation is achieved when the evidence that the group cooperates exceeds each and every member’s individual threshold for cooperation. It formalizes the intuitive strategy ‘I will only cooperate if there are no defectors’ (or perhaps ‘we will only cooperate if there are no defectors’).
It is important to note that any such P is going to be uncomputable. However, I think modeling agents as having arbitrary access to their beliefs is in line with existing ‘ideal’ models (think VNM: I suspect that this formalism closely maps onto VNM agents that have access to arbitrary information about their utility function, at least in the form of preferences), and these agents play well with modal fixedpoint cooperation.
Acknowledgements
This work was done while I was a 2023 Summer Research Fellow at the Center on Long-Term Risk. Many thanks to Abram Demski, my mentor who got me started on this project, as well as Sam Eisenstat for some helpful conversations. CLR was a great place to work! Would highly recommend if you’re interested in s-risk reduction.
Comments

Is there a reason why this differs from the standard presentation of K? Normally you would say that K is generated by the following (coupled with substitution):
Axioms:
- All tautologies of propositional logic.
- Distribution: □(x→y)→(□x→□y).
Rules of inference:
- Necessitation: ⟨x,□x⟩.
- Modus ponens: ⟨x→y,x,y⟩.
No particular reason (this is the setup used by Demski in his original probabilistic Payor post).
I agree this is nonstandard though! Considering necessitation as a rule of inference & not mentioning modus ponens, that is. Part of the justification is that probabilistic weak distributivity (⊢ x → y ⟹ ⊢ □p x → □p y) seems to me to be much closer to a ‘rule of inference’ than an axiom (or, at least, given the probabilistic logic setup we’re using, it’s already a tautology?).
On reflection, this presentation makes more sense to me (or at least gives me a better sense of what’s going on / what’s different between □p logic and □ logic). I am pretty sure they’re interchangeable however.
Thanks.
Do you have a reference for this? Or perhaps there is a quick proof that could convince me?
Payor’s Lemma holds in provability logic; distributivity is invoked when moving from step 1 to step 2, and this can be accomplished by considering all instances of distributivity to be true by axiom & using modus ponens. This section should probably be rewritten with the standard presentation of K to avoid confusion.
W.r.t. to this presentation of probabilistic logic, let’s see what the analogous generator would be:
Axioms:
all tautologies of Christiano’s logic
all instances of (x→y)→(□px→□py) (weak distributivity) --- which hold for the reasons in the post
Rules of inference:
Necessitation ⟨x,□px⟩
Modus Ponens ⟨x→y,x,y⟩
Then, again, step 1 to 2 of the proof of the probabilistic Payor’s lemma is shown by considering the axiom of weak distributivity and using modus ponens.
(actually, these are pretty rough thoughts. Unsure what the mapping is to the probabilistic version, and if the axiom schema holds in the same way)
You should include the highlighted insight in your summary. Also, why does your setup not lead to inconsistencies when Abram Demski isn’t sure his setup does? Is it just that you don’t have “⊢ a → b, then ⊢ P(⌜a⌝) ≤ P(⌜b⌝)”?
We know that the self-referential probabilistic logic proposed in Christiano 2012 is consistent. So, if we can get probabilistic Payor in this logic, then as we are already operating within a consistent system this should be a legitimate result.
Will respond more in depth later!
Yudhister’s treatment here does not satisfy me: the assumption he calls syntactic-probabilistic correspondence is false. For example, in Paul’s probability distributions, the self-referential sentence L: P(L)<1 must be assigned probability 1, but is not true and not provable.
Could you perhaps say something about what a Kripkean semantics would look like for your logic?
I’m interested in what happens if individual agents A, B, C merely have a probability of cooperating given that their threshold is satisfied. So, consider the following assumptions.
⊢□a(□e(□wE)→(□wE))→□xA
⊢□b(□e(□wE)→(□wE))→□yB
⊢□c(□e(□wE)→(□wE))→□zC
⊢□xA∧□yB∧□zC→□wE
The last assumption being simply that w is low enough. Given these assumptions, we have ⊢□wE via the same proof as in the post.
So for example if x,y,z are all greater than two thirds, there can be some nonzero w such that the agents will cooperate with probability w. In a sense this is not a great outcome, since viable w might be quite small; but it’s surprising to get any cooperation in this circumstance.
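My reading of the “two thirds” figure (not spelled out in the comment, so take it as an assumption): by the union bound on the failure events, coherency forces P(A ∧ B ∧ C) ≥ 1 − (1 − x) − (1 − y) − (1 − z) = x + y + z − 2, which is positive exactly when the failure probabilities sum below 1, e.g. when x, y, z all exceed 2/3. A quick sketch with a hypothetical helper:

```python
def min_joint(x, y, z):
    # Worst-case lower bound on P(A ∧ B ∧ C) given P(A) ≥ x, P(B) ≥ y,
    # P(C) ≥ z, via the union bound on the three failure events.
    return max(0.0, x + y + z - 2)

viable = min_joint(0.7, 0.7, 0.7)      # all above 2/3: some nonzero w exists
not_viable = min_joint(0.5, 0.7, 0.7)  # bound degenerates to zero
print(viable > 0, not_viable)
```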
The interesting thing about this—beyond showing that going probabilistic allows the handshake to work with somewhat unreliable bots—is that proving ⊢□wE rather than ⊢E is a lot different. With ⊢E, we’re like “And so Peano arithmetic (or whatever) proves they cooperate! We think Peano arithmetic is accurate about such matters, so, they actually cooperate.”
With the conclusion ⊢□wE we’re more like “So if the agent’s probability estimates are any good, we should also expect them to cooperate” or something like that. The connection to them actually cooperating is looser.
It should be P(x∧y)>p.
Yep! Fixed.
Should the ∨ in assumption 1 be an ∧?
Yep! Thanks!