So I’ve been thinking about how to assign probabilities to true/false assignments over claims in the context of a probabilistic argument mapping program.[1] Inevitably I’ve been confronted with the liar’s paradox and a million related headaches. I have some tentative ideas on how I’d address these: basically, allowing sentences in a language to access the probabilities of truth assignments, then replacing those probabilities with conditional probabilities to guarantee that a coherent assignment exists, and using entropy maximization to (hopefully) get uniqueness. However, before I write this all up I want to check what the current state of probabilistic logic is on LessWrong. When I search I mostly see things like http://intelligence.org/files/DefinabilityTruthDraft.pdf or, more recently, https://www.lesswrong.com/posts/KbCHcb8yyjAMFAAPJ/when-wishful-thinking-works. Are these kinds of texts the current forefront of this topic that I should put my post in conversation with? If not, what is? Thanks!
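To gesture at what I mean, here's a toy sketch (the one-sentence system, the smoothing trick, and all the function names are illustrative simplifications I made up for this post, not the actual formalism): sentences whose truth conditions depend on the probability assigned to them, a brute-force search for coherent fixed points, and maximum entropy as a tie-breaker when the fixed point isn't unique.

```python
import numpy as np

# Toy model: a self-referential sentence is a map f : [0,1] -> [0,1]
# giving the probability that the sentence is true *given* the
# probability p we assign to it. A coherent assignment is a fixed
# point p = f(p).

def truth_teller(p):
    # "This sentence is true with the probability you assign it":
    # every p is a fixed point, so existence is easy but uniqueness fails.
    return p

def smoothed_liar(p):
    # "This sentence has low probability," smoothed. The hard liar
    # (the indicator of p < 0.5) has no fixed point at all; smoothing
    # restores existence.
    return 1.0 - p

def fixed_points(f, grid=100001, tol=1e-4):
    """Brute-force the approximate fixed points of f on a grid."""
    ps = np.linspace(0.0, 1.0, grid)
    fs = np.array([f(p) for p in ps])
    return ps[np.abs(fs - ps) < tol]

def bernoulli_entropy(p):
    # Shannon entropy of the sentence's Bernoulli(p) truth value.
    q = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(q * np.log(q) + (1.0 - q) * np.log(1.0 - q))

def max_entropy_fixed_point(f):
    # Among all coherent assignments, pick the maximum-entropy one.
    candidates = fixed_points(f)
    return candidates[np.argmax(bernoulli_entropy(candidates))]

print(max_entropy_fixed_point(smoothed_liar))  # ~0.5: the unique fixed point
print(max_entropy_fixed_point(truth_teller))   # 0.5: max-entropy tie-break
```

Note that the smoothing here is just a stand-in: in the actual proposal, existence would come from the conditional-probability move rather than from smoothing the truth condition.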
[1] I’m trying to follow up on the formalism I described in my first post while incorporating the suggestions in the comments.
Semanticists have been pretty productive (https://www.cambridge.org/core/books/foundations-of-probabilistic-programming/819623B1B5B33836476618AC0621F0EE) and may help you approach what matters to you; there are certainly adjacent questions and concerns.
A bunch of PL papers building on the Giry monad have floated around in stale tabs on my machine for a couple of years. The Open Agency Architecture post actually provides breadcrumbs to this sort of thing in a footnote about infra-Bayesianism (https://www.lesswrong.com/posts/pKSmEkSQJsCSTK6nH/an-open-agency-architecture-for-safe-transformative-ai#fnnlogfgcev2e), and a lot of the open games folks pushed on this because they wanted it for Bayesian Nash equilibria.
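For anyone who hasn't run into it: the Giry monad sends a space to its space of probability measures, with unit given by Dirac measures and bind given by integrating a kernel against a measure. Here's a finitary Python toy of the same structure (my own illustration, not any of the libraries those PL papers use):

```python
from collections import defaultdict

# Finitary analogue of the Giry monad: a "distribution" is a dict
# mapping outcomes to probabilities summing to 1.

def unit(x):
    # Dirac distribution: all mass on x.
    return {x: 1.0}

def bind(dist, kernel):
    # Push each outcome through a kernel (outcome -> distribution),
    # weight by the outcome's probability, and sum the results.
    out = defaultdict(float)
    for x, px in dist.items():
        for y, py in kernel(x).items():
            out[y] += px * py
    return dict(out)

# Example: flip a fair coin, then flip a second coin whose bias
# depends on the first result.
coin = {"H": 0.5, "T": 0.5}
followup = lambda s: {"H": 0.9, "T": 0.1} if s == "H" else {"H": 0.2, "T": 0.8}
print(bind(coin, followup))  # {'H': 0.55, 'T': 0.45}
```

The point of the monadic packaging is that sequencing probabilistic computations (like the two flips above) composes mechanically, which is what makes it attractive as a semantics for probabilistic programs.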