A lot of effort in AI went into combining the advantages of logic and probability theory for representing things. Languages that admit uncertainty and are strictly more powerful than propositional logic are practically a cottage industry now. There is Brian Milch’s BLOG, Pedro Domingos’s Markov logic networks, etc. etc.
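To make the "logic plus probability" idea concrete, here is a hand-rolled toy in the spirit of a Markov logic network (this is my own illustration, not Alchemy or any real MLN engine; the people, the formula, and the weight 1.5 are all made up). One weighted first-order formula, Smokes(x) => Cancer(x), is grounded over two constants, and a world's unnormalized probability is exp(weight × number of satisfied groundings):

```python
import itertools
import math

# Toy MLN sketch: one weighted formula, Smokes(x) => Cancer(x),
# grounded over two hypothetical constants. Illustrative values only.
people = ["Anna", "Bob"]
w = 1.5  # formula weight: higher means violating worlds are less likely

# Ground atoms: every predicate applied to every constant.
atoms = [("Smokes", p) for p in people] + [("Cancer", p) for p in people]

def world_weight(world):
    # Unnormalized weight: exp(w * number of satisfied groundings).
    n_sat = sum(1 for p in people
                if (not world[("Smokes", p)]) or world[("Cancer", p)])
    return math.exp(w * n_sat)

# Enumerate all 2^4 possible worlds (truth assignments to ground atoms).
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
Z = sum(world_weight(wld) for wld in worlds)  # partition function

# Conditional query: P(Cancer(Anna) | Smokes(Anna)).
num = sum(world_weight(wld) for wld in worlds
          if wld[("Smokes", "Anna")] and wld[("Cancer", "Anna")])
den = sum(world_weight(wld) for wld in worlds
          if wld[("Smokes", "Anna")])
print(num / den)  # strictly between 0 and 1: the implication is soft, not hard
```

The point of the exercise is the last line: unlike in first-order logic, the implication only raises the probability of Cancer(Anna) rather than forcing it, and cranking the weight toward infinity recovers the hard logical constraint.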
They culminate in the present-day probabilistic programming field, which is what the lab I’m about to go visit in a few short hours studies. It is exactly the approach to this problem that I think makes sense: treat the search for a program as the search for a proof of a proposition; make programs represent distributions over proofs rather than single proofs; then probabilize everything, so that the various forms of statistical inference correspond to updating those distributions over proofs. The result is statistically learned, logically rich knowledge about arbitrary constructions. Curry-Howard + probability = fully general probabilistic models.
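A minimal hand-rolled sketch of that idea, assuming nothing beyond the standard library (no real PPL such as Church or Stan; the coin biases and the observation below are invented for illustration). Running the program once samples a single execution trace; conditioning on an observation and keeping only the consistent traces updates the distribution over traces:

```python
import random

def program(rng):
    # One run = one sampled execution trace ("proof") of the program.
    bias = rng.choice([0.3, 0.5, 0.9])               # latent choice
    flips = [rng.random() < bias for _ in range(8)]  # observable part
    return bias, flips

observed_heads = 7  # condition: we saw 7 heads in 8 tosses

# Rejection sampling: run the program repeatedly, keep only executions
# consistent with the observation; survivors are posterior samples.
rng = random.Random(0)
counts = {0.3: 0, 0.5: 0, 0.9: 0}
accepted = 0
while accepted < 2000:
    bias, flips = program(rng)
    if sum(flips) == observed_heads:
        counts[bias] += 1
        accepted += 1

posterior = {b: n / accepted for b, n in counts.items()}
print(posterior)  # most of the mass should land on bias = 0.9
```

Rejection sampling is of course the crudest possible inference scheme; the interesting part of the field is doing this updating efficiently over programs with rich logical structure, but the semantics is the same.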
So, why does anyone still consider “logical probability” an actual problem, given that these all exist? I am frustratingly ready to raise my belief in the sentence, “Academia solved what LW (and much of the rest of the AI community) still believes are open problems decades ago, but in such thick language that nobody quite realized it.”
I mean, Hutter published a 52-page paper on probability values for sentences in first-order logic just last year, and I generally consider him professional-level competent.
Hi Eli,
> A lot of effort in AI went into combining the advantages of logic and probability theory for representing things. Languages that admit uncertainty and are strictly more powerful than propositional logic are practically a cottage industry now. There is Brian Milch’s BLOG, Pedro Domingos’s Markov logic networks, etc. etc.
Have you read Joe Halpern’s paper on semantics:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.5699
Not yet. I’m looking it over now.