That’s done or being done. Look into “probabilistic programming”.
This doesn’t look like what I want—have you read my take on logical uncertainty? I still stand by the problem statement (though I’m more uncertain now about minimizing Shannon vs. Kolmogorov information), even if I no longer quite approve of the proposed solution.
EDIT: If I were to point to what I no longer approve of, it would be that while in the linked posts I focus on what’s mandatory for limited reasoners, I now think it should be possible to look at what’s actually a good idea for limited reasoners.
Probabilistic programming seems to assign distributions to programs that involve random variables, but logical uncertainty assigns distributions to programs without random variables :P
This doesn’t look like what I want—have you read my take on logical uncertainty?
A little. Yes, approximation is key for practical performance. This is why neural nets and related models work so well: they let one efficiently search over model/function/program space for the best approximate models that fit memory/compute constraints.
Probabilistic programming seems to assign distributions to programs that involve random variables, but logical uncertainty assigns distributions to programs without random variables :P
Code and variables are interchangeable, so of course probabilistic programming can model distributions over programs. For example, I could create a VM or interpreter whose "program" is a big array of variables that define opcodes. Graphical models and networks likewise encode programs as variables. There is no hard distinction between data, code, and variables.
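A minimal sketch of the point above (the VM, its three opcodes, and the encoding are all made up for illustration): the "program" is nothing but an array of integer variables, so any distribution over those variables is automatically a distribution over programs.

```python
import random

# Toy stack VM: a program is just a list of integer "variables".
# Opcode 0: push 1; opcode 1: add the top two values; opcode 2: double the top.
def run(program):
    stack = [0]
    for op in program:
        if op == 0:
            stack.append(1)
        elif op == 1 and len(stack) >= 2:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 2:
            stack.append(stack.pop() * 2)
    return stack[-1]

# A distribution over programs is just a distribution over those variables:
# here, five opcodes drawn uniformly at random.
random.seed(0)
sampled_program = [random.randrange(3) for _ in range(5)]
sampled_result = run(sampled_program)
```

Sampling `sampled_program` repeatedly induces a distribution over outputs, which is exactly how a probabilistic program can range over code rather than data.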
Hm, I think we’re talking past each other.

To give an example of what I mean: if a probabilistic programming language really implemented logical uncertainty, you could write a deterministic program that computes the last digit of the googolth prime (possible outputs 0 through 9), call some getDistribution method on that deterministic program, and get back the distribution {0.25 chance of 1, 0.25 chance of 3, 0.25 chance of 7, 0.25 chance of 9} (unless your computer actually has enough power to calculate the googolth prime).
I think your example is—at least theoretically—still within the reach of a probabilistic logic program, although the focus is entirely on approximation rather than conditioning (there are no observations). Of course, I doubt any existing inference engine would be able to help much with problems of that type, but the idea is still general enough to cover estimating intractable functions from the logic defining the solution.
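As a toy version of the hypothetical getDistribution above (the function names and the budget mechanism are made up for illustration, not a real probabilistic-programming API): try to compute the answer within a compute budget; if that succeeds, return a point mass, and otherwise fall back on the cheap symmetry prior that every prime above 5 ends in 1, 3, 7, or 9, with each ending equally common by Dirichlet's theorem.

```python
def last_digit_of_nth_prime(n, budget):
    """Try to compute the answer directly within `budget` candidate tests."""
    count, candidate = 0, 1
    for _ in range(budget):
        candidate += 1
        # Trial division: candidate is prime iff no divisor up to its sqrt.
        if all(candidate % d for d in range(2, int(candidate**0.5) + 1)):
            count += 1
            if count == n:
                return candidate % 10
    return None  # ran out of compute

def get_distribution(n, budget):
    digit = last_digit_of_nth_prime(n, budget)
    if digit is not None:
        return {digit: 1.0}  # enough power: collapse to certainty
    # Logical-uncertainty fallback: uniform over the possible last digits.
    return {1: 0.25, 3: 0.25, 7: 0.25, 9: 0.25}
```

For small n (say the 10th prime, 29) the budget suffices and the result is a point mass on 9; for the googolth prime no feasible budget suffices, and the method degrades gracefully to the uniform prior over {1, 3, 7, 9}.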