Epistemic Rigor
I’m sure this has been discussed elsewhere, including on LessWrong. I haven’t spent much time investigating other thoughts on these specific lines. Links appreciated!
The current model of a classically rational agent assumes logical omniscience and precomputed credences over all possible statements.
This is really, really bizarre upon inspection.
First, “logical omniscience” is very difficult to achieve, as has been discussed (the Logical Induction paper goes into this).
Second, “all possible statements” includes statements from every complexity class we know of (from my understanding of complexity theory). “Credences over all possible statements” would easily require infinitely many credences. To be clear, even arbitrarily large amounts of computation would not be able to hold all of these credences.
For this reason, precomputation is typically a poor strategy for problems like this. The often-better strategy is to compute things on demand.
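As a rough illustration of the on-demand alternative, here is a minimal Python sketch (my own, not from any particular formalism): credences are computed lazily only when queried, and cached, rather than precomputed over the whole space of statements. The `estimate_credence` stub is a hypothetical stand-in for whatever expensive inference engine one has in mind.

```python
# Sketch: on-demand credences with caching, versus precomputing everything.
# `estimate_credence` is a hypothetical placeholder, not a real inference engine.
from functools import lru_cache

def estimate_credence(statement: str) -> float:
    """Placeholder for an expensive inference procedure over a statement."""
    # Imagine this runs a costly inference engine; here it just returns a dummy value.
    return 0.5

# Precomputation: intractable in general, since the space of statements is unbounded.
# all_credences = {s: estimate_credence(s) for s in all_possible_statements()}

# On demand: compute a credence only when it is actually queried, and cache the result.
@lru_cache(maxsize=None)
def credence(statement: str) -> float:
    return estimate_credence(statement)

print(credence("it will rain tomorrow"))   # computed on first query
print(credence("it will rain tomorrow"))   # served from the cache
```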
A nicer definition could be something like:
A credence is the result of an [arbitrarily large] amount of computation performed using a reasonable inference engine.
It should be quite clear that calculating credences from existing explicit knowledge is a very computationally intensive activity. The naive Bayesian approach would be to start with one piece of knowledge and then perform a Bayesian update for each subsequent piece. The pieces of knowledge can be prioritized according to heuristics, but even then this would be an expensive process.
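To make the cost concrete, here is a minimal sketch of that naive procedure, under assumptions of my own: each “piece of knowledge” carries likelihoods for a single hypothesis plus a heuristic priority score, and we fold them in one Bayesian update at a time.

```python
# Sketch of naive sequential Bayesian updating over prioritized pieces of knowledge.
# The Evidence fields and the example numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Evidence:
    name: str
    p_given_h: float       # P(evidence | hypothesis)
    p_given_not_h: float   # P(evidence | not hypothesis)
    priority: float        # heuristic importance score

def update(prior: float, e: Evidence) -> float:
    """One Bayesian update: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * (e.p_given_h / e.p_given_not_h)
    return posterior_odds / (1.0 + posterior_odds)

knowledge = [
    Evidence("dark clouds", 0.8, 0.3, priority=0.9),
    Evidence("forecast says rain", 0.9, 0.2, priority=0.7),
    Evidence("it is July", 0.4, 0.5, priority=0.2),
]

credence = 0.5  # prior
for e in sorted(knowledge, key=lambda e: e.priority, reverse=True):
    credence = update(credence, e)
    print(f"after {e.name}: {credence:.3f}")
```

Even in this toy form, every additional piece of knowledge costs another update, and real knowledge bases would be far larger and messier.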
I think I’d like to see specifications of credences that vary with computation or effort. Humans don’t currently have efficient methods for converting extra effort into better credences, the way a computer or idealized agent would be expected to.
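One simple way to make “credence as a function of effort” concrete is an anytime estimator. The toy Monte Carlo sketch below (my own example, not a proposal from the literature) returns a credence whose reliability grows with the computation budget spent on it.

```python
# Sketch: a credence that improves with computational effort.
# The toy question (P(sum of two dice >= 9)) is an illustrative assumption.
import random

def credence_with_budget(budget: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    hits = 0
    for _ in range(budget):
        if rng.randint(1, 6) + rng.randint(1, 6) >= 9:
            hits += 1
    return hits / budget

for budget in (10, 100, 10_000, 1_000_000):
    print(f"budget={budget:>9}: credence = {credence_with_budget(budget):.4f}")
# The exact answer is 10/36 (about 0.278); larger budgets converge toward it.
```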
Solomonoff’s theory of Induction or Logical Induction could be relevant for the discussion of how to do this calculation.
More Narrow Models of Credences