1. Logical/mathematical beliefs — e.g. “Is Fermat’s Last Theorem true?”
2. Meta-beliefs — e.g. “Do I believe that I will die one day?”
3. Beliefs about the outcome space itself — e.g. “Am I conflating these two outcomes?”
4. Indexical beliefs — e.g. “Am I the left clone or the right clone?”
5. Irrational beliefs — e.g. the conjunction fallacy (illustrated briefly after this list).
etc.
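To make case 5 concrete, here is a minimal illustration (the Linda problem is the textbook example of the conjunction fallacy, not something taken from the list above). For any probability distribution $P$ and any events $A$ and $B$,

$$P(A \wedge B) = P(A)\,P(B \mid A) \le P(A),$$

so someone who judges “Linda is a bank teller and active in the feminist movement” more probable than “Linda is a bank teller” holds credences that no single probability distribution over those outcomes can reproduce.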
Of course, you can describe anything with some probability distribution, but these are cases where the standard Bayesian approach to modelling belief-states needs to be amended somewhat.
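For reference, here is a minimal sketch of that standard approach, with purely illustrative hypothesis names and numbers: a belief-state is a distribution over hypotheses, and updating just reweights it by likelihoods. Even this toy version presupposes that the agent can enumerate the hypotheses and evaluate every likelihood exactly, which is roughly where cases 1 and 3 above put pressure on it.

```python
# Minimal sketch of a standard Bayesian belief-state update.
# Hypothesis names, priors, and likelihoods are illustrative, not from the post.

def bayes_update(prior: dict[str, float], likelihood: dict[str, float]) -> dict[str, float]:
    """Posterior over hypotheses after one observation: P(h | e) ∝ P(h) * P(e | h)."""
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two mutually exclusive hypotheses about a coin, updated on observing heads.
prior = {"fair": 0.5, "biased_towards_heads": 0.5}
likelihood_of_heads = {"fair": 0.5, "biased_towards_heads": 0.9}

posterior = bayes_update(prior, likelihood_of_heads)
print(posterior)  # {'fair': ~0.357, 'biased_towards_heads': ~0.643}
```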
1-4 seem to go away if I don’t care about self-knowledge and have infinite compute. 5 doesn’t seem like a problem to me: if there is a best reasoning system, it should not make mistakes. Showing that a system can’t make mistakes may show you it’s not what humans use, but it should not be classified as a problem.