I think it would be handled correctly by a human-level reasoner as a special case of decision-making under logical uncertainty.
I’d also be keen on him specifying more precisely what is meant by the following, and why he thinks it to be key:
The key difficulty is that it is impossible for an agent to formally “trust” its own reasoning, i.e. to believe that “anything that I believe is true.”
In particular, I did not understand what Paul means by this claim about self-trust.