Meta-theory of rationality

Here I speculate about questions such as:

What makes a theory of rationality useful or useless?

When is a theory of rationality useful for building agents, describing agents, or becoming a better agent, and to what extent should the answers be connected?

How elegant should we expect algorithms for intelligence to be?

What concepts deserve to be promoted to the root/core design of an AGI versus discovered by the AGI? Perhaps relatedly, does human cognition have such a root/core algorithm, and if so, what is it?

Levels of analysis for thinking about agency

Action theory is not policy theory is not agent theory

What makes a theory of intelligence useful?

Existing UDTs test the limits of Bayesianism (and consistency)