This post felt like it took a problem that I was thinking about from 3 different perspectives and combined them in a way that felt pretty coherent, though I am not fully sure how right it gets it. Concretely, the 3 domains I felt it touched on were:
How much can you model human minds as consisting of subagents?
How much can problems with coherence theorems be addressed by modeling things as subagents?
How much will AI systems behave as if they consist of multiple subagents?
All three of these feel pretty important to me.