It’s a common crux between me and MIRI / rationalist types in AI safety, and it’s way easier to say “Realism about rationality” than to get into an endless debate about whether everything is approximating AIXI (or whatever) that never seems to update me.