On the other hand, given that humans (especially on LW) do analyze things on several meta levels, it seems possible to program an AI to do the same, and in fact many discussions of AI assume this (e.g. discussing whether the AI will suspect it's trapped in some simulation). It's an interesting question how intelligent an AI can get without having the need (or the ability) to go meta.
Also true. Indeed, this puzzle is all about resolving confusion between object and meta level(s); hopefully no one here at LW endorses the view that a (sufficiently well programmed) AI is incapable of going meta, so to speak.
I wonder how one would calculate what level of meta-knowledge about a completeness condition is necessary for some given priority task.