Or, to put it another way, to require a kind of knowledge beyond "merely" storing a number, one that includes "knowing you know" and "knowing you know you know" and so on, is to make a mistake similar to that of those who postulated a homunculus inside our heads, doing the looking when we look at things.
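A minimal sketch of the distinction being drawn here (all names hypothetical, not anyone's actual proposal): an agent that merely stores a number behaves identically at the object level to one that also carries an explicit tower of "I know that I know..." claims, which is the sense in which the extra levels do no work.

```python
# Illustrative sketch only: contrasting "merely storing a number" with an
# explicit tower of meta-knowledge. Class and attribute names are hypothetical.

class SimpleAgent:
    """Stores a value and acts on it -- no homunculus required."""
    def __init__(self, value: int):
        self.value = value

    def report(self) -> int:
        return self.value


class MetaAgent:
    """Additionally represents 'knowing that it knows', one level per entry."""
    def __init__(self, value: int, meta_levels: int = 0):
        self.value = value
        # Each entry is a claim about the level below it; stacking them adds
        # nothing to what the agent can actually do at the object level.
        self.meta = [
            "I know" + " I know" * i + " the value"
            for i in range(meta_levels)
        ]

    def report(self) -> int:
        return self.value


if __name__ == "__main__":
    a = SimpleAgent(42)
    b = MetaAgent(42, meta_levels=3)
    # Both agents behave identically at the object level:
    assert a.report() == b.report() == 42
    print(b.meta)  # the "tower" of meta-claims, doing no extra work
```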
On the other hand, given that humans (especially on LW) do analyze things on several meta levels, it seems possible to program an AI to do the same, and in fact many discussions of AI assume this (e.g. discussing whether the AI will suspect it's trapped in some simulation). It's an interesting question how intelligent an AI can get without needing (or being able) to go meta.
Also true. Indeed, this puzzle is all about resolving confusion between object and meta level(s); hopefully no one here at LW endorses the view that a (sufficiently well programmed) AI is incapable of going meta, so to speak.
I wonder how one would calculate what level of meta-knowledge about a completeness condition is necessary for a given priority task.