I think the claim that AIs will be able to access their own thoughts probably needs more work to show that it is actually the case. Certainly the current state-of-the-art AIs, e.g. GPT-3 or PaLM, have if anything less access to their own state than people do. They can't really introspect; all they can do is process the data they are given.
Maybe that will change, but as you note, the configuration space of intelligence is large, and it seems to me we could easily end up without that particular ability.
I have similar reservations about the next item, access to the thoughts of others, though you do caveat that one.
One thing that might be missing is that humans tend to have a defined location: I know where I am, and "where I am" has a relatively clear definition. That may not hold for AIs, which are much more loosely coupled to the computers running them.
I agree that current AIs cannot introspect. My own research has bled into my beliefs here: I am actually working on this problem, and I expect that we won't get anything like AGI until we have solved it. As far as I can tell, an AI that works properly and has any chance of becoming an AGI will necessarily have to be able to introspect. Many of the big open problems in the field seem to me like they can't be solved precisely because we haven't figured out how to do this yet.
The "defined location" point you raise is intended to be covered by "being sure about the nature of your reality", but your point is much more specific, and you are right that it might be worth considering separately.