these methods lack enough feedback to enable self-awareness
Although I think this is plausibly the case, I’m far from confident that it’s actually true. Are there any specific limitations you think play a role here?
I think it mostly derives from the separation of training and execution into distinct phases, but also from not giving neural networks much reflective access to their own internals. This does exist in limited amounts in some networks, and when it does I'd say those networks are self-aware, but only during the training phase. Even then, it's not clear there's enough complexity for the network to conceive of itself as an ontological object, so it's not obviously the same kind of self-awareness that, say, humans have; it's more likely a simpler kind, closer to that of insects.