More generally this is tied to an argument for why deep learning is unlikely to lead to AGI: these methods lack enough feedback to enable self-awareness, and that is probably necessary for creating a general intelligence, in the sense that a general intelligence needs to be flexible enough to remake itself to handle new tasks independently, and that requires deep reflective capabilities.
these methods lack enough feedback to enable self-awareness
Although I think this is plausibly the case, I’m far from confident that it’s actually true. Are there any specific limitations you think play a role here?
I think it mostly derives from the separation of training and execution into distinct phases, but also from not giving neural networks much reflective access to themselves. This sometimes exists in limited amounts in some networks, and when it does I’d say those networks are self-aware, but only during the training phase. Even then it’s not clear there’s enough complexity to see anything like an ability for the network to conceive of itself as an ontological object, so it’s not obviously the same kind of self-awareness that, say, humans have, but more likely a simpler kind, more akin to that of insects.
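To make the phase separation concrete, here is a minimal sketch (in PyTorch; the model and data are hypothetical, not anything from the discussion above): feedback reaches the network only inside the training loop, and at execution time the weights are frozen and nothing the network does can alter or observe itself.

```python
import torch
import torch.nn as nn

# A small toy network and optimizer, purely for illustration.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Training phase: the only point at which the network is shaped by feedback.
x, y = torch.randn(32, 4), torch.randn(32, 1)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()   # feedback (gradients) flows into the weights here
    optimizer.step()

# Execution phase: weights are fixed; the network neither receives feedback
# nor has any access to its own parameters or training process.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 4))
```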