Someone used the metaphor of Plato's cave to describe LLMs. The LLM sits in cave 2, unable to see the shadows on the wall; it can only hear the voices of the people in cave 1 talking about the shadows.
The problem is that we in cave 1 are not only talking about the shadows but also telling fictional stories, and it is very difficult for someone in cave 2 to tell the difference between fiction and reality.
If we want to give a future AGI the responsibility to make important decisions, I think it must occupy a place in cave 1 rather than remain a statistical word predictor in cave 2. It must be more like us.
I buy this. I think a solid sense of self might be the key missing ingredient (though it’s potentially a path away from Oracles toward Agents).
A strong sense of self would require life experience, which implies memory. Probably also the ability to ruminate and generate counterfactuals.
And of course, as you say, the memories and “growing up” would need to be about experiences of the real world, or at least recordings of such experiences, or of a “real-world-like simulation”. I picture an agent growing in complexity and compute over time, while retaining a memory of its earlier stages.
Perhaps this requires a learning paradigm different from gradient descent, which relegates it to science fiction for now.