I think the biggest piece of an actual GI that is missing from text extenders is agency.
Responding to prompts and answering questions is one thing, but deciding what to do or write about next isn’t even a theoretical part of their functionality.
I’m puzzled by the apparent tension between upvoting the importance of continuous learning on one hand and downvoting agreement about agency on the other. When transformers produce something that doesn’t sound human, it’s usually because of consistency mistakes (like explaining at length that it can’t speak Danish… in well-formed Danish sentences). Maybe continuous learning can solve that problem (if it includes learning from the model’s own responses?). But wouldn’t we perceive that as exhibiting agency?
That doesn’t seem like it would be a problem if the model were connected to something where people constantly interacted with it. Its actions would then be output constantly, and there seems to be no important difference between that and it acting unprompted (heh).
The physical world also acts continuously based on inputs it receives from people, and we don’t say “the Earth” is an intelligence.
That’s true. The Earth doesn’t act like an intelligent agent, but a model could. A current model could simulate the verbal output of a human, and that output could be connected to actuators (or to biological humans) that would let it act in the world. Also, the Earth can’t comprehend new concepts, apply them correctly, and solve problems.
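
To make the “constantly connected” framing from the replies above concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `Environment`, `observe`, `act`, and `generate` are stand-ins rather than any real API. The point is only that a plain text extender, polled in a loop and wired to outputs, starts to look indistinguishable from something acting unprompted.

```python
import time


class Environment:
    """Stand-in for whatever the model is hooked up to (chat users, actuators, sensors)."""

    def observe(self) -> str:
        # In a real system: incoming user messages, sensor readings, etc.
        return "status: nothing new"

    def act(self, command: str) -> None:
        # In a real system: dispatch the model's chosen action to the outside world.
        print(f"executing: {command}")


def generate(prompt: str) -> str:
    """Placeholder for a call to any text-continuation model."""
    return "NOOP"


def agent_loop(env: Environment, steps: int = 3, poll_seconds: float = 0.1) -> None:
    history = ""
    for _ in range(steps):
        observation = env.observe()
        history += f"\nOBSERVATION: {observation}\nACTION:"
        action = generate(history)   # the model is still only extending text...
        history += f" {action}"
        env.act(action)              # ...but its output now drives actions on every tick
        time.sleep(poll_seconds)


if __name__ == "__main__":
    agent_loop(Environment())
```

Nothing inside the loop gives the model new capabilities; the apparent agency comes entirely from the wiring around it.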