Interesting read! Thank you.
On the last evaluation problem: one could give an initial set of indicators of trustworthiness, deception, and alignment; this does not solve the issue of an initially deceptive agent misleading the babyAGI, or of inconsistencies. If attaching metadata about sourcing is possible, i.e., recording where or with whom an input was acquired, the babyAGI could also sort each input into the appropriate box and re-evaluate the learning later, or attempt to relearn.
Further, suppose we impose a requirement of double feedback before acceptance, from both the (possibly deceptive) agent and a trustworthy trainer; the babyAGI could then take into account negative feedback from the trainer (a developer, or a more advanced stable version). That might help stall the problem a bit.
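Roughly what I have in mind, as a minimal sketch (all names here are illustrative, not any real babyAGI interface): inputs carry provenance metadata, nothing is accepted without positive feedback from both the proposing source and a trusted trainer, and anything learned from a source that later looks deceptive can be queued for re-evaluation.

```python
from dataclasses import dataclass, field

@dataclass
class SourcedInput:
    """A candidate training input tagged with provenance metadata."""
    content: str
    source: str                                   # where / with whom it was acquired
    feedback: dict = field(default_factory=dict)  # reviewer -> +1 / -1

class FeedbackGatedBuffer:
    """Sorts inputs into per-source boxes and accepts one only after
    positive feedback from both the proposing source and a trusted trainer."""

    def __init__(self, trusted_trainers):
        self.trusted_trainers = set(trusted_trainers)
        self.boxes = {}      # source -> pending inputs
        self.accepted = []

    def receive(self, inp: SourcedInput):
        self.boxes.setdefault(inp.source, []).append(inp)

    def give_feedback(self, inp: SourcedInput, reviewer: str, score: int):
        inp.feedback[reviewer] = score
        trainer_ok = any(inp.feedback.get(t, 0) > 0 for t in self.trusted_trainers)
        source_ok = inp.feedback.get(inp.source, 0) > 0
        if trainer_ok and source_ok and inp not in self.accepted:
            self.accepted.append(inp)

    def flag_source(self, source: str):
        """If a source later looks deceptive, drop what was accepted from it
        and return its box so the learning can be re-evaluated or redone."""
        self.accepted = [i for i in self.accepted if i.source != source]
        return self.boxes.get(source, [])
```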
Hey, thank you for the comments! (Sorry for the slow response; I'll try to reply in line.)
1) I think input sourcing could be a great solution! However, one issue we have, especially with current systems (and in particular Independent Reinforcement Learning), is that it is really difficult to disentangle other agents from the environment. As an analogy, imagine observing a law of nature and being unable to work out whether it is a learned behaviour or the act of some omniscient being. Agents need not come conveniently packaged in some “sensors-actuators-internal structure-utility function” form [1]. (There's a toy illustration of this after point 2.)
2) I think you've actually alluded to the class of solutions I see for multi-agent issues. Agents in the environment can shape their opponents' learning, and in doing so can move entire populations to more stable equilibria (and behaviours). There is some great work starting to look at this [2, 3], and it's something I'm spending time developing at the moment. (A rough sketch of the mechanism follows the references.)
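To put point 1 another way, here is a toy illustration (a made-up matching-pennies setup, nothing from the post): an independent learner only ever sees a scalar reward, so an adapting opponent is indistinguishable from an environment whose dynamics drift over time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Matching pennies from the independent learner's point of view:
# it gets +1 when the two actions match, -1 otherwise.
PAYOFF = np.array([[ 1.0, -1.0],
                   [-1.0,  1.0]])

q = np.zeros(2)        # the learner's action values; it models no opponent at all
opp_p = 0.9            # opponent's (unobserved) probability of playing action 0
eps, lr = 0.1, 0.05

for step in range(10_000):
    a = rng.integers(2) if rng.random() < eps else int(np.argmax(q))
    b = 0 if rng.random() < opp_p else 1    # the opponent's move, hidden inside "the environment"
    q[a] += lr * (PAYOFF[a, b] - q[a])      # ordinary bandit-style update

    # The opponent slowly best-responds (it wants a mismatch), so the reward
    # statistics the learner experiences drift under its feet: from the
    # learner's side this looks like a non-stationary law of nature.
    opp_p = float(np.clip(opp_p + 0.001 * (1 if np.argmax(q) == 1 else -1), 0.01, 0.99))
```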
[1] Detecting Agents and Subagents: https://www.lesswrong.com/posts/ieYF9dgQbE9NGoGNH/detecting-agents-and-subagents
[2] LOLA (Learning with Opponent-Learning Awareness): https://arxiv.org/abs/1709.04326
[3] MFOS (Model-Free Opponent Shaping): https://arxiv.org/abs/2205.01447
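For anyone curious what "shaping an opponent's learning" looks like mechanically, here is a rough sketch in the spirit of [2]. This is not the paper's code, and it uses a hypothetical one-shot game purely to keep the gradient bookkeeping visible (the actual results in [2, 3] concern richer, iterated settings): the shaping learner differentiates through the opponent's anticipated naive update, so its own gradient includes a term for how its parameters move the opponent.

```python
import torch

# A one-shot two-action game (a made-up prisoner's-dilemma-style payoff table);
# rows index my action, columns the opponent's action.
R1 = torch.tensor([[3.0, 0.0],
                   [5.0, 1.0]])   # my payoffs
R2 = torch.tensor([[3.0, 5.0],
                   [0.0, 1.0]])   # opponent's payoffs

def values(theta1, theta2):
    """Expected payoffs when each player takes action 0 with probability sigmoid(theta)."""
    d1 = torch.stack([torch.sigmoid(theta1), 1 - torch.sigmoid(theta1)])
    d2 = torch.stack([torch.sigmoid(theta2), 1 - torch.sigmoid(theta2)])
    return d1 @ R1 @ d2, d1 @ R2 @ d2

theta1 = torch.zeros((), requires_grad=True)   # shaping learner's policy parameter
theta2 = torch.zeros((), requires_grad=True)   # naive opponent's policy parameter
lr = opp_lr = 0.1

for step in range(200):
    v1, v2 = values(theta1, theta2)

    # Anticipate the opponent's naive gradient step, keeping the graph so the
    # step itself stays differentiable with respect to my own parameters.
    opp_grad = torch.autograd.grad(v2, theta2, create_graph=True)[0]
    theta2_lookahead = theta2 + opp_lr * opp_grad

    # Ascend my value *after* that anticipated step: the resulting gradient
    # contains an extra term for how my parameters shape the opponent's update.
    v1_shaped, _ = values(theta1, theta2_lookahead)
    my_grad = torch.autograd.grad(v1_shaped, theta1)[0]

    with torch.no_grad():
        theta1 += lr * my_grad                   # opponent-shaping update
        theta2 += opp_lr * opp_grad.detach()     # the opponent just follows its own gradient
```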