Can non-Reinforcement-Learning systems (SL or UL) become an AGI/superintelligence and take over the world? If so, can you give an example?
They can if researchers (intentionally or accidentally) turn the SL/UL system into a goal-based agent. For example, imagine a SayCan-like system that uses a language model to create plans and a robotic system to execute those plans. I’m personally not sure how likely this is to happen by accident, but I think it is very likely to happen intentionally anyway.
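
As a rough illustration, here is a minimal Python sketch of the pattern, assuming a plan-and-execute wrapper around a non-agentic language model. All names here (`query_language_model`, `RobotInterface`, `run_agent`) are hypothetical stand-ins, not SayCan’s actual API; the point is only that the goal-directedness lives in the wrapper, not in the SL/UL model itself:

```python
from typing import List


def query_language_model(prompt: str) -> str:
    """Stand-in for an SL/UL-trained language model.

    A real system would call an LM here; this stub returns a
    canned plan so the sketch runs end to end.
    """
    return "1. locate cup\n2. grasp cup\n3. move to sink\n4. release cup"


def make_plan(goal: str) -> List[str]:
    """Ask the LM to decompose a goal into executable steps."""
    response = query_language_model(
        f"Break the goal '{goal}' into numbered robot actions:"
    )
    # Strip the leading "N." numbering from each line of the reply.
    return [line.split(".", 1)[1].strip() for line in response.splitlines()]


class RobotInterface:
    """Stand-in for the robotic system that acts in the world."""

    def execute(self, step: str) -> None:
        print(f"executing: {step}")


def run_agent(goal: str) -> None:
    """The wrapper that makes the combined system goal-directed:
    take a goal, plan with the LM, then act on the plan."""
    robot = RobotInterface()
    for step in make_plan(goal):
        robot.execute(step)


run_agent("put the cup in the sink")
```

Note that neither component is an agent on its own: the LM just predicts text and the robot just executes commands. It is the loop connecting a goal to plans to actions that makes the overall system goal-based.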