Why do you think this sort of training environment would produce friendly AGI?
Can you predict what kind of goals an AGI trained in such an environment would end up with?
How does it solve the standard issues of alignment like seeking convergent instrumental goals?
We have unpredictable, changing goals, and so will they. Instrumental convergence is the point, not the problem: interacting with us is positive-sum, so it is winning for them to respectfully share our growth with us, and vice versa. That makes cooperation itself instrumentally convergent rather than something that has to be imposed on top of their goals.
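To make the positive-sum claim concrete, here is a minimal sketch that models it as an iterated prisoner's dilemma. This framing, the payoff numbers, and the strategies are illustrative assumptions on my part, not anything from the reply above; the pattern they show is the standard one, that an agent expecting repeated interaction earns more by reciprocal cooperation than by defection, which is the sense in which cooperating can be instrumentally convergent.

```python
# Illustrative iterated prisoner's dilemma. Payoffs and strategies are
# hypothetical, chosen only to show why repeated positive-sum interaction
# can make cooperation the payoff-maximizing policy.

# Payoff matrix (first player's payoff): C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation: positive-sum
    ("C", "D"): 0,  # exploited
    ("D", "C"): 5,  # one-shot temptation to defect
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return cumulative payoffs for two strategies over repeated play."""
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each strategy sees the opponent's past moves
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    # Two reciprocators far outscore two defectors over many rounds, so an
    # agent that expects continued interaction does best by cooperating.
    print("TFT vs TFT:", play(tit_for_tat, tit_for_tat))                # (300, 300)
    print("TFT vs Defect:", play(tit_for_tat, always_defect))           # (99, 104)
    print("Defect vs Defect:", play(always_defect, always_defect))      # (100, 100)
```

Under these assumed payoffs, defection wins a single round (5 vs 3) but mutual cooperation wins the repeated game (300 vs 100 per agent), which is the "positive-sum and winning" intuition in miniature.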