The point of the virtual environment is to train an agent with a generalized ability to learn values. Eventually it would interact with humans (or perhaps human writing, depending on what works), align to our values and then be deployed. It should be fully aligned before trying to optimize the real world!
I see, so you essentially want to meta-learn value learning. Fair enough, although you then have the problem that your meta-learned value-learner might not generalize to the human-value-learning case.
I want to meta-learn value learning and then apply it to the object-level case of human values. Hopefully, after being tested on a variety of other agents, the predictor will be able to learn human goals as well. It’s also possible that a method could be developed to test whether or not the system was properly aligned; I suspect that seeing whether it could output the neural net of the user would be a good test (though potentially vulnerable to deception if something goes wrong, and hard to check against humans). But if it learns to reliably understand the connections and weights of other agents, perhaps it can learn to understand the human mind as well.
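To make the proposed setup a little more concrete, here is a minimal toy sketch in Python. Everything in it is an illustrative stand-in of my own invention: synthetic agents are just hidden preference vectors, "behavior" is a log of pairwise choices, and the value learner is a fixed logistic inference rule. In the actual proposal that fixed rule would itself be replaced by a trained (meta-learned) predictor, and the final target would be human values rather than random vectors; the sketch only shows the shape of the training-and-evaluation loop over many agents.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_CHOICES, DIM = 200, 50, 4  # illustrative sizes, not from the proposal


def make_agent():
    """A synthetic agent is just a hidden preference vector over item features."""
    return rng.normal(size=DIM)


def observe(pref, n=N_CHOICES):
    """The agent repeatedly picks its preferred option of two random items;
    we log the items and the choices as its observable behavior."""
    items = rng.normal(size=(n, 2, DIM))
    utilities = items @ pref
    choices = np.argmax(utilities, axis=1)
    return items, choices


def infer_preference(items, choices, lr=0.1, steps=200):
    """Value learner: fit a preference vector that explains the observed
    choices with a simple logistic (Bradley-Terry style) model. In the
    proposal this hand-written rule would be a learned predictor."""
    w = np.zeros(DIM)
    for _ in range(steps):
        diff = items[:, 1] - items[:, 0]        # feature advantage of option 1
        p = 1 / (1 + np.exp(-diff @ w))         # predicted prob. of choosing option 1
        grad = diff.T @ (choices - p) / len(choices)
        w += lr * grad
    return w


# Evaluate across many fresh agents: does the learner recover hidden values?
scores = []
for _ in range(N_AGENTS):
    pref = make_agent()
    items, choices = observe(pref)
    w = infer_preference(items, choices)
    cosine = w @ pref / (np.linalg.norm(w) * np.linalg.norm(pref) + 1e-9)
    scores.append(cosine)

print(f"mean agreement between inferred and true values: {np.mean(scores):.2f}")
```

The point of the toy version is only that the evaluation signal ("did you recover this agent's true values?") is available for synthetic agents in a way it never is for humans, which is exactly why the generalization step at the end is the hard part.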