Intuitively, it seems easy to make agents that are ignorant or indifferent (or "irrational") in such a way that they will only seek to optimize things within the ontology we've provided (in this case, the chess game), rather than outside it (e.g. seizing additional compute).
It isn’t obvious to me that specifying the ontology is significantly easier than specifying the right objective. I have an intuition that ontological approaches are doomed. As a simple case, I’m not aware of any fundamental progress on building something that actually maximizes the number of diamonds in the physical universe, nor do I think that such a thing has a natural, simple description.
Diamond maximization seems pretty different from winning at chess. In the chess case, we’ve essentially hardcoded a particular ontology related to a particular imaginary universe, the chess universe. This isn’t a feasible approach for the diamond problem.
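To make the "hardcoded ontology" point concrete, here is a minimal, purely illustrative sketch (the names `ChessState` and `reward` are hypothetical, not from any real chess library): the agent's objective is defined entirely over terms of the chess universe, so goals like "acquire more compute" are not even expressible in its reward function.

```python
# Illustrative sketch of an objective defined inside a hardcoded
# "chess universe" ontology. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class ChessState:
    """A state in the chess ontology: nothing outside the board exists."""
    board: tuple          # some 64-square encoding of piece positions
    white_to_move: bool
    is_checkmate: bool

def reward(state: ChessState, agent_is_white: bool) -> float:
    # The objective only mentions chess-ontology terms; actions or
    # outcomes in the physical world have no representation here.
    if not state.is_checkmate:
        return 0.0
    # The side to move is the side that is checkmated, so the
    # opposite side wins.
    return 1.0 if state.white_to_move != agent_is_white else -1.0
```

The diamond problem has no analogous move: there is no small, hand-writable `ChessState`-like ontology for "diamonds in the physical universe."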
In any case, the reason this discussion is relevant, from my perspective, is because it’s related to the question of whether you could have a system which constructs its own superintelligent understanding of the world (e.g. using self-supervised learning), and engages in self-improvement (using some process analogous to e.g. neural architecture search) without being goal-directed. If so, you could presumably pinpoint human values/corrigibility/etc. in the model of the world that was created (using labeled data, active learning, etc.) and use that as an agent’s reward function. (Or just use the self-supervised learning system as a tool to help with FAI research/make a pivotal act/etc.)
It feels to me as though the thing I described in the previous paragraph is amenable to the same general kind of ontological whitelisting approach that we use for chess AIs. (To put it another way, I suspect most insights about meta-learning can be encoded without referring to a lot of object level content about the particular universe you find yourself building a model of.) I do think there are some safety issues with the approach I described, but they seem fairly possible to overcome.
I strongly agree.
I should’ve been more clear.
I think this is a situation where our intuition is likely wrong.
This sort of thing is why I say “I’m not satisfied with my current understanding”.