The reasons I don’t find this convincing:
The examples of human cognition you point to are the dumbest parts of human cognition. They are the parts we need to override in order to pursue non-standard goals. In political arguments, for example, the adaptations we execute that attach us to one position are harmful to the goal of implementing effective policy, and the people who are good at finding effective government policy are precisely those who are good at overriding these adaptations.
“All these are part of the arbitrary, intrinsically-complex, outside world.” This seems wrong. The outside world isn’t that complex, and reflections of it are similarly not that complex. Hardcoding knowledge is a mistake, of course, but understanding how knowledge is represented and updated needn’t be that hard.
“They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity.” I agree with this, but it’s also fairly obvious. The difficulty of alignment is building these meta-methods in such a way that you can predict they will continue to work, despite the context changes that occur as an AI scales up to become much more intelligent.
“These bitter lessons were taught to us by deep learning.” It looks to me like deep learning mostly gave people an excuse not to think very hard about how the machine works on the inside: it became tractable to build useful machines without understanding why they worked.
It sounds like you’re saying that classical alignment theory violates lessons like “we shouldn’t hardcode knowledge; it should instead be learned by very general methods”. This is clearly untrue, and if it isn’t what you meant, then I don’t understand the purpose of the last quote. A more charitable interpretation is that you take the lesson to be “intelligence is irreducibly complex and it’s impossible to understand why it works”. But this is contradicted by the first quote: the meta-methods are a part of a mind that can and should be understood. And this is exactly the topic that much of agent foundations research has been about (with a particular focus on the aspects relevant to maintaining stability through context changes).
(My impression was that this is also what shard theory is trying to do, except with less focus on stability through context changes, much less emphasis on fully-general outcome-directedness, and more focus on high-level steering-of-plans-during-execution instead of the more traditional precise-specification-of-outcomes).