(This is a repost of my comment on John’s “My AI Model Delta Compared To Yudkowsky” post which I wrote a few months ago. I think points 2-6 (especially 5 and 6) describe important and neglected difficulties of AI alignment.)
My model (which is pretty similar to my model of Eliezer’s model) does not match your model of Eliezer’s model. Here’s my model; I’d guess that Eliezer’s model mostly agrees with it:
Natural abstractions (very) likely exist in some sense. Concepts like “chair”, “temperature”, “carbon”, and “covalent bond” all seem natural in some sense, and an AI might model them too (though at significantly superhuman levels of intelligence it might instead use different concepts/models). (It’s also less clear whether such natural abstractions apply well to giant transformers; still probable in some sense IMO, but it’s perhaps hard to identify them and to interpret what “concepts” actually are in AIs.)
Many things we value are not natural abstractions, but are only natural relative to a human mind design. Emotions like “awe” or “laughter” are quite complex things shaped by evolution, and minds that have emotions at all may occupy only a small region of mind design space. The AI doesn’t have built-in machinery for modelling other humans the way humans model other humans. It might eventually form abstractions for the emotions, but probably not in a way that includes understanding how the emotion feels from the inside.
There is lots of hidden complexity in what determines human values. Trying to point an AI at human values directly (in a way similar to how humans are pointed at their values) would be incredibly complex. Specifying a CEV process (modelling one or multiple humans, identifying where in the model their values are represented, and pointing the AI to optimize those values) is more tractable, but would still require a vastly greater mastery of understanding minds to pull off, and we are not on a path to get there without human augmentation.
When the AI is smarter than us it will have better models which we don’t understand, and the concepts it uses will diverge from ours. As an analogy, consider 19th-century humans (or people who don’t know much about medicine) vaguely classifying health symptoms into diseases, versus an AI with a gears-level model of the body and the immune system which explains the observed symptoms.
I think a large part of what Eliezer meant with Lethalities#33 is that the way thinking works deep in your mind looks very different from the English sentences you can notice going through it, which are only shallow shadows of the actual thinking going on; and for giant transformers, what the actual thinking looks like is likely even less understandable than what it looks like in humans.
Ontology identification (including utility rebinding) is not nearly all of the difficulty of the alignment problem (except possibly insofar as figuring out all the (almost-)ideal frames for modelling and constructing AI cognition is a prerequisite to solving ontology identification). Other difficulties include:
We won’t get a retargetable general-purpose search by default; rather, the AI is (by default) going to be a mess of lots of patched-together optimization patterns.
There are lots of things that might cause goal drift: misaligned mesa-optimizers which try to steer or seize control of the AI; Goodhart; the AI might just not be smart enough initially and make mistakes that cause irrevocable value drift; and in general it’s hard to train the AI to become smarter (to train better optimization algorithms) while keeping its goal constant.
(Corrigibility.)
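As a toy illustration of the Goodhart point above (a sketch of my own, with made-up numbers, not anything from John’s or Eliezer’s writing): a proxy objective that matches the true objective on the region where it was calibrated can come apart from it under stronger optimization pressure.

```python
# Toy illustration of Goodhart's law (my own hypothetical numbers).
# True objective: keep x close to 3. The proxy agrees with it only on the
# region where it was "calibrated" (x <= 5), and keeps rewarding larger x
# outside that region.

def true_utility(x):
    return -(x - 3) ** 2

def proxy_utility(x):
    # Agrees with the true objective on the calibration region...
    if x <= 5:
        return -(x - 3) ** 2
    # ...but diverges from it outside.
    return x

# A weak optimizer searching only the calibrated region stays aligned.
weak = max(range(0, 6), key=proxy_utility)

# A stronger optimizer searching a larger space "Goodharts" the proxy.
strong = max(range(0, 100), key=proxy_utility)

print(weak, true_utility(weak))      # 3, optimal for the true objective
print(strong, true_utility(strong))  # 99, terrible for the true objective
```

The point of the sketch is only that the same proxy is safe or catastrophic depending on how hard it is optimized; nothing here models the mechanisms (mesa-optimizers, value drift) listed above.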
While it’s nice that John is attacking ontology identification, he doesn’t seem nearly as on track to solve it in time as he seems to think. Specifying a goal in the AI’s ontology requires finding the right frames for modelling how an AI imagines possible worldstates, which will likely look very different from how we initially, naively think of it (e.g. the worldstates won’t be modelled by English-language sentences or anything remotely as interpretable). The way we currently think of what “concepts” are might not naturally bind to anything in what the AI’s reasoning actually looks like; we first need to find the right way to model AI cognition and only then try to interpret what the AI is imagining. Even if “concept” is a natural abstraction over AI cognition, and we were able to identify concepts (though it’s not easy to concretely imagine what that might look like for giant transformers), we’d still need to figure out how to combine concepts into worldstates so we can specify a utility function over those.
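To make the utility-rebinding framing concrete, here is a deliberately trivial sketch (entirely my own construction, not a proposal from John or Eliezer; all names and criteria are hypothetical): a utility function written over a coarse ontology has to be re-expressed, via some bridge mapping, when the world model is refined. The hard open problem is that for a learned model we don’t know how to find the bridge.

```python
# Toy sketch of the utility-rebinding problem (hypothetical throughout).

# Old, coarse ontology: worldstates are dicts over human-level concepts.
def old_utility(state):
    return 1.0 if state["diamond_present"] else 0.0

# New, finer-grained ontology: the agent now models atoms and lattices,
# and "diamond" no longer appears as a primitive.
def bridge(new_state):
    # Hypothetical bridge criterion: carbon atoms in a tetrahedral lattice
    # count as a diamond. For an actual learned model, extracting such a
    # mapping from its internals is exactly the unsolved part.
    return {"diamond_present": new_state["carbon_atoms"] > 0
            and new_state["lattice"] == "tetrahedral"}

# Rebinding = composing the old utility with the bridge.
def rebound_utility(new_state):
    return old_utility(bridge(new_state))

print(rebound_utility({"carbon_atoms": 10**22, "lattice": "tetrahedral"}))  # 1.0
print(rebound_utility({"carbon_atoms": 10**22, "lattice": "graphite"}))     # 0.0
```

In this toy the bridge is hand-written and the ontologies are tiny dicts; the post’s point is precisely that for a giant transformer we have neither the worldstate representation nor any way to construct the bridge.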