To be clear, I’d agree that the use of the phrase “algorithmic complexity” in the quote you give is misleading. In particular, given an AI designed such that its preferences can be specified in some stable way, the important question is whether the correct concept of ‘value’ is simple relative to some language that specifies this AI’s concepts. And the AI’s concepts are of course formed in response to its entire observational history. Concepts that are simple relative to everything the AI has seen might be quite complex relative to “normal” reference machines that people intuitively think of when they hear “algorithmic complexity” (like the lambda calculus, say). And so it may be true that value is complex relative to a “normal” reference machine, and simple relative to the AI’s observational history, thereby turning out not to pose all that much of an alignment obstacle.
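(One way to make that distinction precise, as a sketch in notation of my own choosing rather than anything either of us committed to above, is plain versus conditional Kolmogorov complexity, with the AI’s observational history as the conditioning data:

```latex
% Sketch only. Here v is a full specification of the 'value' concept, h is the
% AI's observational history, and U is a fixed "normal" reference machine
% (e.g. an interpreter for the lambda calculus).
\[
  K_U(v) \;=\; \min\{\, |p| : U(p) = v \,\},
  \qquad
  K_U(v \mid h) \;=\; \min\{\, |p| : U(p, h) = v \,\}.
\]
% "Value is complex" in the usual sense is the claim that K_U(v) is large;
% "value is simple relative to everything the AI has seen" is the claim that
% K_U(v | h) is small. These are compatible, since conditioning can only help
% (a program may simply ignore h):
\[
  K_U(v \mid h) \;\le\; K_U(v) + O(1).
\]
```

On that reading, the scenario above is just the case where the gap between those two quantities turns out to be large.)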
In that case (which I don’t particularly expect), I’d say “value was in fact complex, and this turned out not to be a great obstacle to alignment” (though I wouldn’t begrudge someone else saying “I define complexity of value relative to the AI’s observation-history, and in that sense, value turned out to be simple”).
Insofar as you are arguing “(1) the Arbital page on complexity of value does not convincingly argue that this will matter to alignment in practice, and (2) LLMs are significant evidence that ‘value’ won’t be complex relative to the actual AI concept-languages we’re going to get”, I agree with (1), and disagree with (2), while again noting that there’s a reason I deployed the fragility of value (and not the complexity of value) in response to your original question (and am only discussing complexity of value here because you brought it up).
re: (1), I note that the argument is elsewhere (and has the form “there will be lots of nearby concepts” + “getting almost the right concept does not get you almost a good result”, as I alluded to above). I’d agree that one leg of possible support for this argument (namely “humanity will be completely foreign to this AI, e.g. because it is a mathematically simple seed AI that has grown with very little exposure to humanity”) won’t apply in the case of LLMs. (I don’t particularly recall past people arguing this; my impression is rather one of past people arguing that of course the AI would be able to read Wikipedia and stare at some humans and figure out what it needs to about this ‘value’ concept, but the hard bit is in making it care. But it is a way things could in principle have gone that would have made complexity-of-value much more of an obstacle, and things did not in fact go that way.)
re: (2), I just don’t see LLMs as providing much evidence yet about whether the concepts they’re picking up are compact or correct (cf. monkeys don’t have an IGF concept).
Okay, that clarifies a lot. But the last paragraph I find surprising.
re: (2), I just don’t see LLMs as providing much evidence yet about whether the concepts they’re picking up are compact or correct (cf. monkeys don’t have an IGF concept).
If LLMs are good at understanding the meaning of human text, they must be good at understanding human concepts, since concepts are just the meanings of the words the LLM understands. Do you doubt that they really understand text as well as it seems? Or do you mean they are picking up other, non-human concepts as well, and that this is a problem?
Regarding monkeys: they apparently don’t understand the IGF concept because they aren’t good enough at reasoning abstractly about evolution and unobservable entities (genes), and because they lack empirical knowledge that humans themselves only acquired recently. I’m not sure how that would be an argument against advanced LLMs grasping the concepts they seem to grasp.