I agree that the AI would only learn the abstraction layers it has a use for. But I wouldn't take it as far as you do. With "human values" specifically, I agree the problem may be just that muddled, but the other nice targets (moral philosophy, corrigibility, DWIM) should be more concrete.
The alternative would be a straight-up failure of the NAH, I think; your assertion that "abstractions can be on a continuum" seems directly at odds with it. That's not impossible, but this post is premised on the NAH working.