(I do think ontological shifts continue to be relevant to my description of the problem, but I’ve never been convinced that we should be particularly worried about ontological shifts, except inasmuch as they are one type of possible inner alignment / robustness failure.)
I feel that the whole AI alignment problem can be seen as problems with ontological shifts: https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1
I think I agree at least that many problems can be seen this way, but I suspect that other framings are more useful for solutions. (I don’t think I can explain why here, though I am working on a longer explanation of what framings I like and why.)
What I was claiming in the sentence you quoted was that I don’t see ontological shifts as a huge additional category of problem that isn’t covered by other problems, which is compatible with saying that ontological shifts can also represent many other problems.
Cheers, that would be very useful.