Regarding how interpretability can help with addressing motivation issues, I think Chris Olah’s views point to situations where interpretability can potentially sidestep some of those issues. For example, if we use interpretability to aid in model design, we might gain confidence that our system isn’t a mesa-optimizer, and we’d have done this without explicitly asking questions about “what our model desires”.
I agree that this is far from the whole picture. The scenario you describe is an example where we’d want to make interpretability more accessible to end-users. There is definitely more work to be done to bridge “normal” human explanations with what we can get from our analysis.
I’ve spent more of my time thinking about the technical sub-areas, so I’m focused on situations where innovations there can help. I don’t mean to say that this is the only place where progress is useful.
That seems more than reasonable to me, given the current state of AI development.
Thanks for sharing your reflections on my comment.