I agree with this.
Unfortunately, I think there’s a fundamentally inside-view aspect of [problems very different from those we’re used to]. I think looking for a range of frames is the right thing to do—but deciding on the relevance of the frame can only be done by looking at the details of the problem itself (if we instead use our usual heuristics for relevance-of-frame-x, we run into the same out-of-distribution issues).
I don’t think there’s a way around this. Aspects of this situation are fundamentally different from those we’re used to. [Is different from] is not a useful relation—we can’t get far by saying “We’ve seen [fundamentally different] situations before—what happened there?”. It’ll all come back to how they were fundamentally different.
To say something mildly more constructive, I do still think we should be considering and evaluating other frames, based on our own inside-view model (with appropriate error bars on that model).
A place I’d start here would be:
1. Attempt to understand another frame.
2. See how far I need to zoom out before that frame's models become a reasonable abstraction for the problem-as-I-understand-it.
3. Find the smallest changes to my models that'd allow me to stick with this frame without zooming out so far. Assess the probability that these adjusted models are correct/useful.
For most frames, I end up needing to zoom out too far for them to say much of relevance—so this doesn’t much change my p(doom) assessment.
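To make that last point slightly more concrete, here is one toy way to picture it (illustrative numbers only, nothing rigorous): treat the overall estimate as a mixture over frames, each weighted by how relevant it looks after the zooming-out exercise. If most alternative frames end up with very little weight, the aggregate barely moves from the inside-view number.

```python
# Toy sketch with made-up numbers: an overall estimate as a relevance-weighted
# mixture over frames. When alternative frames get little weight (because they
# only apply after zooming out a long way), the aggregate stays close to the
# inside-view estimate.

inside_view = {"weight": 0.85, "p_doom": 0.40}
other_frames = [
    {"weight": 0.05, "p_doom": 0.05},  # e.g. an economics-flavoured frame
    {"weight": 0.10, "p_doom": 0.15},  # e.g. a "past technology transitions" frame
]

frames = [inside_view] + other_frames
total_weight = sum(f["weight"] for f in frames)
aggregate = sum(f["weight"] * f["p_doom"] for f in frames) / total_weight

print(f"inside-view estimate:  {inside_view['p_doom']:.2f}")
print(f"aggregate over frames: {aggregate:.2f}")  # barely shifted from the inside view
```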
It seems more useful to apply other frames to evaluate smaller parts of our models. I’m sure there are a bunch of places where intuitions and models from e.g. economics or physics do apply to safety-related subproblems.