If you know what it would look like if you were wrong about any of these, then that's uncertainty that's captured by your model.
Obviously there’s a spectrum here, but it sounds like you’re describing something significantly more specific than the central examples of Knightian uncertainty in my mind. E.g. to me Knightian uncertainty looks less like uncertainty about how far AI can bootstrap, and more like uncertainty about whether our current conception of “bootstrapping” will even turn out to be a meaningful and relevant concept.
My nearby reply is most of my answer here. I know how to tell when reality is off-the-rails wrt my model, because my model is made of falsifiable parts. I can even tell you what those parts are, and about the rails I'm expecting reality to stay on.
When I try to cash out your example, "maybe the whole way I'm thinking about bootstrapping isn't meaningful/useful", it doesn't seem like it's outside my meta-model? I don't think I have to do anything differently to handle it?
Specifically, my "bootstrapping" concept comes with some concrete pictures of how things go. I currently find the concept "meaningful/useful" because I expect these concrete pictures to be instantiated in reality. (Mostly because I expect reality to admit the "bootstrapping" I'm picturing, and I expect advanced AI to be able to find it.) If reality goes off my rails about this concept mattering, it will be because the concept doesn't apply in the way I'm thinking, and there were other pathways I should have been attending to instead.
tl;dr I think you can improve on “my models might break for an unknown reason” if you can name the main categories of model-breaking unknowns