I don’t think it depends on how much A and B there is, because the “expected amount” is not a special point. In this context, the update I made personally was: “There are more shifts than I thought there were, therefore there’s probably more A and B than I thought there was, therefore I should weakly update against AI safety being important.” Maybe (to make A and B more concrete) there being more shifts than I thought downgrades my opinion of the original arguments from “absolutely exceptional” to “very very good”, which slightly downgrades my confidence that AI safety is important.
As a separate issue, conditional on the field being very important, I might expect the original arguments to be very very good, or I might expect them to be very good, or something else. But I don’t see how that expectation can prevent a change from “absolutely exceptional” to “very very good” from downgrading my confidence.
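To put toy numbers on it (all of these are made up, purely illustrative): even if “very very good” arguments are roughly what you’d expect conditional on the field being important, observing that quality level instead of “absolutely exceptional” still leaves you with a lower posterior than you would otherwise have had.

```python
# Toy Bayes calculation with made-up numbers, just to illustrate the point:
# observing "very very good" arguments instead of "absolutely exceptional"
# ones lowers P(important), even if "very very good" was a live expectation
# conditional on importance.

p_important = 0.8  # hypothetical prior that AI safety is important

# Hypothetical likelihoods of the original arguments reaching each quality level
p_exceptional_given_important = 0.5
p_exceptional_given_not = 0.1
p_very_good_given_important = 0.4
p_very_good_given_not = 0.3

def posterior(p_e_given_h, p_e_given_not_h, prior):
    """Posterior P(H | E) via Bayes' rule."""
    return (p_e_given_h * prior) / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    )

# Confidence if the arguments really were "absolutely exceptional": ~0.95
print(posterior(p_exceptional_given_important, p_exceptional_given_not, p_important))

# Confidence if they turn out to be merely "very very good": ~0.84
print(posterior(p_very_good_given_important, p_very_good_given_not, p_important))
```

With these (again, invented) numbers, the downgrade in how good the arguments look moves my confidence from about 0.95 to about 0.84, regardless of which quality level I “expected” going in.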
Ok, I think I misinterpreted when you said “I think A and B would be weak evidence against the importance of AI safety”. My current understanding is that you’re saying that if you think there is more A and B (at a particular point in time) than you previously thought (for the same time period), then you should become less confident in the importance of AI safety (which I think makes sense). My previous interpretation was that if you hadn’t updated on A and B yet (e.g., because you neglected to consider them as evidence, or because you left the field early before any A and B could have happened yet and then came back), then upon updating on the existence of A and B you should now be less confident of the importance of AI safety than you were.
Now that that’s hopefully cleared up, I wonder how you used to see the history of the arguments for the importance of AI safety, and what (e.g., was there a paper or article that) made you think there were fewer shifts than there actually were.