Ok, I think I misinterpreted when you said “I think A and B would be weak evidence against the importance of AI safety”. My current understanding is that you’re saying: if you think there is more A and B (at a particular point in time) than you previously thought (for the same time period), then you should become less confident in the importance of AI safety, which I think makes sense. My previous interpretation was that if you hadn’t yet updated on A and B (e.g., because you neglected to consider them as evidence, or because you left the field early, before any A and B could have happened, and then came back), then upon updating on the existence of A and B you should now be less confident in the importance of AI safety than you were before.
Now that that’s hopefully cleared up, I wonder how you used to see the history of the arguments for the importance of AI safety, and what made you think there were fewer shifts than there actually were (e.g., was there a particular paper or article?).