This is, to a certain extent, the drawing out of the true crux behind a lot of my disagreements with the doomer view, especially on LW. I suspect that the trajectory of AI has fundamentally been the story of a normal technology. In particular, I see one very important adversarial assumption failing to materialize, which is evidence against it (absence of evidence is evidence of absence), and the assumption didn't even buy us that much, since it still isn't predictive enough to ground the case for AI doom.
I'm talking about, of course, the assumption of essentially unbounded instrumental goals/power-seeking.
Indeed, it's a pretty large crux: if I believed the adversarial framework were a useful model for AI safety, I'd be a lot more worried about AI doom today.
To put it another way, this hits at one of my true rejections of many of the frameworks often used by doomers/decelerationists.