I think evolution clearly provides some evidence for things like inner optimizers, deceptive alignment, and “AI takeoff which starts with ML and human-understandable engineering (e.g. scaffolding/prompting), but where different mechanisms drive further growth prior to full human obsolescence”[1].
Personally, I’m quite sympathetic overall to Zvi’s response post (which you link) and I had many of the same objections. I guess further litigation of this post (and the response in the comments) might be the way to go if you want to go down that road?
I overall tend to be pretty sympathetic to many objections to hard takeoff, “sharp left turn” concerns, and high probability on high levels of difficulty in safely navigating powerful AI. But I still think that the “AI optimism” cluster is too dismissive of the case for despair and overconfident in the case for hope. And a bunch of this argument has maybe already occurred and doesn’t seem to have gotten very far. (Though the exact objections I would raise to the AI optimist people are moderately different from most of what I’ve seen so far.) So, I’d be pretty sympathetic to just not trying to target them as an audience.
Note that key audiences for doom arguments are often like “somewhat sympathetic people at AI labs” and “somewhat sympathetic researchers or grantmakers who already have some probability on the threat models you outline”.
This is perhaps related to the “sharp left turn”, but I think the “sharp left turn” concept is poorly specified and might conflate a bunch of separate (though likely correlated) things. Thus, I prefer being more precise.