This is a great comment, but you don’t need to worry that I’ll be indoctrinated!
I was actually using that terminology a bit tongue in cheek, as I perceive exactly what you say about the religious fervour of some AI alignment proponents. I think the general attitude and vibe of Yudkowsky etc is one of the main reasons I was suspicious about their arguments for AI takeoff in the first place.
I also agree, despite thinking that deceptive alignment (which is misaligned) is the likely outcome by default. I too dislike much of the early writing, despite believing AGI is highly probable this century, because it puts too much probability on FOOM, whereas I assign FOOM only a 2-3% probability.
However, I disagree with your criticism of thinkism, primarily because my crux is that past public communication about AI risk had the exact opposite effect: people failed to register the Doom part while enthusiastically embracing the Powerful part.
Another part is that even competent societies probably could slow down AGI, but if misalignment surfaced during a crisis, or earlier via deceptive alignment, society would auto-lose. In other words, people assign too much probability mass to an equalized fight because of sci-fi, when the distribution of outcomes is more binary than that.