I initially downvoted as well, because your first argument looks quite weak (this post has little bearing on the difficulty of the alignment problem, so what update are you getting on that from here?), as does the follow-up "all we need is...", which is a formulation that hides problems instead of solving them. Still, your last point does carry weight, and stating it explicitly lets everyone address it, so I switched to an upvote for honesty, though I strongly disagree.
To the point: I also want to avoid being in a doomist cult. I'm not a die-hard, long-time "we're doomed if we don't align AI" person, but my reading over the last year has indeed convinced me of the urgency of the problem. Am I being hoodwinked by a doomist cult with very persuasive rhetoric? Am I hoodwinking others when I talk about these problems and they, too, start transitioning into alignment work?
I answer these questions not by reasoning from "resemblance" (i.e. how much this looks like a doomist cult) but by going into finer detail. The implicit argument in calling [the people who endorse the top-level post] a doomist cult is that they share the properties of other doomist cults (being wrong, having bad epistemics/policy, preying on isolated/weird minds) and are therefore bad. I understand having a low prior on doomist-cult look-alikes actually being right (there is no known instance of an end-of-the-world doomist cult being right), but that's no reason to turn into a rock (as in https://astralcodexten.substack.com/p/heuristics-that-almost-always-work?s=r) that believes "no doom prophecy is ever right". You can't prove that no doom prophecy is ever right, only that such prophecies are rarely right (and can be right at most once).
I therefore advise changing your question from "do [the people who endorse the top-level post] look like a doomist cult?" to "what level of argument and evidence would be sufficient for me to take this doomist-cult-looking group seriously?". It's not a bad thing to call doom when doom is on the way. Engage with the object-level arguments rather than with the pre-cached pattern match "this looks like a doom cult, so it's bad/not serious". Personally, I had qualms similar to the ones you're expressing, but having looked into the arguments, I find it much stronger and more real to believe "alignment is hard and by default AGI is an existential risk" than not. I hope your conversation with Ben is productive and that I haven't only expressed points you've already considered (FYI, they have already been discussed on LessWrong).