the last few times people tried naming this thing, others shifted to using it in a more generic way that didn’t engage with the primary cruxes of the original namers
Yes, but, that’s because:
“AI Safety” and “AI Alignment” aren’t sufficiently specific names, and I think you really can’t complain when those names end up getting used to mean things other than existential safety
(Which I agree with you about.)
the word is both specific enough, and sounds low-status enough, that you can’t possibly redefine it in a vague applause-lighty way that people end up Safetywashing
OK, but now it’s being used on (eg) Twitter as an applause light for people who already agree with Eliezer, and the net effect of that is negative. Either it’s used internally in places like LessWrong, where it’s unnecessary, or it’s used in public discourse, where it sounds dumb, which makes it counterproductive.
And, sure, there should also be a name that is, like, prestigious and reasonable-sounding and rolls off the tongue. But most of the obvious words are kind of long and a mouthful, and are likely to have syllables dropped for convenience
Yes, that’s what I’m trying to make a start on getting done.
as a joke-name it went overboard, and it’s getting used more often than it should
Yes, that is what I think. Here’s a meme account on Twitter. Here’s Zvi using it. These are interfaces to people who largely think it sounds dumb.
I agree it’s getting used publicly. And, to be clear, I don’t have that strong an opinion on this, I’m not defending the phrase super hard. But, you haven’t actually justified that a bad thing is definitely happening from my perspective.
Some people on the internet think a thing sounds dumb, sure. The thing is that pushing an Overton window basically always has people laughing at you and thinking you’re dumb, regardless. People say AI concerns are a weird silly outlandish doomer cult no matter how everything is phrased.
The goal here (on the part of the people saying the phrase) is not “build the biggest tent”, nor is it “minimize sounding dumb”. It’s “speak plainly and actually convey a particular really bad thing that is likely to happen. Ensure that enough of the right people notice that an actual really bad thing is likely to happen, in a way they don’t gloss over and minimize.”
Your post presumes “we’re trying to build a big tent movement, and it should include things other than AI killing everyone.” But, in fact, we spent several years where most of the public messaging was big-tent-ish. And it seemed like this did not actually succeed strategically.
Put another way – I agree that maybe it’s correct to not sound dumb here. But I absolutely think you need to be willing to sound dumb, if that turns out to be the correct strategy. When I see posts like this I think they are often driven by a generator that is not actually about optimizing for winning at a strategic goal, but about avoiding social stigma (which is a very scary thing).
(I think there are counter-problems within the LW sphere of being too willing to be contrarian and edgy. But you currently haven’t done any work to justify that the problem here is being too edgy rather than not edgy enough.)
(Meanwhile I super endorse trying to come up with non-dumb-sounding things that actually achieve the various goals. But, note that the people saying “AI notkilleveryonism” are specifically NOT optimizing for “build the biggest tent.”)
People say AI concerns are a weird silly outlandish doomer cult no matter how everything is phrased.
No, you’re dead wrong here. Polls show widespread popular concern about AI developments. You should not give up on “not seeming like a weird silly outlandish doomer cult”. If you want to actually get things done, you cannot give up on that.
Hmm. So I do agree the recent polls, which showed people being “generally worried” and supporting the Pause open letter, are an important strategic consideration here. I do think it’s fairly reasonable to argue “look man, you actually have the public support, please don’t fuck it up.”
So, thank you for bringing that up.
It still feels like it’s not actually a counterargument to the particular point I was making – I do think there are (many) people who respond to taking AI extinction risk seriously with ridicule, no matter how carefully it’s phrased. So if you’re just running the check of “did anyone respond negatively to this?” the check will basically always return “yes”, and it takes a more careful look at the situation to figure out what kind of communications strategy actually works.
I think we’re on the same page here. Sorry if I was overly aggressive there; I just have strong opinions on that particular subtopic.