Though the statement doesn’t say much, the list of signatories is impressively comprehensive. The only conspicuously missing names that immediately come to mind are Dean and LeCun (I don’t know if they were asked to sign).
The statement not saying much is essential for getting an impressively comprehensive list of signatories: the more you say, the more likely it is that someone whom you want to sign will disagree.
Relatedly, when we made DontDoxScottAlexander.com, we tried not to wade into a bigger fight about the NYT and other news sites, nor to make it an endorsement of Scott and everything he’s ever written/done. It just focused on the issue of not deanonymizing bloggers when revealing their identity is a threat to their careers or personal safety and there isn’t a strong ethical reason to do so. I know more high-profile people signed it because the wording was conservative in this manner.
IMO, Andrew Ng is the most important name that could have been there but isn’t. Virtually everything I know about machine learning I learned from him, and I think there are many others for whom that is true.
For anyone who wasn’t aware, both Ng and LeCun have strongly indicated that they don’t believe existential risks from AI are a priority. Summary here.
You can also check out Yann’s Twitter.
Ng believes the problem is “50 years” down the track, and Yann believes that many of the concerns AI Safety researchers have are not legitimate. Both view talk of existential risk as a distraction and believe we should address problems that demonstrably harm people today.
He posted on Twitter a request to talk to people who feel strongly here.
I’d say the absence of names from Facebook, Amazon, and Apple in general is worrying, as is the fact that there were only two from Microsoft. Apple’s absence, in particular, is what keeps me up at night.
Does anyone see any hardware names?
What is it about hardware? I’ve never seen anyone from there express concern.
I wonder if it’s that, for anyone else in AI, their research is either fairly neutral (not accelerating towards AGI), or, if it is aimed at AGI, it could be repurposed towards alignment. But if your identity is rooted in hardware, then admitting to any amount of extinction risk leaves you no way to keep your job and stay sane?
Yann LeCun, at least, is very, very loudly and repeatedly open on Twitter about considering X-risk a bunch of doomerist nonsense, so we know where he (and thus Facebook) stands.
We don’t hear much about Apple in AI—curious why you rank them so important.
Here is the coverage on the “most frequently quoted online media product in Germany”: Spiegel.de
I mention this mainly to note that even if you get close to a consensus among experts, a newspaper website may still write a paragraph about it that gives the impression that the distribution of expert opinion is completely unclear: “However, there is also disagreement in the research community. Meta’s AI chief scientist Yann LeCun, for example, who received the Turing Award together with Hinton and Bengio, has not wanted to sign any of the appeals so far. He sometimes describes the warnings as ‘AI doomism’” (linking to a Twitter thread by LeCun).
To be clear, the statement and its coverage are very impressive.
Seems extremely likely (90%) that either someone asked them to sign or people thought it very unlikely they would. I’d guess the second: LeCun doesn’t seem to want to sign something like this.