Do you think that the salons he held included leaders in the AI safety field or just people in his normal circles who read a magazine article or two?
Perhaps you’re just being facetious, but I think “people… who read a magazine article or two” underestimates the kind of person who would be in Henry Kissinger’s circles.
And, last year, Henry Kissinger jumped on the peril bandwagon, holding a confidential meeting with top A.I. experts at the Brook, a private club in Manhattan, to discuss his concern over how smart robots could cause a rupture in history and unravel the way civilization works.
My guess is that he consulted the people who seemed like the obvious experts to consult, namely AI experts. And that these may or may not have included people who were up-to-date on the subfield of AI Safety (which would have been more obscure in 2016 when these meetings were taking place).
Based on the content of the initial op-ed, I am confident in my assertion.
Long familiarity with Kissinger’s work tells me he knows that not even he is immune to the Dunning-Kruger effect, and that he takes steps to mitigate it. I assess that this op-ed was written after an extremely credible effort to inform himself on the state of the field of AI. Unfortunately, based on my analysis of the content of the op-ed, that effort either failed to identify the AI safety community of practice, or determined that its current outputs were not worth detailed attention.
Kissinger’s eminence is unquestionable, so the absence of up-to-date ideas about AI safety points to a problem with the AI safety / x-risk community of practice’s ability to show its relevance to people who can actually take meaningful action on its conclusions.
If your primary concern in life is x-risk from technology, and the guy who literally once talked the military into ‘waiting until the President sobered up’ before launching the nuclear war the President ordered is either unaware of your work or doesn’t view it as useful, then either you have not marketed yourself effectively, or your work is not useful.
Oh, I had a very different read on this. In this article, Kissinger (or possibly people ghostwriting for him) seemed remarkably clear on several of the most important bits of AI safety. I think it’s unlikely he’d have run into those bits if he *hadn’t* ended up talking to people involved with actual AI safety.
I currently think it more-likely-than-not that he’s spoken to some combination of Stuart Russell, Max Tegmark and/or Nick Bostrom.
(This is based in part on some background knowledge that some of those people have gotten to talk to heads of state before).
The fact that he doesn’t drill into the latest gritty details (whether of the MIRI camp, the OpenAI camp, or anyone else), or mention any specific organizations, strikes me as having way less to do with how informed he is, and way more to do with his goals for the article, which are to lend his credibility to the basic ideas behind AI risk and to build momentum towards some kind of intervention. (As noted elsewhere, I’m cautious about government involvement, but if you take that goal at face value, I think this article basically hits the notes I’d want it to hit.)
(My guess is he’s not fully informed on everything, just because there’s a lot to be fully informed on, but the degree of understanding he shows here has me relatively happy. I expect that when it comes time to Actually Policy Wonk on this, he’ll have connected with the right people and will make at least a better-than-average* effort to be informed.)
*This is not a claim that better-than-average would be good enough, just that it’s good enough that it doesn’t feel correct to conclude that the AI Safety community has utterly failed at marketing.