This is much better than any of his other speaking appearances. The short format, and TED’s excellent talk editing/coaching, have really helped.
This is still terrible.
I thought it was a TEDx talk, and perhaps the worst TEDx talk I’ve seen. (I agree that it’s rare to see a TEDx talk with good content, but the deliveries are usually vastly better than this.)
I love Eliezer Yudkowsky. He is the reason I’m in this field, and I think he’s one of the smartest human beings alive. He is also one of the best-intentioned people I know. This is not a critique of Yudkowsky as an individual.
He is not a good public speaker.
I’m afraid having him as the public face of the movement is going to be devastating. The reactions I see to his public statements indicate that he is creating polarization. His approach makes people want to find reasons to disagree with him. And individuals motivated to do that will follow their confirmation bias to focus on counterarguments.
I realize that he had only a few days to prepare this. That is not the problem. The problem is a lack of public communication skills. Those are very different from communicating with your in-group.
Yudkowsky should either level up his skills, rapidly, or step aside.
There are many others with more talent and skills for this type of communication.
Eliezer is rapidly creating polarization around this issue, and that is very difficult to undo. We don’t have time to do that.
Could we bull through with this approach, and rely on the strength of the arguments to win over public opinion? That might work. But doing that instead of actually thinking about strategy and developing skills would hurt our odds of survival, perhaps rather badly.
I’ve been afraid to say this in this community. I think it needs to be said.
I’m not sure I agree. Consider the reaction of the audience to this talk: uncomfortable laughter, but also a pretty enthusiastic standing ovation. I’d guess the latter happened because the audience saw Eliezer as genuine: he displayed raw emotion, spoke bluntly, and at no point came across as someone making a play for status. He fit neatly into the “scientist warning of disaster” archetype, which isn’t a figure that’s expected to be particularly skilled at public communication.
A more experienced public speaker would certainly be able to present the ideas in a more high-status way, and I’m sure there would be a lot of value in that. But the goal of increasing the status of the ideas might to some degree trade off against communicating their seriousness: a person skillfully arguing a high-status idea has a potential ulterior motive that someone like Eliezer clearly doesn’t. To get the same sort of reception from an audience that Eliezer got in this talk, a more experienced speaker might need to intentionally present themselves as lacking polish, which wouldn’t necessarily be the best way to use their talents.
Better, maybe, to platform both talented PR people and unpolished experts.
This is an excellent point. This talk didn’t really sound condescending, as every other presentation I’ve seen from him did. Condescension and other signs of disrespect are what create polarization. So perhaps it’s that simple, and he doesn’t need to skill up further.
I suspect he does need to skill up to avoid sounding hostile and condescending in conversation, though. The short talk format with practice and coaching may have fixed the real problems.
I agree that sounding unpolished might be perfectly fine.
I’m with you on this. I think Yudkowsky was a lot better in this with his more serious tone, but even so, we need to look for better.
Popular scientific educators would be a place to start and I’ve thought about sending out a million emails to scientifically minded educators on YouTube, but even that doesn’t feel like the best solution to me.
The sort of people who are listened to are the more political types, so I think they are the people to reach out to. You might say they need to understand the science to talk about it, but I’d still put more weight on charisma than on scientific authority.
Anyone have any ideas on how to get people like this on board?
I just read your one post. I agree with it. We need more people on board. We are getting that, but finding more people with PR skills seems like a good idea.
I think the starting point is finding people who are already part of this community who are interested in brainstorming about PR strategy. To that end, I’m writing a post on this topic.
Getting charismatic “political types” to weigh in is unlikely to help with “polarization.” That’s what happened with global warming/climate change.

A more effective strategy might be to lean into the polarization: make “AI safety” an issue of tribal identity, which members will support reflexively against enemies. That might delay technological advancement for long enough.
It seems like polarization will prevent public policy changes. If half of the experts think that regulation is a terrible idea, how would governments decide to regulate? Worse yet, if some AI and corporate types are on the other side of polarization, they will charge full speed ahead as a fuck-you to the irritating doomers.
I think there’s a lot we could learn from climate change activists. Having a tangible ‘bad guy’ would really help, so maybe we should be framing it more that way.
“The greedy corporations are gambling with our lives to line their pockets.”
“The governments are racing towards AI to win world domination, and Russia might win.”
“AI will put 99% of the population out of work forever and we’ll all starve.”
And a better way to frame the issue might be “Bad people using AI” as opposed to “AI will kill us”.
If anyone knows of any groups working towards a major public awareness campaign, please let the rest of us know about it. Or maybe we should start our own.
There’s a catch-22 here: wording that’s too extreme will put people off, because they’ll lump all doomsayers in one boat, whether the fears are over AI, UFOs, or Cthulhu, and dismiss them equally. (It’s like there’s a tradeoff between level of alarm and credibility.)
And on the other hand, claims will also be dismissed if the perceived danger is toned down in the wording. The best way to get the message across, in my opinion, is either to have more influential people spread the message (as previously recommended) or to organize focus testing on which parts of the message people don’t understand and workshop how to get them across. If I had to take a crack at structuring a clear, persuasive message, my intuition is to explain the current environment, current AI capabilities, and a specific timeline, and then let the reader work out the implications.
Examples:
‘Nearly 80% of the labor force works in service jobs, and current AI technology can do most of those jobs. In ~5 years, AI workers could be more proficient and economical than humans.’
‘It’s impossible to know what a machine is thinking. When running large-language-model-based AI, researchers don’t know exactly what they’re looking at until they analyze the metrics. Within 10-30 years, an AI could reach a superintelligent level, and it wouldn’t be immediately apparent.’
“The reactions I see to his public statements indicate that he is creating polarization.”

I had the opposite impression, from this video and in general: that Yudkowsky is good at avoiding polarizing statements, while still not compromising on saying what he actually thinks. Compare him with Hinton, who throws around clearly politically coded statements.
Do you have a control to infer he’s polarizing? I suspect you are looking at a confounded effect.
It sounds like you’re referring to political polarization. I’m referring to a totally different type of polarization, purely on the issue of AI development.
My evidence that Yudkowsky in particular is creating polarization is hearing his statements referred to frequently by other commentators, with apparent negative emotions. There is a potential confound here in that Yudkowsky is the loudest voice. However, I think what I’m observing is stronger than that. Other safety advocates who’ve gotten real press, like Hinton, Tegmark, and Hawking, frame the argument in more general and less strident ways, and I have not heard their statements used as examples by people who sound emotionally charged on the opposite side.
That’s why it seems to me that Yudkowsky is making a systematic communication mistake, one that creates emotionally charged opposition to his views. Which is a big problem if it’s true.
It seems to me like AGI risk needs a “Zeitgeist: Addendum” / “Venus Project” style movie for the masses. Open up the Overton window and touch on things like mesa-optimization without boring the average person to death.
The /r/controlproblem FAQ is the most succinct summary I’ve seen, but I couldn’t get the majority of average folks to read it if I tried, and it would still go over their heads.
There is this documentary: https://en.wikipedia.org/wiki/Do_You_Trust_This_Computer%3F Probably not quite what you want. Maybe the existing videos of Robert Miles (on Mesa-Optimization and other things) would be better than a full documentary.
I love Robert Miles, but he suffers from the same problem as Eliezer or, say, Connor Leahy: not a radio voice, not a movie face. Also, his existing videos are “deep dive” style.
You need to be able to introduce the overall problem and the reasons/deductions for why and how it’s problematic, address the obvious pushback (which the Reddit control problem FAQ does well), and then introduce the more “intelligentsia” concepts like “mesa-optimization” in an easily digestible manner, for a population with an average reading comprehension at a 6th-grade level and a 20-second attention span.
So you could work off of Robert Miles’s videos, but they need to fit into a narrative/storytelling format: beginning, middle, and end. The end should be basically where we’re all at, “we’re probably all screwed, but it doesn’t mean we can’t try,” followed by actionable advice (which should be sprinkled throughout the film; that’s foreshadowing).
Regarding that documentary, I see a major flaw in drifting off into specifics like killer drones. The media has already primed people’s imaginations with lots of the specific ways x-risk or s-risk might play out (the Matrix trilogy, Black Mirror, etc.). You could go down an entire rabbit hole on just nanotech or bioweapons. IMO you sprinkle those about to keep the audience engaged (and so that the takeaway isn’t just “something something paperclips”), but driving into them too much gets you lost in the weeds.
For example, I foresaw the societal problems of deepfakes, but the way it’s actually played out (mass-distributed, powerful LLMs people can DIY with), coupled with the immediacy of the employment problem, introduces entirely new vectors of social-cohesion problems I hadn’t thought through at all. So, better to broadly introduce individual danger scenarios while keeping the narrative focused on the value alignment / control problems themselves.
Thanks, I’ve read that FAQ but I’ll check it out again.
A good documentary might very well be an important step. I’m not familiar with your example films. I don’t really like the idea of fictionalizing the arguments since that’s an obviously sketchy way of making your points. However, if one was done in detail with really realistic portrayals of the problems and a very plausible path to AGI ruin, it might be really useful… unfortunately, Hollywood does not traffic in detail and realism by default, so I wouldn’t get my hopes up on that.
Right, right. It doesn’t need to be fictionalized, just a kind of fun documentary. The key is, this stuff is not interesting for most folks. Mesa-optimization sounds like a snore.
You have to be able to walk the audience through it in an engaging way.
I’d go with “a bunch of weird stuff might happen. That might kill us all, because of instrumental convergence...”
You should pull them up on YouTube or whatever and then just jump around (sound off is fine); the filmmaker is independent. I’m not saying that particular producer/filmmaker is the go-to, but the “style” and “tone” and overall storytelling fit the theme.
“Serious documentary about the interesting thing you never heard about.” Also, this was really popular with young adults when it came out; it caught on with a generation of young Americans who came of age during 9/11 and the Middle East invasions, and it sort of shaped what became the Occupy Wall Street movement. Now, that’s probably not exactly the demographic you want to target, since most of them are tech-savvy enough that they’ll stumble upon this on their own (although they do need something digestible). But broadly speaking, it seems to me that having a cultural “phenomenon” that brings this more into the mainstream and introduces the main takeaways and concepts is a must-have project for our efforts.
I googled “Zeitgeist Addendum” and it does not seem to be a thing that would be useful for AGI risk:
- it is a follow-up to a 9/11 conspiracy movie
- it has some naive economic ideas (like that abolishing money would fix a lot of issues)
- the Venus Project appears to not be very successful

Do you claim the movie had any great positive impact or presented any new, true, and important ideas?
OK, well. Let’s forget that exact example (which I now admit having not seen in almost twenty years).
I think we need a narrative-style film/docudrama. Beginning, middle, end. Story-driven.
1.) Introduces the topic.
2.) Expands on it and touches on the concepts.
3.) Explains them in an ELI5 manner.
It should include all the relevant things, like value alignment, control, inner and outer alignment, etc., without “losing” the audience.
Similarly, if it’s going to touch on niche examples of x-risk or s-risk, it should just whet the imagination without pulling down the entire edifice and losing the forest for the trees.
I think this is a format that a wider swathe of people is more likely to engage with. I also think (as I stated elsewhere in this thread) that Rob Miles, Yudkowsky, and a large number of other AI experts can be quoted or summarized, but they don’t offer the tonality/charisma to keep an audience engaged.
Think Attenborough and the Planet Earth series.
It also seems sensible to me to meld in Socratic questioning/rationality, bringing the audience into the fold through the deductive reasoning that leads to the conclusions, versus just outright feeding it to them upfront. It’s going to be very hard to make a popular movie that essentially promises catastrophe. However, if the narrator is asking the audience as it goes along, “Now, given the alien nature of the intelligence, why would it share human values? Imagine for a moment what it would be like to be a bat...”, then by the time you get to the summary points, any audience member with an IQ above 80 is already halfway or more to the point independently.
That’s what I like about the Reddit /r/controlproblem FAQ: it touches on all the basic superficial/kneejerk questions that anyone who hasn’t read all of “Superintelligence” would have when casually introduced to this.