Pamphlets work for wells in Africa. They don’t work for MIRI’s mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.
Eliezer spent SIAI’s early years appealing directly to people about AI. Some good people found him, but that audience was filtered for “interest in future technology” rather than “able to think,” so when Eliezer made basic arguments about e.g. the orthogonality thesis or basic AI drives, the responses he got were basically random (except from the few good people). So Eliezer wrote The Sequences and HPMoR, and now the filter is “able to think,” or at least “interest in improving one’s thinking,” and these people, in our experience, are much more likely to do useful things when we present the case for EA, for x-risk reduction, for FAI research, etc.
Still, we keep trying direct mission appeals, to some extent. I’ve given my standard talk, currently titled “Effective Altruism and Machine Intelligence,” at Quixey, Facebook, and Heroku. This talk explains effective altruism, astronomical stakes, the x-risk landscape, and the challenge of FAI, all in 25 minutes. I don’t know yet how much good this talk will do. There’s Facing the Intelligence Explosion and the forthcoming Smarter Than Us. I’ve spent a fair amount of time promoting Our Final Invention.
I don’t think we can get much of anywhere with a 1-page pamphlet, though. We tried a 4-page pamphlet once; it accomplished nothing.
Didn’t you get convinced about AI risk by reading a short paragraph by I. J. Good?
Certainly there exist people who will be pushed to useful action by a pamphlet. They’re fairly common for wells in Africa, and rare for risks from self-improving AI. To get 5 “hits” with well pamphlets, you’ve got to distribute maybe 1,000 pamphlets. To get 5 hits with self-improving-AI pamphlets, you’ve got to distribute maybe 100,000 pamphlets. Obviously you should be able to target the pamphlets better than that, but then distribution and planning costs are a lot higher, and the cost per New Useful Person looks higher to me on that plan than on distributing HPMoR to leading universities and tech companies, which is a plan for which we already have good evidence of effectiveness, and which we are therefore doing.
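A quick back-of-envelope sketch of that comparison, using only the hit rates quoted above; the per-pamphlet cost is a hypothetical placeholder for illustration, not a figure from MIRI or anyone’s actual budget:

    # Rough cost-per-"hit" comparison using the hit rates quoted above.
    # COST_PER_PAMPHLET is an assumed, illustrative figure (printing +
    # distribution), not a real number from MIRI.
    COST_PER_PAMPHLET = 0.50  # dollars, assumed for illustration

    scenarios = {
        "wells in Africa": 1_000 / 5,       # ~200 pamphlets per hit
        "self-improving AI": 100_000 / 5,   # ~20,000 pamphlets per hit
    }

    for cause, pamphlets_per_hit in scenarios.items():
        cost_per_hit = pamphlets_per_hit * COST_PER_PAMPHLET
        print(f"{cause}: ~{pamphlets_per_hit:,.0f} pamphlets "
              f"(~${cost_per_hit:,.0f}) per New Useful Person")

Under those assumptions the gap is roughly a factor of 100 in cost per New Useful Person, which is the point being made; the absolute dollar figures are only as good as the assumed per-pamphlet cost.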
Yes.
But MIRI’s ideas have now influenced the mainstream. Since 2011 we have had Russell & Norvig, Barrat, etc., providing some proof by authority and social proof.
The next step is not to popularize the ideas to a mass audience, but to continue targeting the relevant elite audience, e.g. Gary Marcus (not that he really gets it).
HPMoR has had some success at reaching the younger and more flexible of these, but bringing some more senior people on board would allow junior researchers to do MIRI-style work without ruining their careers. As it stands, some are doing it as a part-time hobby during a PhD on another topic, which is a precarious situation.
MIRI is actually having some success at this. It seems this audience can now be targeted with a decent chance of success, and the value of success is high.
Here I am talking about the academic community, but the forward-thinking tech-millionaire community is a harder nut to crack and probably needs a separate plan.
I would hesitate to use failure during “SIAI’s early years” as evidence of how easy or hard the task is. First, the organization seems far more capable now than it was then. Second, the landscape has shifted dramatically even in the last few years: limited AI keeps expanding, and with it discussion of the potential impacts (most of it ill-informed, but still).
While I share your skepticism about pamphlets as such, I do tend to think that MIRI has a greater chance of shifting the odds away from UFAI through persuasion and education than by trying to build an FAI or by doing mathematical research.
I agree and would also add that “Eliezer failed in 2001 to convince many people” does not imply “Eliezer in 2013 is incapable of persuading people”. From his writings, I understand he has changed his views considerably in the last dozen years.
Who says the speculation about potential impacts is damagingly ill-informed? Just because people think of “AI,” then jump to “robots,” and then to “robots that are used to replace workers, destroy all our jobs, and then rise up in revolution as a robotic resurrection of Communism” doesn’t mean they’re not correctly reasoning that the creation of AI is dangerous.
The next time you give your talk, record it, and put it on YouTube.
Thanks, Luke. This is an informative reply, and it’s great to hear you have a standard talk! Is it publicly available, and where can I see it if so? Maybe MIRI should ask FOAFs to publicise it?
It’s also great to hear that MIRI has tried one pamphlet. I would agree that “This one pamphlet we tried didn’t work” points in the direction of “No pamphlet MIRI can produce will accomplish much,” but that conclusion is far from certain. I’d still be interested in the general case of “Can MIRI reduce the chance of UFAI x-risk through pamphlets?”
You may be right. But it is possible to convince intelligent non-rationalists to take UFAI x-risk seriously in less than an hour (I’ve tested this), and anything that can do that in a way that scales well would have a huge impact. What’s the Value of Information on trying to do that? You mention the Sequences and HPMoR (which I’ve sent to a number of people with the instruction “set aside what you’re doing and read this”). I definitely agree that they filter nicely for “able to think.” But they also require a huge time commitment on the part of the reader, whereas a pamphlet or blog post would not.
For what value of “taking seriously” is that statement true?
“Hear ridiculous-sounding proposition, mark it as ridiculous, engage explanation, begin to accept arguments, begin to worry about this, agree to look at further reading”
It could be useful to attach a note to any given pamphlet along the lines of “If you didn’t like or didn’t agree with the contents of this pamphlet, please tell us why at …”
Personally, I’d find it easier to just look at the contents of the pamphlet, with the understanding that 99% of people will ignore it, and see whether a second draft has the same flaws.
Thanks, Luke. This is an informative reply, and it’s great to hear you have a standard talk! Where can I find it? (or if it’s not publicly available, why isn’t it?)
Do you have more details on the 4 page pamphlet? I would be interested in seeing it, if it still exists. Obviously nobody would get from the single premise “This one pamphlet we tried didn’t work” to the conclusion “pamphlets don’t work”, so I’d still be interested in the general case of “Can MIRI reduce the chance of UFAI x-risk through pamphlets?”
I’d also love to know your reasoning behind the claim that pamphlets “don’t work for MIRI’s mission.” I am willing to believe that sentence, but given that it is possible to convince intelligent non-rationalists to take UFAI x-risk seriously (I’ve tested this), I would like to consider ways in which we can spread this.
There has got to be enough writing by now that an effective chain mail can be written.
ETA: The chain mail suggestion isn’t knocked down in Luke’s comment. If it’s not relevant or worth acknowledging, please explain why.
ETA2: As annoying as some chain mail might be, it does work, because it does get around. It can be a very effective method of spreading an idea.