Oprah doesn’t need everyone to like her. She wants the largest viewership possible.
MIRI doesn’t need everyone to support it. It wants the most supporters possible.
They don’t need to appeal to everyone, but they probably should appeal to a wider audience than they currently do (evidenced by there being only ~10 FAI researchers in the world) - and a different audience requires a different presentation of the ideas in order to be optimally effective.
I don’t think pointing new people toward Less Wrong would be as effective as creating a new pitch just for “ordinary people.” Luke’s Reddit AMA, Singularity 1-on-1 interview, and Facing the Singularity ebook were pretty good for this, but it doesn’t seem like many x-risk researchers have put much energy into marketing themselves to the broader public. (To be fair, in doing so, they might do more harm than good.)
The kinds of memes we want to push are more complex. I also don’t know if we have actually decided which memes we want to push. I personally don’t know enough about FAI to be confident in deciding which memes benefit the agenda of MIRI and FHI. If MIRI wants more PR, the first step would be to articulate what kinds of memes it actually wants to transmit to a broader public.
This was one of the suggestions in my post. :) Though I’m not sure it’s possible to communicate about AI and only spread “complex” memes. I think about memes more in terms of their positive and negative effects than in terms of their accuracy.
MIRI doesn’t need everyone to support it. It wants the most supporters possible.
I don’t think that’s the case. MIRI cares a lot more about convincing the average AI researcher than it cares about convincing the average person who watches CNN.
If you start a PR campaign about AI risk that results in bringing a lot of luddites into the AGI debate, it might be harder, not easier, for MIRI to convince AI researchers to treat UFAI as a serious risk, because the average AI researcher might notice that the luddites oppose AGI for all the wrong reasons. He’s not a luddite, so why should he worry about UFAI?
If you look at environmental policy, reducing mercury pollution and reducing CO2 emissions are both important priorities.
If you just look at what’s talked about in the mainstream media, you will find a focus on CO2 emissions. I think few people know how good the EPA’s policy on mercury pollution under Obama has been. The EPA made a really great move to reduce mercury pollution, but it didn’t hit major headlines.
The policy wasn’t the result of a press campaign; it mostly happened silently in the background. On the other hand, the fight over CO2 emissions is very intense, and the Obama administration didn’t get much done on that front.
I think about memes more in terms of their positive and negative effects than in terms of their accuracy.
That’s the sort of thing that’s better not said in public if you are actually serious about making an impact. If you want to say it, say it in a way that takes a full paragraph of multiple sentences and that isn’t easily quoted by someone at Gawker who writes an article about you in five years, when you do have a public profile. Bonus points for using vocabulary that allows people on LW to understand the idea you’re expressing, but not the average person who reads a Gawker article.
I also see something that contradicts the goal you laid out above. You said you wanted to spread the meme: “Belief without evidence is bad.”
If you start pushing memes because you like their effects and not because they are supported by good evidence, you don’t get “Belief without evidence is bad.”
If you start a PR campaign about AI risk that results in bringing a lot of luddites into the AGI debate, it might be harder, not easier, for MIRI to convince AI researchers to treat UFAI as a serious risk, because the average AI researcher might notice that the luddites oppose AGI for all the wrong reasons. He’s not a luddite, so why should he worry about UFAI?
Fair enough. I still believe there could be benefits to gaining wider support, but I agree that this is an area that will be determined mainly by the actions of elite specialized thinkers and the very powerful.
I also see something that contradicts the goal you laid out above. You said you wanted to spread the meme: “Belief without evidence is bad.” If you start pushing memes because you like their effects and not because they are supported by good evidence, you don’t get “Belief without evidence is bad.”
I’m not sure I see a contradiction there. I can see that if I say things that aren’t true and people believe them just because I said them, that would be believing without evidence. But “belief without evidence is bad” doesn’t have to be true 100% of the time in order for it to be a good, safe meme to spread. If your argument is that the spreading of “Utility > Truth” interferes with “Belief without evidence is bad” so that the two will largely cancel out, then (1) I didn’t include “Utility > Truth” on my incomplete list of safe memes precisely because I don’t think it’s safe and (2) the argument would only be persuasive if the two memes usually interfered with each other, which I don’t think is the case. In most situations, people knowing the truth is a really desirable thing. Journalism and marketing are exceptions where it could make sense to oversimplify a message in order for laypeople to understand it, hence making the meme less accurate but more effective at getting people interested (in which case, they’ll hopefully continue researching until they have a more accurate understanding). Also, (3) even if two memes contradict each other, using both in tandem could theoretically yield more utilons than using either one alone (or neither), though I’d expect examples to be rare.
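To make (3) concrete, here’s a toy illustration with purely hypothetical utilon values: suppose meme A alone yields 5 utilons, meme B alone yields 4, and, because they partly contradict each other, spreading both yields only 7 rather than the full 9. Then

\[
U(A \wedge B) = 7 \;>\; \max\bigl(U(A),\, U(B)\bigr) = 5 \;>\; U(\varnothing) = 0,
\]

so the contradictory pair still yields more utilons than either meme alone, even though the interference cost is real.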
By the way, I emailed Adbusters to ask whether (and how) they measure the effectiveness of their culture jamming campaigns. I’ll let you know when I get a response.