That doesn’t doom FAI research to eternal neglect; it just means FAI outreach people need to be cognizant that they’re fighting an uphill battle toward persuasion that most outreach and marketing campaigns don’t have to face.
I agree that FAI outreach is hard PR-wise. Terminator did succeed in putting memes about an evil Skynet into public consciousness, but those memes are not really the ones we want, even if they make some people opposed to AGI research.
The kinds of memes we want to push are more complex. I also don’t know if we have actually decided which memes we want to push. I personally don’t know enough about FAI to be confident in deciding which memes benefit the agendas of MIRI and FHI. If MIRI wants more PR, the first step would be to articulate what kinds of memes it actually wants to transmit to a broader public.
My impression is that it fuels cynicism and dissent. I support its existence because I think different tactics work on different people.
But we don’t want “dissent”. Cooperation in the makerspaces where someone like Bre plays a large role is much better than dissent. Focusing on increasing dissent is pointless if you don’t provide alternatives.
In the latter case, you may cover topics you believe to be more important, but your station will be too easy for people uninterested in those topics to ignore.
In the 21st century, news sources such as the Economist and Foreign Policy, which don’t use pictures to illustrate their stories but write for a high-level audience, increased their subscriber numbers, while outlets that try to pander to everyone, like the New York Times, lost readership and had to lay off many journalists.
As far as written text goes, the people who tried to pander to everyone mostly lost in the last decade. Mainstream media lost a lot of its power over the last two decades.
Getting a book recommended by Tim Ferriss on the Random Show is much more valuable than getting a book recommended by the New York Times. A Tim Ferriss recommendation might carry more weight than anyone’s besides Oprah’s.
But even when we look at Oprah, does she try to pander to mainstream views in the usual sense of the word? I don’t think she does. A lot of people don’t like Oprah. It would be a losing move for Oprah to avoid talking about spirituality just because it makes some people hate her. If Oprah went that way, she would lose her base.
If you try to appeal to everyone you will appeal to no one.
If you don’t want to make a TV station that finances itself by selling advertising and programs crap into people, I don’t think it makes sense to even try to appeal to the mainstream when you start a new TV channel.
Oprah doesn’t need everyone to like her. She wants the largest viewership possible.
MIRI doesn’t need everyone to support it. It wants the most supporters possible.
They don’t need to appeal to everyone, but they probably should appeal to a wider audience than they currently do (evidenced by there being only ~10 FAI researchers in the world) - and a different audience requires a different presentation of the ideas in order to be optimally effective.
I don’t think pointing new people toward Less Wrong would be as effective as creating a new pitch just for “ordinary people.” Luke’s Reddit AMA, Singularity 1-on-1 interview, and Facing the Singularity ebook were pretty good for this, but it doesn’t seem like many x-risk researchers have put much energy into marketing themselves to the broader public. (To be fair, in doing so, they might do more harm than good.)
The kinds of memes we want to push are more complex. I also don’t know if we have actually decided which memes we want to push. I personally don’t know enough about FAI to be confident in deciding which memes benefit the agendas of MIRI and FHI. If MIRI wants more PR, the first step would be to articulate what kinds of memes it actually wants to transmit to a broader public.
This was one of the suggestions in my post. :) Though I’m not sure it’s possible to communicate about AI and only spread “complex” memes. I think about memes more in terms of positive and negative effects rather than in terms of their accuracy.
MIRI doesn’t need everyone to support it. It wants the most supporters possible.
I don’t think that’s the case. MIRI cares a lot more about convincing the average AI researcher than it cares about convincing the average person who watches CNN.
If you start a PR campaign about AI risk that results in bringing a lot of Luddites into the AGI debate, it might become harder for MIRI to convince AI researchers to treat UFAI as a serious risk, not easier, because the average AI researcher might notice that the Luddites oppose AGI for all the wrong reasons. He’s not a Luddite, so why should he worry about UFAI?
If you look at environmental policy, reducing mercury pollution and reducing CO2 emissions are both important priorities.
If you just look at what’s talked about in mainstream media, you will find a focus on CO2 emissions. I think few people know how good the EPA’s policy on mercury pollution under Obama has been. The EPA made a really great move to reduce mercury pollution, but it didn’t hit major headlines.
The policy wasn’t the result of a press campaign. It mostly happened silently in the background. On the other hand, the fight about CO2 emissions is very intense, and the Obama administration didn’t get much done on that front.
I think about memes more in terms of positive and negative effects rather than in terms of their accuracy.
That’s the sort of thing that’s better not said in public if you are actually serious about making an impact. If you want to say it, say it in a way that takes a full paragraph of multiple sentences and that’s not easily quoted by someone at Gawker who writes an article about you in five years, when you do have a public profile. Bonus points for using vocabulary that allows people on LW to understand the idea you’re expressing but not the average person who reads a Gawker article.
It also contradicts the goal you laid out above. You said you wanted to spread the meme: “Belief without evidence is bad.”
If you start pushing memes because you like their effects and not because they are supported by good evidence, you don’t get “Belief without evidence is bad.”
If you start a PR campaign about AI risk that results in bringing a lot of Luddites into the AGI debate, it might become harder for MIRI to convince AI researchers to treat UFAI as a serious risk, not easier, because the average AI researcher might notice that the Luddites oppose AGI for all the wrong reasons. He’s not a Luddite, so why should he worry about UFAI?
Fair enough. I still believe there could be benefits to gaining wider support but I agree that this is an area that will be mainly determined by the actions of elite specialized thinkers and the very powerful.
It also contradicts the goal you laid out above. You said you wanted to spread the meme: “Belief without evidence is bad.” If you start pushing memes because you like their effects and not because they are supported by good evidence, you don’t get “Belief without evidence is bad.”
I’m not sure I see a contradiction there. I can see that if I say things that aren’t true and people believe them just because I said them, that would be believing without evidence. But “belief without evidence is bad” doesn’t have to be true 100% of the time in order for it to be a good, safe meme to spread.
If your argument is that the spreading of “Utility > Truth” interferes with “Belief without evidence is bad” so that the two will largely cancel out, then (1) I didn’t include “Utility > Truth” on my incomplete list of safe memes precisely because I don’t think it’s safe, and (2) the argument would only be persuasive if the two memes usually interfered with each other, which I don’t think is the case. In most situations, people knowing the truth is a really desirable thing. Journalism and marketing are exceptions where it could make sense to oversimplify a message in order for laypeople to understand it, hence making the meme less accurate but more effective at getting people interested (in which case, they’ll hopefully continue researching until they have a more accurate understanding).
Also, (3) even if two memes contradict each other, using both in tandem could theoretically yield more utilons than using either one alone (or neither), though I’d expect examples to be rare.
By the way, I emailed Adbusters asking whether and how they measure the effectiveness of their culture jamming campaigns. I’ll let you know when I get a response.