What ideas? I’m pretty sure I find whatever you are talking about interesting and shiny, but I’m not quite sure what it even is.
Any ideas. For the SIAI it would probably be existential risks first and UFAI later; in general it could be rationality, evolution, atheism, or whatever.
What is the whole industry you speak of? Self-help, religion, marketing? And what additional advertising? I think that spreading the ideas is important as well; I'm just not sure what you are considering.
Advertising/marketing. Short of ashiest bus ads, I can’t think of anything that’s been done.
All I’m really suggesting is that we focus on mass persuasion in the way it has been proven to be most efficient. What that actually amounts to will depend on the target audience, and how much money is available, among other things.
Did you mean “atheist bus ads”? I actually find strict-universal-atheism to be irrational compared to agnosticism because of the SA and the importance of knowing the limits of certainty, but that’s unrelated and I digress.
I've long suspected that writing popular books on the subject would be an effective strategy for mass persuasion. Kurzweil has certainly had some success there, although he also brings some negative publicity due to his association with dubious supplements and the expensive Singularity University. It will be interesting to see how EY's book turns out and is received.
I’m actually skeptical about how far rationality itself can go towards mass persuasion. Building a rational case is certainly important, but the content of your case is even more important (regardless of its rationality).
On that note I suspect that bridging a connection to the mainstream’s beliefs and values would go a ways towards increasing mass marketability. You have to consider not just the rationality of ideas, but the utility of ideas.
It would be interesting to analyze and compare how emphasizing the hope vs. doom aspects of the message would affect popularity. SIAI at the moment appears focused on emphasizing doom, targeting a narrow market (a subset of technophile 'rationalists' or atheist intellectuals), and wooing academia in particular.
I'm interested in how you'd target mainstream liberal Christians or New Agers, for example, or even just the intellectual agnostic/atheist mainstream: the types of people who buy books such as The End of Faith and Breaking the Spell. Although a good portion of that latter demographic has probably already been exposed to The Singularity Is Near.
I’m not sure what I’d do, but I’m not a marketing expert either. (Though I am experimenting)
It would probably be possible to make a campaign that took advantage of UFAI in sci-fi. AIs taking over the world isn't a difficult concept to get across, so the ad would just need to persuade people that it's possible in reality, and that there is a group working towards a solution.
I hope you haven't forgotten our long, drawn-out discussion, as I do think that one is worthwhile.
AIs taking over the world because they have implausibly human-like cognitive architectures and they hate us or resent us or desire higher status than us is an easy concept to get across. It is also, of course, wrong. An AI immediately taking apart the world to use its mass for something else because its goal system is nothing like ours and its utility function doesn’t even have a term for human values is more difficult; because of anthropomorphic bias, it will be much less salient to people, even if it is more probable.
They have the right conclusion (plausible AI takeover) for slightly wrong reasons. "Hate [humans] or resent [humans] or desire higher status than [humans]" describes values slightly different from ours (even if they are much like the values humans often hold towards other groups).
So we can nudge people closer to the truth a bit at a time by saying, "Plus, it's unlikely that they'll value X, so even if they do something with the universe it will not have X."
But we don’t have to introduce them to the full truth immediately, as long as we don’t base any further arguments on falsehoods they believe.
If someone is convinced of the need for asteroid defense because asteroids could destroy a city, you aren’t obligated to tell them that larger asteroids could destroy all humanity when you’re asking for money. Even if you believe bigger asteroids to be more likely.
I don’t think it’s dark epistemology to avoid confusing people if they’ve already got the right idea.
Writing up high-quality arguments for your full position might be a better tool than "nudging people closer to the truth a bit at a time". Correct ideas have a scholarly appeal due to internal coherence, even if they need to overcome plenty of cached misconceptions, but making that case requires a certain critical mass of published material.
I do see value in that, but I'm thinking of a TV commercial or YouTube video with a Terminator-style look and feel, though possibly emphasizing that against a real superintelligence there would be no war.
I can't immediately think of a way to simplify "the space of all possible values is huge and human-like values are a tiny part of it", and I don't think that would resonate at all.
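If it helps, here is a toy numerical sketch of that claim (purely illustrative, with a made-up outcome space; nothing here comes from SIAI material): draw goal systems at random and see how rarely the outcome they rank highest happens to be the one humans would pick.

```python
# Toy illustration (made-up outcomes): how often does a *random* goal system
# happen to rank the "humans flourish" outcome highest?
import random

# A tiny, hypothetical space of end-states for the accessible universe.
OUTCOMES = [
    "humans flourish",
    "paperclips",
    "computronium",
    "hydrogen soup",
    "tiled with tiny smiley faces",
    "featureless heat bath",
]

def random_utility_function():
    """Assign an arbitrary utility to every outcome (no special term for human values)."""
    return {outcome: random.random() for outcome in OUTCOMES}

def favors_humans(utility):
    """True if this randomly drawn goal system happens to rank 'humans flourish' first."""
    return max(utility, key=utility.get) == "humans flourish"

trials = 100_000
hits = sum(favors_humans(random_utility_function()) for _ in range(trials))
print(f"{hits / trials:.1%} of random goal systems put 'humans flourish' on top")
# With only six coarse outcomes this is already only ~1/6; with a realistically
# huge outcome space the fraction is effectively zero.
```

Of course this is just the combinatorial point in miniature; it says nothing about how actual AI designs end up with their goals.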
A large portion of the world has already seen a Terminator flick, or The Matrix. The AI-is-an-evil-nonhuman-threat meme is already well established in the wild, to the point of caricature. The AI-is-an-innocent-child meme wasn't as popular; the film A.I. is the only example I can think of, and not many people saw it.
And even though The Terminator and The Matrix are far from realistic, they did at least get the general shape of the outcome correct: humans lose.
What would your message add over this in reach or content?
At this point the meme is almost oversaturated, and it is difficult for people to take it seriously. Did "The Day After Tomorrow" help or hinder the environmental movement?
This might not fit the Terminator motif anymore, but:
That there are people working on a way to target AI development so it reliably looks more like R2-D2, Johnny 5, Commander Data, Sonny, Marvin… OK, that's all I can think of, but just for fun I'll get these from Wikipedia:
Gort, Bishop from Aliens, almost everything from The Jetsons, the Transformers (Autobots anyway), the Iron Giant, and KITT.
And again we don’t have to explain that AI done right will be orders of magnitude more helpful than any of these.
It's interesting that friendly AI was so common in earlier decades and then this seemed to shift in the '90s.
As for AI-positive advertisements, that somehow reminded me...
Did you ever see that popular web-viral anti-banking video called Zeitgeist? In the sequel he seems to have realized that just being a critic wasn't enough, so suddenly the second part of Zeitgeist: Addendum turns into a Star-Trek-ish utopia proposal out of nowhere. I forget the name, but it is basically some architect's pseudo-singularity (AI solves all our problems and builds these beautiful new cities for us but isn't really conscious or dangerous).
I went to a screening of that film in LA, and I was amazed at how entranced the audience seemed to be. The questions at the end were pretty funny too:
"So... there won't be any money? And the AIs will build us whatever we want?"
"Yes."
"So, what if I want to turn all of Texas into my house?"
. . .
You are thinking of Jacque Fresco.
I actually come from that outside-LW viewpoint which finds the former scenario, involving "human-like cognitive architectures", vastly more probable than "AI immediately taking apart the world to use its mass for something else because its goal system is nothing like ours and its utility function doesn't even have a term for human values".
So it could be that your viewpoint is the more likely one, and the rest of us are suffering from "anthropomorphic bias", but it also could be that anthropomorphic bias is in fact a self-fulfilling prophecy.
I don’t see how. We could get something like that if we get uploads before AGI, but that would really be more like an enhanced human taking over the world. Aside from that, where’s the self-fulfilling prophecy? If people expect AGIs to exhibit human-like emotions and primate status drives and go terribly wrong as a result, why does that increase the chance that the creators of the first powerful AGI will build human-like emotions and primate status drives into it?
Actual uploads are the far end point along a continuum of human-like cognitive architectures, and they have the additional complication that scanning technology lags far behind electronics. You don't need uploads for anthropomorphic AI; you just need to loosely reverse-engineer the brain.
Also, "human-like cognitive architectures" covers a wide spectrum that does not require human-like emotions or primate status drives; consider the example of alexithymia.
Understanding human languages is a practical prerequisite for any AI to reach high levels of intelligence, and the implied anthropomorphic cognitive capacities required for true linguistic thinking heavily constrain the design space.
The self-fulfilling prophecy is that anthropomorphic AI will be both easier for us to create and more useful for us—so the bias is correct in a self-reinforcing manner.