They have the right conclusion (plausible AI takeover) for slightly wrong reasons. "Hate [humans] or resent [humans] or desire higher status than [humans]" describes values only slightly different from ours (even if they resemble the values humans often hold towards other groups).
So we can gradually nudge people closer to the truth a bit at a time by saying “Plus, it’s unlikely that they’ll value X, so even if they do something with the universe it will not have X”
But we don’t have to introduce them to the full truth immediately, as long as we don’t base any further arguments on falsehoods they believe.
If someone is convinced of the need for asteroid defense because asteroids could destroy a city, you aren't obligated to tell them that larger asteroids could destroy all of humanity when you're asking for money, even if you believe the bigger asteroids to be more likely.
I don’t think it’s dark epistemology to avoid confusing people if they’ve already got the right idea.
So we can gradually nudge people closer to the truth a bit at a time by saying “Plus, it’s unlikely that they’ll value X, so even if they do something with the universe it will not have X”
Writing up high-quality arguments for your full position might be a better tool than "nudging people closer to the truth a bit at a time". Correct ideas have a scholarly appeal due to their internal coherence, even if they need to overcome plenty of cached misconceptions, but making that case requires a certain critical mass of published material.
I do see value in that, but I'm thinking of a TV commercial or YouTube video with a Terminator-style look and feel, though possibly emphasizing that against a real superintelligence, there would be no war.
I can't immediately think of a way to simplify "the space of all possible values is huge and human-like values are a tiny part of that", and I don't think that would resonate at all.
I do see value in that, but I'm thinking of a TV commercial or YouTube video with a Terminator-style look and feel.
A large portion of the world has already seen a Terminator film, or The Matrix. The AI-as-evil-nonhuman-threat meme is already well established in the wild, to the point of caricature. The AI-as-innocent-child meme wasn't as popular; the film A.I. is the only example I can think of, and not many people saw it.
And even though the Terminator and the Matrix are far from realistic, they did at least get the general shape of the outcome correct—humans lose.
What would your message add over this in reach or content?
At this point the meme is almost oversaturated, and it is difficult for people to take it seriously. Did "The Day After Tomorrow" help or hinder the environmental movement?
This might not fit the Terminator motif anymore, but:
That there are people working on a way to steer AI development so it reliably looks more like R2-D2, Johnny 5, Commander Data, Sonny, Marvin… ok, that's all I can think of, but just for fun I'll get these from Wikipedia:
Gort, Bishop from Aliens, almost everything from The Jetsons, the Transformers (Autobots anyway), the Iron Giant, and KITT.
And again we don’t have to explain that AI done right will be orders of magnitude more helpful than any of these.
It's interesting that friendly AI was so common in earlier decades and then this seemed to shift in the '90s.
As for AI-positive advertisements, that somehow reminded me…
Did you ever see that popular web-viral anti-banking video called Zeitgeist? In the sequel he seems to have realized that just being a critic wasn't enough, so suddenly the second part of Zeitgeist: Addendum turns into a Star Trek-ish utopia proposal out of nowhere. I forget the name, but it is basically some architect's pseudo-singularity (AI solves all our problems and makes these beautiful new cities for us, but isn't really conscious or dangerous).
I went to a screening of that film in LA, and I was amazed at how entranced the audience seemed to be. The questions at the end were pretty funny too:
"So… there won't be any money? And the AIs will build us whatever we want?"
“Yes”
“So, what if I want to turn all of Texas into my house?”
. . .
You are thinking of Jacque Fresco.