Changing accepted public opinion and Skynet
Michael Anissimov has put up a website called Terminator Salvation: Preventing Skynet, which will host a series of essays on the topic of human-friendly artificial intelligence. Three rather good essays are already up there, including an old classic by Eliezer. The association with a piece of fiction is probably unhelpful, but the publicity surrounding the new Terminator film probably makes it worthwhile.
What rational strategies can we employ to maximize the impact of such a site, or of publicity for serious issues in general? Most people who read this site will probably not do anything about it, or will find some reason to not take the content of these essays seriously. I say this because I have personally spoken to a lot of clever people about the creation of human-friendly artificial intelligence, and almost everyone finds some reason to not do anything about the problem, even if that reason is “oh, ok, that’s interesting. Anyway, about my new car… ”.
What is the reason underlying people’s indifference to these issues? My personal suspicion is that most people make decisions in their lives by following what everyone else does, rather than by performing a genuine rational analysis.
Consider the rise in social acceptability of making small personal sacrifices and political decisions based on eco-friendliness and your carbon footprint. Many people I know have become very enthusiastic about recycling used food containers and about unplugging appliances that use trivial amounts of power (for example, unused phone chargers and electrical equipment on standby). The real reason that people do these things is that they have become socially accepted factoids. Most people in this world, even in this country, lack the mental faculties and knowledge to understand and act upon an argument involving notions of per capita CO2 emissions; instead they respond, at least in my understanding, to the general climate of acceptable opinion, and to opinion formers such as the BBC news website, which has a whole section for “science and environment”. Now, I don’t want to single out environmentalism as the only issue where people form their opinions based upon what is socially acceptable to believe, or to claim that reducing our greenhouse gas emissions is not a worthy cause.
Another great example of socially accepted factoids (though probably a less serious one) is the detox industry—see, for example, this Times article. I quote:
“Whether or not people believe the biblical story of the Virgin birth, there are plenty of other popular myths that are swallowed with religious fervour over Christmas,” said Martin Wiseman, Visiting Professor of Human Nutrition at the University of Southampton. “Among these is the idea that in some way the body accumulates noxious chemicals during everyday life, and that they need to be expunged by some mysterious process of detoxification, often once a year after Christmas excess. The detox fad — or fads, as there are many methods — is an example of the capacity of people to believe in (and pay for) magic despite the lack of any sound evidence.”
Anyone who takes a serious interest in changing the world would do well to understand the process whereby public opinion as a whole changes on some subject, and to attempt to influence that process in an optimal way. How strongly is public opinion correlated with scientific opinion, for example? Particular attention should be paid to the history of the environmentalist movement. See, for example, MacKay’s Sustainable Energy Without the Hot Air for a great example of a rigorous quantitative analysis in support of various ways of balancing our energy supply and demand, and, for a great take on the power of socially accepted factoids, see Phone chargers—the Truth.
So I submit to the wisdom of the Less Wrong groupmind—what can we do to efficiently change the opinion of millions of people on important issues such as friendly AI? Is a site such as the one linked above going to have the intended effect, or is it going to fall upon rationally-deaf ears? What practical advice could we give to Michael and his contributors that would maximize the impact of the site? What other interventions might be a better use of his time?
Edit: Thanks to those who made constructive suggestions for this post. It has been revised—R
Your environmentalism examples raise another issue. What good is it convincing people of the importance of friendly AI if they respond with similarly ineffective actions? If widespread acceptance of the importance of the environment has led primarily to ineffective behaviours like unplugging phone chargers, washing and sorting containers for recycling, and other activities of dubious benefit, while achieving little with regard to reductions in CO2 emissions or slowing the destruction of rainforests, then why should we expect widespread acceptance of the importance of friendly AI to actually aid in the development of friendly AI?
Other than donating to the Singularity Institute, it is not even obvious to me what the average person could do to ‘further the cause’ if they were to accept its importance. There seems a fairly high chance that you would instead get useless or counterproductive responses given widespread popular acceptance.
I should stress that there have been some important bits of progress that came about as a result of changing public opinion, for example the Stern Review. The UK government is finally getting its act together with regard to a major hydroelectric project on the Severn Estuary, and we have decided to build new nuclear plants. There is a massive push to develop good energy technologies in fields such as synthetic biology, nuclear fusion research and large-scale solar, not to mention advances in wind technology, etc.
The process seems to go
Public opinion --> serious research and public policy planning --> solutions
When Eliezer writes about the “miracle” of evolved morality, he reminds me of that bit from H.M.S. Pinafore where the singers are heaping praise on Ralph Rackstraw for being born an Englishman “in spite of all temptations to belong to other nations”. We can imagine that they might have sung quite a similar song in French.
In The Salmon of Doubt, Douglas Adams employs the metaphor of a puddle of water marveling that the pothole it inhabits seems perfectly suited for it.
The Gift We Give To Tomorrow:
I dub thee the weak anthropic morality principle.
One thing that might help change people’s opinions about friendly AI is to make some progress on it. For example, if Eliezer has had any interesting ideas about how to do it in his last five years of thinking about it, it would be helpful to communicate them.
A case that is credible to a large number of people needs to be made that this is a high-probability, near-term problem. Without that it’s just a scary sci-fi movie, and frankly there are scarier sci-fi movie concepts out there (e.g. bioterror). Making an analogy with a nuclear bomb is simply not an effective argument. People were not persuaded about global warming by a “greenhouse” analogy. That sort of thing creates a dim level of awareness, but “AI might kill us” is not some new idea; everybody is already aware of it—just as they are aware that a meteor might wipe us out, aliens might invade, or an engineered virus or new life form could kill us all. Which of those things get attention from policy-makers and their advisers, and why?
Besides the weakness of relying on analogy, this analogy isn’t even all that good—it takes concerted, targeted technical effort to make a nuclear reaction FOOM fast enough to “explode”. It’s a reasonably simple matter to make it FOOM slowly and provide us with electrical power to enhance our standard of living.
If the message is “don’t build Skynet”, funding agencies will say “ok, we won’t fund Skynet” and AI researchers will say “I’m not building Skynet”. If somebody is working on a dangerous project, name names and point fingers.
Give a chain of reasoning. If some of you rationalists have concluded that there is a significant probability of an AI FOOM coming soon, all you have to do is explicate the reasoning and probabilities involved. If your conclusion is justified, if your ratiocination is sound, you must be able to explicate it in a convincing way, or else how are you so confident in it?
This isn’t really an “awareness” issue—because it’s scary and in some sense reasonable it makes a great story, thus hour after hour of TV, movie blockbusters stretching back through decades, novel after novel after novel.
Make a convincing case and people will start to be convinced by it. I know you think you have already, but you haven’t.
This bears repeating:
(I think your comment contained a couple of unrelated pieces that would have been better in separate comments.)
I disagree strongly. World atmospheric carbon dioxide concentration is still increasing; indeed, the rate at which it is increasing is increasing (i.e. CO2 output per annum is increasing), so antiprogress is being made on the global warming problem—yet people still think it’s worth putting more effort into, rather than simply giving up.
Anthropogenic global warming is a low-probability, long-term problem. At least the most SERIOUS consequences of anthropogenic global warming are long term (e.g. 2050 plus) and low probability (though no scientist would put a number on the probability of human extinction through global warming).
Personally I think that governmental support for reducing fossil fuel consumption is at least partly due to energy supply concerns, both in terms of abundance (oil discovery is not increasing) and politics (we don’t want to be reliant on Russian gas).
From this view we should still try to transition away from most fossil fuel consumption, apart from perhaps coal… and it makes sense to ally with the people concerned with global warming to get the support of the populace.
The global warming threat is an essential reason for not using fossil fuels. There is a lot of coal and a lot of tar sand available. If we didn’t care about long-term problems, we’d just use those.
Coal can be nasty for reasons other than greenhouse gases. How much of the coal is low-sulphur?
I don’t see tar sand as a total solution; part of the energy mix, sure. But we still need to pursue alternatives.
I think this is a convincing case but clearly others disagree. Do you have specific suggestions for arguments that could be expanded upon?
Steven, I’m a little surprised that the paper you reference convinces you of a high probability of imminent danger. I have read this paper several times, and would summarize its relevant points as follows:
1. We tend to anthropomorphise, so our intuitive ideas about how an AI would behave might be biased. In particular, assuming that an AI will be “friendly” because people are more or less friendly might be wrong.
2. Through self-improvement, AI might become intelligent enough to accomplish tasks much more quickly and effectively than we expect.
3. This super-effective AI would have the ability (perhaps just as a side effect of its goal attainment) to wipe out humanity. Because of the bias in (1) we do not give sufficient credibility to this possibility when in fact it is the default scenario unless the AI is constructed very carefully to avoid it.
4. It might be possible to do that careful construction (that is, create a Friendly AI), if we work hard on achieving that task. It is not impossible.
The only arguments for the likelihood of imminence, despite little to no apparent progress toward a machine capable of acting intelligently in the world and rapidly rewriting its own source code, are:
A. a “loosely analogous historical surprise”—the above-mentioned nuclear reaction analogy.
B. the observation that breakthroughs do not occur on predictable timeframes, so it could happen tomorrow.
C. we might already have sufficient prerequisites for the breakthrough to occur (computing power, programming productivity, etc.)
I find these points all reasonable enough and imagine that most people would agree. The problem is going from this set of “mights” and suggestive analogies to a probability of imminence. You can’t expect to get much traction for something that might happen someday; you have to link from possibility to likelihood. That people make this leap without saying how they got there is why observers refer to the believers as a sort of religious cult. Perhaps the case is made somewhere, but I haven’t seen it. I know that Yudkowsky and Hanson debated a closely related topic on Overcoming Bias at some length, but I found Eliezer’s case to be completely unconvincing.
I just don’t see it myself… “Seed AI” (as one example of a scenario sketch) was written almost a decade ago and contains many different requirements. As far as I can see, none of them has seen any meaningful progress in the meantime. If multiple or many breakthroughs are necessary, let’s see one of them for starters. One might hypothesize that just one magic-bullet breakthrough is necessary, but that sounds more like a paranoid fantasy than a credible scientific hypothesis.
Now, I’m personally sympathetic to these ideas (check the SIAI donor page if you need proof), and if the lack of a case from possibility to likelihood leaves me cold, it shouldn’t be surprising that society as a whole remains unconvinced.
Given the stakes, if you already accept the expected utility maximization decision principle, it’s enough to become convinced that there is even a nontrivial probability of this happening. The paper seems to be adequate for snapping the reader’s mind out of conviction in the absurdity and impossibility of dangerous AI.
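(A toy illustration of that principle, with entirely made-up numbers: under expected utility maximization, a small probability multiplied by astronomical stakes can dominate the comparison. The probabilities, utilities, and costs below are illustrative assumptions, not anyone's actual estimates.)

```python
# Toy expected-utility comparison; every number here is a hypothetical placeholder.
p_catastrophe = 0.01        # assumed probability of an unfriendly-AI disaster
p_mitigation_works = 0.1    # assumed chance that serious FAI work averts it
value_of_future = 1e15      # assumed utility of humanity's future (arbitrary units)
cost_of_mitigation = 1e9    # assumed cost of the research effort (same units)

expected_gain = p_catastrophe * p_mitigation_works * value_of_future - cost_of_mitigation
print(expected_gain)        # positive under these assumptions: the small probability dominates
```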
The stakes on the other side of the equation are also the survival of the human race.
Refraining from developing AI unless we can formally prove it is safe may also lead to extinction, if it reduces our ability to cope with other existential threats.
“Enough” is ambiguous; your point is true but doesn’t affect Vladimir’s if he meant “enough to justify devoting a large amount of your attention (given the current distribution of allocated attention) to the risk of UFAI hard takeoff”.
Hmm, I was thinking more of being convinced there’s a “significant probability”, for a definition of “significant probability” that may be much lower than the one you intended. I’m not sure if I’d also claim the paper convinces me of a “high probability”. Agreed that it would be more convincing to the general public if there were an argument for that. I may comment more after rereading.
Apparently you and others have some sort of estimate of a probability distribution over time that leads you to be alarmed enough to demand action. Maybe it’s, say, “1% chance in the next 20 years of hard takeoff” or something like that. Say what it is and how you got to it from “conceivability” or “non-impossibility”. If there is a reasoned link that can be analyzed producing such a result, it is no longer a leap of faith; it can be reasoned about rationally and discussed in more detail. Don’t get hung up on the exact number; use a qualitative measure if you like. The point is how you got there.
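(One hedged example of the kind of explicit arithmetic being asked for: if you assume a constant yearly hazard rate $p$, then the chance of at least one hard takeoff within $n$ years is

$$P(\text{takeoff within } n \text{ years}) = 1 - (1 - p)^n,$$

so a figure like “1% in the next 20 years” corresponds to an assumed yearly rate of roughly $p \approx 0.0005$. The constant-rate assumption is itself exactly the kind of premise that would need to be argued for.)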
I am not attempting to ridicule hard takeoff or Friendly AI, just giving my opinion about the thesis question of this post: “what can we do to efficiently change the opinion of millions of people...”
Hanson’s position was that something like a singularity will occur due to smarter-than-human cognition, but he differs from Eliezer by claiming that it will be a distributed intelligence analogous to the economy: trillions of smart human uploads and narrow AIs exchanging skills and subroutines.
He still ultimately supports the idea of a fast transition, based on historical transitions. I think Robin would say that something midway between 2 weeks and 20 years is reasonable. Ultimately, if you think Hanson has the stronger case, you’re still talking about a fast transition to superintelligence that we need to think about very carefully.
Indeed:
See also Economic Growth Given Machine Intelligence:
My current thinking is that AI might be in the space of things we can’t understand.
While we are improving our knowledge of the brain, no one is coming up with simple theories that explain the brain as a whole, rather than as bits and pieces with no coherent design that we can see.
Under this scenario AI is still possible, but if we do make it, it will be done by semi-blindly copying the machinery we have, with random tweaks. And if it does start to self-improve, it will be doing so with random tweaks only, as it will share our inability to comprehend itself.
Why does AI design need to have anything to do with the brain? (Third Alternative: ab initio development based on a formal normative theory of general intelligence, not a descriptive theory of human intelligence, comprehensible even to us to say nothing of itself once it gets smart enough.)
(Edit: Also, it’s a huge leap from “no one is coming up with simple theories of the brain yet” to “we may well never understand intelligence”.)
A specific AI design need be nothing like the design of the brain. However, the brain is the only object we know of in mind space, so having difficulty understanding it is evidence, although very weak evidence, that we may have difficulty understanding minds in general.
We might expect it to be a special case as we are trying to understand methods of understanding, so we are being somewhat self-referential.
If you read my comment you’ll see I only raised it as a possibility, something to try and estimate the probability of, rather than necessarily the most likely case.
What would you estimate the probability of this scenario being, and why?
There might be formal proofs, but they probably rely on the definition of things like what understanding is. I’ve been trying to think of mathematical formalisms to explore this question, but I haven’t come up with a satisfactory one yet.
Have you looked at AIXI?
It is trivial to say one AIXI can’t comprehend another instance of AIXI, if by comprehend you mean form an accurate model.
AIXI expects the environment to be computable and is itself incomputable. So if one AIXI comes across another, it won’t be able to form a true model of it.
However, I am not sure of the value of this argument, as we expect intelligence to be computable.
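(For reference, Hutter defines AIXI roughly as the agent that picks actions by an expectimax over all computable environments, weighted by a Solomonoff-style prior:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where the $a$’s are actions, the $o$’s and $r$’s are observations and rewards, $U$ is a universal Turing machine, and $\ell(q)$ is the length of program $q$. The inner sum over all programs consistent with the interaction history is what makes AIXI incomputable, and is why a second AIXI cannot fit inside the first one’s computable model class.)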
Seems plausible. However, under this model there’s still room for self-improvement using something like genetic algorithms; that is, it could make small, random tweaks, but find and implement the best ones in much less time than humans possibly could. Then it could still be recursively self-improving.
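(A minimal sketch, in Python, of the “random tweak, test, keep the best” loop described above. This is ordinary hill climbing over a placeholder parameter vector and fitness function, purely to make the idea concrete; it is not anyone’s actual self-improvement proposal.)

```python
import random

def mutate(params):
    """Apply a small random tweak to one parameter (placeholder mutation operator)."""
    tweaked = list(params)
    i = random.randrange(len(tweaked))
    tweaked[i] += random.gauss(0, 0.1)
    return tweaked

def fitness(params):
    """Placeholder performance measure; a real system would benchmark itself here."""
    return -sum(x * x for x in params)

# Keep a tweak only if it tests as an improvement.
current = [random.uniform(-1, 1) for _ in range(10)]
for _ in range(1000):
    candidate = mutate(current)
    if fitness(candidate) > fitness(current):
        current = candidate
```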
A lot of us think this scenario is much more likely. Mostly those on the side of Chaos in a particular Grand Narrative. Plug for The Future and Its Enemies—arguably one of the most important works in political philosophy from the 20th century.
That is much weaker than the type of RSI that is supposed to cause FOOM. For one, you are only altering software, not hardware; and secondly, I don’t think a system that replaces itself with a random variation, even one that has been tested, will necessarily be better if it doesn’t understand itself. Random alterations may cause madness, or introduce bugs or other problems a long time after the change.
Note: Deliberate alterations may cause madness or introduce bugs or other problems a long time after the change.
The idea with Eliezer-style RSI is to make only formally proven-good alterations.
A couple of things come to mind. The first is that we have to figure out how much we value attention from people who are not initially rational about it (as this determines how much Dark Arts to use). I can see the extra publicity as helping, but if it gets the cause associated with “nut jobs”, then it may pass under the radar of the rational folk and do more harm than good.
The other thing that comes to mind is that this looks like an example of people using “far” reasoning. Learning how to get people to analyze the situation in “near” mode seems like it would be a very valuable tool in general (if anyone has any ideas, make a top level post!)
To give a couple examples, I recently talked to my roommate about the risks of AI, and on the surface he agreed it was a big deal. However, he didn’t make the connection “maybe cancer fundraisers aren’t the best way to spend my charity time”, and I don’t think he’ll actually do anything differently.
I talked to another friend about the same thing, and it scared the living crap out of him. He asked “How can you go on living like normal instead of working to fix it!?”. So far, so good. He looked like he was using the “near” method of thinking about it.
The catch, though, is that his conclusion was “Since there’s a small probability of me making the difference, I’d prefer to ‘stick my head in the sand’ and forget I heard this.” This might actually be the rational response for someone who 1) doesn’t care about more than a small group of people, and 2) defects on a true PD.
To recruit people like this it seems like we’d need to turn it into an iterated prisoner’s dilemma. If you caught as much flak for not donating to the FAI cause as you do for not recycling, then a lot more people would donate at least something.
That thought has occurred to me too… but I guess ignoring that voice is part of what makes (those of us who do) into something slightly more than self-interested little apes.
If “AI will be dangerous to the world” became a socially accepted factoid, you would get it spilling over in all sorts of unintended fashions. It might not be socially acceptable to use Wolfram Alpha, as it is too AI-ish.
It already is a socially accepted factoid. People are afraid of AI for no good reason. (As for Wolfram Alpha, it’s at about the same level as ALICE. I’m getting more and more convinced that Stephen Wolfram has lost it...)
The simplest way to change public opinion is manually. Skynet seems like an adequate solution to me.
The biggest problem with the movies, besides the inconsistencies as to whether causality is changeable or not, is why Skynet bothers dealing with the humans once it’s broken their ability to prevent it from launching itself into space. Sending a single self-replicating seed factory to the Moon is what a reasonable AI would do.
The Terminator movies exploit the primal human fear of being exterminated by a rival tribe, putting AI in the role once filled by extraterrestrials: outsiders with great power who want to destroy all of ‘us’. The pattern is tedious and predictable.
As William has pointed out, AI running amok is already a standard trope. In fact, Asimov invented his three laws way back when as a way of getting past the cliche, and writing stories where it wasn’t a given that the machine would turn on its creator. But the cliche is still alive and well. Asimov himself had the robots taking over in the end, in “That Thou Art Mindful of Him” and the prequels to the “Foundation” trilogy.
The people that the world needs to take FAI seriously are the people working on AI. That’s what, thousands at the most? And surely they have all heard of the issue by now. What is their view on it?
I’ve got the February issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence lying on my coffee table. Let’s eavesdrop on what the professionals are up to:
Offline loop investigation for handwriting analysis
Robust Face Recognition via Sparse Representation
Natural Image Statistics and Low-Complexity Feature Selection
An analysis of Ensemble Pruning Techniques Based on Ordered Aggregation
Geometric Mean for Subspace Selection
Semisupervised Learning of Hidden Markov Models via a Homotopy Method
Outlier Detection with the Kernelized Spatial Depth Function
Time Warp Edit Distance with Stiffness Adjustment for Time Series Matching
Framework for Performance Evaluation of Face, Text, and Vehicle Detection and Tracking in Video: Data, Metrics, and Protocol
Information Geometry for Landmark Shape Analysis: Unifying Shape Representation and Deformation
Principal Angles separate Subject Illumination spaces in YDB and CMU-PIE
High-precision Boundary Length Estimation by Utilizing Gray-Level Information
Statistical Instance-Based Pruning in Ensembles of Independent Classifiers
Camera Displacement via Constrained Minimization of the Algebraic Error
High-Accuracy and Robust Localization of Large Control Markers for Geometric Camera Calibration
These researchers are writing footnotes to Duda and Hart. They are occupying the triple point between numerical methods, applied mathematics, and statistics. It is occasionally lucrative. It paid my wages when I was applying these techniques to look-down capability for pulse-Doppler radar.
The basic architecture of all this research is that the researchers have a monopoly on thinking, mathematics, and writing code, and the computers crunch the numbers, both during research and later in a free-standing but closed application. There is nothing foomy here.
As steven0461 has pointed out, this may well make it less likely to be taken seriously.
Knowing about FAI might lead people concerned with existential risk, or more generally futurism or doing the maximally good thing, to become newly interested in AI. (It worked for me.)
Nope, almost no-one in my AI research department has heard of the issue.
Furthermore, in order for people to be funded to do research on FAI, the people who hold the purse strings have to think it is important. Since politicians are elected by Joe Public, you have to make Joe Public understand.
As far as people’s preferences and decisions are concerned, there are trusted tools and trusted analysis methods. People use them because they know, indirectly, that their output will be better than what their intuitive gut feeling outputs. And thus people use complicated engineering practices to build bridges, instead of just drawing a concept and proceeding to fill in the image with building material.
But these tools are rarely used to refactor people’s minds. People may accept conclusions chosen by experts, and allow them to install policies, because they know this is a prudent thing to do, but at the same time they may refuse to accept the fundamental assumptions on which the experts’ advice is based. This is a non-technical perspective, Pirsig’s “romantic” attitude.
The only tools that get adopted are those that demonstrate practical results, independently of their internal construction. Adoption of new tools doesn’t change minds, and thus doesn’t allow the adoption of new tools that were refused before. This is a slower process, driven more by social norms than by the support of reason as a tool, which is too technical to become part of people’s minds and to help in their decisions.
Argument will only convince smart people who are strong enough in rationality to allow their reason to reshape their attitudes. For others, the tools of blind social norm and dark arts are the only option, where practical effect appears only in the endgame.
If dark arts are allowed, it certainly seems like hundreds of millions of dollars spent on AI-horror movies like Terminator are a pretty good start. Barring an actual demonstration of progress toward AI, I wonder what could actually be more effective...
Sometime reasonably soon, getting real actual physical robots into the uncanny valley could start to help. Letting imagination run free, I imagine a stage show with some kind of spookily-competent robot… something as simple as competent control of real (not CGI) articulated robots would be rather scary… for example, suppose that this robot does something shocking like physically taking a human confederate and nailing him to a cross, blood and all. Or something less gross, heh.
Interesting. I wouldn’t want to rule out the “dark arts”, i.e. highly non-rational methods of persuasion.
Robotics is not advanced enough for a robot to look scary, though military robotics is getting there fast.
A demonstration involving the very latest military robots could have the intended effect in perhaps 10 years.
...
There’s a difference between a direct lie and not-quite-rational persuasion. I wouldn’t tell a direct lie about this kind of thing. Those people who would most be persuaded by a gory demo of robots killing people aren’t clever enough to research stuff on the net.
What’s “rational persuasion”, anyway? Is a person supposed to already possess an ability to change their mind according to an agreed-to-be-safe protocol? Teaching rationality and then giving your complex case would be more natural, but isn’t necessarily an option.
The problem is that it’s possible to persuade that person of many wrong things, that the person isn’t safe from falsity. But if whatever action you are performing causes them to get closer to the truth, it’s a positive thing to do in their situation, one selected among many negative things that could be done and that happen habitually.
You know, sci-fi that took the realities of mindspace somewhat seriously could be helpful in raising the sanity waterline on AGI; a well-imagined clash between a Friendly AI and a Paperclipper-type optimizer (or just a short story about a Paperclipper taking over) might at least cause readers to rethink the Mind Projection Fallacy.
Won’t work: the clash will only happen in their minds (you don’t fight a war if you know you’ll lose; you can just proceed directly to the final truce agreement). Eliezer’s Three Worlds Collide is a good middle ground, with non-anthropomorphic aliens of human-level intelligence allowing a familiar kind of action to be described.
IAWYC, but one ingredient of sci-fi is the willingness to sacrifice some true implications if it makes for a better story. It would be highly unlikely for a FAI and a Paperclipper to FOOM at the same moment with comparable optimization powers such that each thinks it gains by battling the other, and downright implausible for a battle between them to occur in a manner and at a pace comprehensible to the human onlookers; but you could make some compelling and enlightening rationalist fiction with those two implausibilities granted.
Of course, other scenarios can come into play. Has anyone even done a good Paperclipper-takeover story? I know there’s sci-fi on ‘grey goo’, but that doesn’t serve this purpose: readers have an easy time imagining such a calamity caused by virus-like unintelligent nanotech, but often don’t think a superhuman intelligence could be so devoted to something of “no real value”.
I’ve seen some bad ones:
http://www.goingfaster.com/term2029/skynet.html
That’s… the opposite of what I was looking for. It’s pretty bad writing, and it’s got the Mind Projection Fallacy written all over it. (Skynet is unhappy and worrying about the meaning of good and evil?)
Yeah, like I said, it is pretty bad. But imagine rewriting that story to make it more realistic. It would become:
Ironically, a line from the original Terminator movie is a pretty good intuition pump for Powerful Optimization Processes:
It can’t be bargained with. It can’t be ‘reasoned’ with. It doesn’t feel pity or remorse or fear and it absolutely will not stop, ever, until [it achieves its goal].
Shakey the Robot was funded by DARPA; according to my dad, the grant proposals were usually written in such a way as to imply robot soldiers were right around the corner...in 1967. So it only took about 40 years.
My sense is that most people aren’t concerned about Skynet for the same reason that they’re not concerned about robots, zombies, pirates, faeries, aliens, dragons, and ninjas. (Homework: which of those are things to worry about, and why/why not?)
Also, this article could do without the rant against environmentalism and your roommate. Examples are useful to understanding one’s main point, but this article seems to be overwhelmed by its sole example.
It often seems to be the case that some real-world possibility gets reflected in some fictional trope with somewhat different properties, and then when the fictional trope gets more famous you can no longer talk about the real-world possibility because people will assume you must be a stupid fanperson mistaking the fictional trope for something real. Very annoying.
Also, most people haven’t seen any of the Terminator movies or TV series. And most people have never thought about the possibility of recursively self-improving AI. But these might be in a sense only trivially true.
Thanks, Thom, I’ve edited my article to take into account this critique
You could probably find a clearer title. Naming an article after an example doesn’t seem like a good idea to me. Probably the topic changed while you were writing and you didn’t notice. (I claim that it is a coherent essay on public opinion.)
Yes, it is important to know how public opinion changes. But before you try to influence it, you should have a good idea of what you’re trying to accomplish and whether it’s possible. Recycling and unplugging gadgets are daily activities. That continuity is important to making them popular. Is it possible to make insulating houses fashionable?
Thanks, I’ve implemented this
Blah, my name is “Anissimov”, please spell it correctly!
“The Internet” is probably an interesting case study. It has grown from a very small niche product into a “fundamental right” in a relatively short time. One of the things that probably helped this shift is showing people what the internet could do for them—it became useful. This is understandably a difficult point on which to sell FAI.
Now that that surface analogy is over, how about the teleological analogy? In a way, environmentalism assumes the same mantle as FAI—“do it for the children”. Environmentalism has plenty of benefits over FAI—it has fuzzier mascots and more imminent problems. Terminators aren’t attacking, but more and more species are becoming extinct.
Environmentalism is still of interest here through the subtopic of climate change. Climate change already deals with some of the problems existential risk at large deals with—its veracity is argued, its importance is disputed, and the math is poorly understood. The next generation serves as a nice fuzzy mascot, and the danger is of the dramatically helpful, ever-inexorably-closer variety. Each day you don’t recycle, the earth is in more danger, &c. (The greater benefit of a creeping-death, “zombie” danger may be that it negates the need for a mathematical understanding of the problem. It becomes “obvious” that the danger is real if it gets closer every day.)
How can you convince people to solve a harder problem once, rather than every problem that crops up?
One central problem is that people are constantly deluged with information about incipient crises. The Typical Person cannot be expected to understand the difference in risk levels indicated by UFAI vs. bioterror vs. thermonuclear war vs. global warming, and this is not even a disparagement of the Typical Person. These risks are just impossible to estimate.
But how can we deal with this multitude of potential disasters? Each disaster has some low probability of occurring, but because there are so many of them (swine flu, nuclear EMP attacks, grey goo, complexity… ) we are almost certainly doomed, unless we do something clever. Even if we take preventative measures sufficient to eliminate the risk of one problem (presumably at some enormous expense), we will just get smashed by the next one on the list.
Meta-strategy: find strategies that help defend against all sources of existential risk simultaneously. Candidates:
moon base
genetic engineering of humans to be smarter and more disease-resistant
generic civilizational upgrades, e.g. reducing traffic and improving the economy
simplification. There is no fundamental reason why complexity always has to increase. Almost everything can be simplified: the law, the economy, software.
I agree that an Unfriendly AI could be a complete disaster for the human race. However, I really don’t expect to see an AI that goes FOOM during my lifetime. To be frank, I think I’m far more likely to be killed by a civilization-threatening natural disaster, such as an asteroid impact, supervolcano eruption, or megatsunami, than by an Unfriendly AI. As far as I’m concerned, worrying about Unfriendly AI today is like worrying about global warming in 1862, shortly after people began producing fuel from petroleum. Yes, it’s a real problem that will have to be solved—but the people alive today aren’t going to be the ones that solve it.
Why is this post being voted negative? It’s an important problem for plenty of causes of interest to many rationalists, and is well worth discussing here.
Agreed, but the part about environmentalism seems like a mindkill magnet that would have been better left out. If you ask me, the recent discussions about libertarianism and gender already represented a dangerous slide in the wrong direction.
Politics is one thing, but if we can’t discuss the arithmetic of environmentalism, we should give up.
We can discuss the arithmetic of environmentalism, but we should avoid speaking of positions (like environmentalism) that significant numbers of people here are likely to identify with in terms of “propaganda” making people do “retarded” things, especially when this is tangential to the main point, and even when (minus some of the connotations) this is accurate.
“Propaganda” is a fraught term, but it is important to think about these things in terms of the irrational propagation of memes. I think that was the main point, but the title didn’t make it clear. The coining of the phrase “reduce, reuse, recycle” was an act of propaganda, but it wasn’t good enough: it didn’t lead to reduction or reuse. It is important to know the failure modes. (Or maybe it was a great idea, promoting recycling as “the least I can do,” definitely falling short of ideal, but only if it was not possible to push it further.)
Maybe it would be easier to discuss an example where I agree with the goal:
The rate of car fatalities in the US has dramatically decreased over the past 50 years, seemingly due to government propaganda for seat belts and against drunk driving. Partly this has been about influencing individuals, but it seems to have changed the social acceptability of drunk driving, which was surely a more effective route.
Yup. Post has been edited to take this into account.
The importance of a topic doesn’t give a free pass to posts on it.
Maybe it would be better if important topics were held to higher standards, but that sounds hard to implement because there’s too much to communicate about the standards. Voting certainly doesn’t communicate it. In particular, I fear that people would hesitate to publish adequate posts.
Instead, post early and often.