Applause Lights
At the Singularity Summit 2007, one of the speakers called for democratic, multinational development of artificial intelligence. So I stepped up to the microphone and asked:
Suppose that a group of democratic republics form a consortium to develop AI, and there’s a lot of politicking during the process—some interest groups have unusually large influence, others get shafted—in other words, the result looks just like the products of modern democracies. Alternatively, suppose a group of rebel nerds develops an AI in their basement, and instructs the AI to poll everyone in the world—dropping cellphones to anyone who doesn’t have them—and do whatever the majority says. Which of these do you think is more “democratic,” and would you feel safe with either?
I wanted to find out whether he believed in the pragmatic adequacy of the democratic political process, or if he believed in the moral rightness of voting. But the speaker replied:
The first scenario sounds like an editorial in Reason magazine, and the second sounds like a Hollywood movie plot.
Confused, I asked:
Then what kind of democratic process did you have in mind?
The speaker replied:
Something like the Human Genome Project—that was an internationally sponsored research project.
I asked:
How would different interest groups resolve their conflicts in a structure like the Human Genome Project?
And the speaker said:
I don’t know.
This exchange puts me in mind of a quote from some dictator or other, who was asked if he had any intentions to move his pet state toward democracy:
We believe we are already within a democratic system. Some factors are still missing, like the expression of the people’s will.
The substance of a democracy is the specific mechanism that resolves policy conflicts. If all groups had the same preferred policies, there would be no need for democracy—we would automatically cooperate. The resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an artificial intelligence, but it has to be something. What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind?
I think it means that you have said the word “democracy,” so the audience is supposed to cheer. It’s not so much a propositional statement or belief, as the equivalent of the “Applause” light that tells a studio audience when to clap.
This case is remarkable only in that I mistook the applause light for a policy suggestion, with subsequent embarrassment for all. Most applause lights are much more blatant, and can be detected by a simple reversal test. For example, suppose someone says:
We need to balance the risks and opportunities of AI.
If you reverse this statement, you get:
We shouldn’t balance the risks and opportunities of AI.
Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information.
There are plenty of legitimate reasons for uttering a sentence that would be uninformative in isolation. “We need to balance the risks and opportunities of AI” can introduce a discussion topic; it can emphasize the importance of a specific proposal for balancing; it can criticize an unbalanced proposal. Linking to a normal assertion can convey new information to a bounded rationalist—the link itself may not be obvious. But if no specifics follow, the sentence is probably an applause light.
I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:
I am here to propose to you today that we need to balance the risks and opportunities of advanced artificial intelligence. We should avoid the risks and, insofar as it is possible, realize the opportunities. We should not needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm. We should respect the interests of all parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies accrue to as many individuals as possible, rather than being restricted to a few. We must try to avoid, as much as possible, violent conflicts using these technologies; and we must prevent massive destructive capability from falling into the hands of individuals. We should think through these issues before, not after, it is too late to do anything about them . . .
You have, I think, come upon the essence of modern political speeches.
I was going to say this as well. Your last paragraph here is like every presidential speech that I’ve ever watched.
The democracy booster probably meant that people with little political power should not be ignored. And that’s not an empty statement; people with little political power are ignored all the time.
Actually, that seems to be an extremely empty statement. “Having little political power” seems to imply, and be implied by, “being ignored.” I wouldn’t doubt that the two predicates are coextensive. Since people with little political power are, by definition, ignored, saying that people with little political power should not be ignored makes as much sense as saying that squares should be circular.
But maybe I’m not being very charitable here. You can make a shape that was once square more circular, as long as you note that the shape isn’t a square anymore. Similarly, people with little political power can, over time, gain more political power, which is a positive thing. But even if everyone had an equal amount of political power, the proposition that “people with little political power are ignored” would still be true, even if both predicates picked out the empty set.
I disagree.
Even if your interpretation of these terms were accurate, “the elements of this set should (in the future) not be elements of this set” isn’t an empty statement.
Second, a benevolent dictator (or, say, an FAI) could certainly advance the interests of a group with absolutely no say in what said dictator does.
Eeek, I think the differences in interpretations are due to the de re / de dicto distinction.
Compare the following translations of the statement “people without political power should not be ignored.”
De dicto: “It should not be the case that any person without political power is also a person who is ignored.”
De re: “If there is a person without political power, then that person should not be ignored.”
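In symbols (a sketch: read O as a deontic “it ought to be that” operator, P(x) as “x lacks political power,” and I(x) as “x is ignored”; the predicate letters are my own shorthand):

```latex
% Two readings of "people without political power should not be ignored."
% O = deontic "it ought to be that"; P(x) = "x lacks political power";
% I(x) = "x is ignored".
\[
\text{De dicto:}\quad O\bigl(\lnot\,\exists x\,(P(x) \land I(x))\bigr)
\]
\[
\text{De re:}\quad \forall x\,\bigl(P(x) \rightarrow O(\lnot I(x))\bigr)
\]
```

The difference is where the “should” takes scope: over the whole claim (de dicto), or attached to each actual person (de re).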
If the two predicates in the de re interpretation (“person without political power” and “person who is ignored”) are coextensive, and thus equivalent, we should be able to substitute like terms and derive “If there is a person without political power, then that person should not be without political power.” Given that I wanted to use the more charitable interpretation, this is the interpretation I should use, and so you’re correct :)
But look what happens to the de dicto interpretation when you substitute like terms. It turns into “It should not be the case that a person without political power is a person without political power.” This is the sort of thing I was objecting to, to begin with. But it was the wrong interpretation, and thus my error.
(Yeah, I decided to go into an extensive analysis here mainly to refine my logic skills and in case anyone else is interested. Mathematicians, I suppose, would probably not have studied the de re / de dicto distinction; mainly because I don’t see much relevance to mathematics.)
How is that de re and de dicto?
Huh! Thanks for the thorough analysis :) I’d say the most likely intent behind the statement is that people with direct political power should use it for the benefit of those without direct political power—i.e. elected officials and so forth should provide support for minority groups without much voting power. In which case your initial thought that they intended a “de dicto” reading could be right!
Did I tip my hand about being a mathematician by mentioning set theory? ;)
Alas, for most audiences I think you would find no one laughing even after an entire applause light speech.
Yeah, but you’d get lots of applause!
I tried this for my valedictory speech and I gave up after about 15 seconds due to the laughter.
My preferred method is to use long sentences, to speak slowly and seriously, with great emphasis, and to wave my hands in small circles as I speak. If you don’t speak to this audience regularly, it is also a good idea to emphasise how grateful you are to be asked to speak on such an important occasion (and it is a very important occasion...). You get bonus points for using the phrase “just so chuffed”, especially if you use it repeatedly (a technique I learned from my old headmaster, who never expressed satisfaction in any other way while giving speeches).
I also recommend this technique, this way of speaking, to anyone who wishes to wind up, by which I mean annoy or irritate, a family member. It’s quite effective when used consistently, even if you only do it for a minute or two. Don’t you agree?
Evidence: any graduation speech I’ve ever been subject to.
I remember at the AGIRI workshop in DC last year, Alexei Samsonovich talked about sorting a list of English words along two dimensions—“valence” and “arousal,” indicating some component of the emotional response which words evoke.
Maybe audiences respond to speeches by summing the emotion vectors of each word in the speech, rather than parsing sentences.
Quick test: who here is excited by the prospects of anthropic quantum computing?
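That word-summing model is easy to prototype. Here is a minimal sketch, assuming a hypothetical lexicon of per-word (valence, arousal) scores; the entries below are invented for illustration, not taken from Samsonovich’s actual data:

```python
# Toy model of the "sum the emotion vectors" hypothesis: score a speech
# by adding up per-word (valence, arousal) values, ignoring syntax entirely.

# Hypothetical lexicon; these scores are invented for illustration.
EMOTION_LEXICON = {
    "balance": (0.4, 0.2),
    "risks": (-0.5, 0.6),
    "opportunities": (0.7, 0.5),
    "panic": (-0.8, 0.9),
    "wisely": (0.6, 0.1),
}

def emotion_score(speech):
    """Sum (valence, arousal) over all recognized words in the speech."""
    valence = arousal = 0.0
    for word in speech.lower().split():
        v, a = EMOTION_LEXICON.get(word.strip(".,;:!?"), (0.0, 0.0))
        valence += v
        arousal += a
    return valence, arousal

print(emotion_score("We must balance the risks and opportunities wisely."))
# ~ (1.2, 1.4): net-positive valence and high arousal -- applause-worthy
# under this model, even though the sentence asserts almost nothing.
```

On this model, an applause-light speech scores well precisely because scoring never parses the sentences.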
What I find interesting is that there are some obvious parallels between applause lights and Barnum statements—so named after P.T. Barnum.
Barnum statements are essentially statements which anyone can apply to themselves as true, which essentially say nothing, and which feel unique to each individual hearing themselves described that way.
Barnum statements are a stock-in-trade of cold readers such as mentalists and psychics. It seems to me that applause lights are nothing more than the abstract, impersonal version of the same phenomenon; or perhaps the same phenomenon put to a rhetorical and ideological application.
Seems like Barnum statements and applause lights are both vague, hidden tautologies.
I’d think that it came out of a random abstract generator like snarxiv.
Anthropic quantum computing? If I were flipping through the channels and heard that phrase uttered by someone who looked like he was giving a speech, I would be immediately interested in learning more and would definitely stay on the channel. I have no idea what the phrase means, but my immediate guesses are indeed exciting.
Such a speech could, in theory, perform a “bringing to attention” function. Bringing things to attention can convey any kind of knowledge; it is just an inefficient form. The abnormality of that speech lies in its utter inefficiency, not in a lack of content. People can bear such talk because similar inefficiency is present in other talks in different forms. Inefficiency also makes it much easier to evade certain topics through obfuscation.
I’m pretty sure that many people and organizations routinely DO argue that “we shouldn’t balance the risks and opportunities of X.” In ethics, deontological systems claim this. In policy, environmentalists are the first example that springs to mind, though they have been getting substantially better in the last few years. Radical pacifists like Gandhi have often been praised for asserting that people should not balance the risks and opportunities of war. More broadly, displaying this attitude seems to be necessary for anyone attempting to portray themselves as extraordinarily “virtuous” as virtue is normally understood, at least in our broadly Christian-derived civilization.
I actually think it would be a good idea to try presenting a speech made entirely of applause lights, but I think it has been done. The Gentle Art of Verbal Self-Defense claims in its appendix that such a speech has been written and presented, to applause, on a variety of topics. It seems to me, though, that the speech you were proposing above was actually an endorsement of a reasonable set of meta-policies which are in fact generally not engaged in, and was thus substantive, not empty, so I’m not sure it counts.
Statements of the sort “we shouldn’t balance the risks and opportunities of X” are substantive only where X is closely related to a fundamental principle or a terminal goal. Since nobody really wants superhuman AGI for its own sake (in fact, it’s just the opposite: it’s the ultimate instrumental goal), “we should balance the risks and opportunities of AGI” is an applause light.
David’s comment that we shouldn’t ignore people with little political power is a bit problematic. People who are not ignored in a political process have, by definition, some political power; whoever is ignored lacks power. So the meaning becomes “people who are ignored are ignored all the time.” The only way to handle it is to never ignore anybody on anything. So please tell me your views on whether Solna municipality in Sweden should spend more money on the stairs above the station, or on a traffic light—otherwise the decision will not be fully democratic.
I wonder if the sensitivity to applause lights is different in different cultures. When I lectured in Madrid I found my own and several friends’ speeches falling relatively flat, despite being our normally successful “standard speeches.” But a few others got roaring responses at the applause lights—we were simply not turning them on brightly enough. The reward of roaring applause is of course enough to bias a speaker to start pouring on more applause lights.
Hmm, was my use of “bias” above just an applause light for Overcoming Bias?
I don’t think your ‘bias’ usage is an applause light, even though the reverse of the statement is abnormal.
The reason is that this statement is a predictive statement and not a moral one.
Perhaps a better word would be “train”.
Eliezer’s nothing-but-applause-lights speech sounds strangely like every State of the Union address I’ve ever heard...
See also Trust Cues.
When I click that link, my browser downloads a file called redirect.php.
Rather than just “applause lights”, sloganeering often is a cue to group-identification. Cf. postmodern text generators.
This post reminds me of George Orwell’s essay “Politics and The English Language”.
“The democracy booster probably meant that people with little political power should not be ignored. And that’s not an empty statement; people with little political power are ignored all the time.”
But isn’t it precisely the people with little political power who can most safely be ignored?
In standard democracy, yes, that is the case.
Perfect democracy is pure majority rule. Through history we have learned that this is probably the worst possible idea for a form of government. The mob has no concern for those who are not in the mob, and the apathy of the crowd can lead to some horrific consequences for those in the minority.
This is why most democracies are not really democracies, but have strong constraints that boost the power of the weakest members to prevent them from being overruled on every decision, while still giving the majority the larger share of the power.
For example, in the US the democratic process is split between two houses, The House of Representatives, which is population based and represents majority rule, and The Senate, for which each state gets only two representatives regardless of population. That balances the power while still giving the majority the majority of the power.
It’s constraints similar to this (everyone does it differently, the point is that you always need to do it) that allow democratically based systems to work. In the US we also put in a president to make sure things get done, and then went as far outside the democratic system as the founders were comfortable with to install the third constraint on the system—the courts.
It could work just fine if there were plenty of well thought out constraints on it, but “democracy” by itself probably would not work at all; it rarely ever does. Therefore, saying “democracy” without any intention of discussing it is clearly just an applause word. Either that, or the man was totally ignorant. Leave it to someone like that to require the absolute destruction of a major effort like AGI just to learn the pitfalls of democracy that have been learned over and over and over again.
But that in now way implies that they should be ignored.
It at least to some extent implies that they should be ignored. To illustrate:
Someone who has great political power should not be ignored. This statement is not vacuous; it is instead making a worthwhile statement of fact. Given that, we know that people who do not have great political power should be ignored to a greater extent than people who do have great political power. Thus, that one does not have great political power (at least weakly) implies that one should be ignored (ceteris paribus). This contradicts the claim “That in no way implies that they should be ignored” (emphasis added).
As a side note, the comment you’re responding to was left in 2007, and even on a different website. As a general rule, unless you’re making a significant contribution, it’s not worth responding to comments that were left before 2009.
If you do believe the parent comment is a worthwhile contribution, I’d suggest correcting “now” to “no” (assuming that’s what you meant).
Curiously Eliezer, I feel like applauding. Good post.
Eliezer,
Thank you for the quotation:
“We believe that we are already living in a democracy, although some factors are still missing, such as the expression of the people’s will”
I hope someone can tell us who said it.
John
It might not convey information, but I bet you could get thunderous applause. Often, the latter outweighs the former when it comes to the goals of a speech.
link to 1981 Time magazine interview with the president of Argentina—source of Eliezer’s quote about democracy absent the people’s will.
http://www.time.com/time/magazine/article/0,9171,954853,00.html?promoid=googlep
This is not a particularly well-defined notion; clearly it does not resonate with you, who want a stricter definition. But it is hardly a meaningless notion, either. It is not an applause sign.
It is also, I think, a much more useful concept than you seem to have in mind. You are hung up on specifics: “the resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an AI, but it has to be something.” Yes, in any actual project for developing AI, it would have to be something, and something specific. But specifically which of these methods (or an infinity of other specific implementations of “democracy”) did not matter to the speaker you refer to.
But was it really that it didn’t MATTER, or simply that he didn’t KNOW?
I think it was the latter—what’s more, it didn’t even occur to him to ask the question. He seemed to think that saying “democratic” was enough.
I know where your quote came from: http://www.time.com/time/magazine/article/0,9171,954853,00.html?promoid
It’s from “President Roberto Eduardo Viola, formerly Argentina’s army commander in chief”.
It’s an answer to the first question in the interview:
“Q. How soon do you expect Argentina to be returned to democratic government?
A. We believe we are already within a democratic system. Some factors are still missing, like the expression of the people’s will, but nevertheless we still think we are within a democracy. We say so because we believe these two fundamental values of democracy, freedom and justice, are in force in our country. There are, it is true, several conditioning aspects as regards political or union activity, but individual freedom is nowhere infringed in an outstanding manner.”
BTW, I googled it. Apparently my Google-fu is better than yours ;) (But I do applaud your excellent memory, or else I wouldn’t have been able to find it.)
And keep up with the great posts. I’m a daily reader of this blog.
Miguel
[rhetorical pose] We shouldn’t balance the risks and opportunities of AI. Enthusiasts for AI are biased. They underestimate the difficulties. They would not be so enthusiastic if they grasped how disappointing progress is likely to be. Detractors of AI are also biased. They underestimate the difficulties too. You will have a hard time convincing them of the difficulties, because you would be trying to persuade them that they had been frightened of shadows.
So there are few opportunities which are likely to be altogether lost if we hang back through unnecessary fear. [/rhetorical]
Well, I happen to believe the two paragraphs above, but distinct from the question of whether I am right or not is the question of whether the phrase “We need to balance the risks and opportunities of AI.” means something or whether it is merely an applause light.
I think it is trivially true that we need to balance the actual risks and actual opportunities of AI. There is room for disagreement about whether we need to balance the perceived risks and perceived opportunities. If perceptions are accurate we should, but there is scope to say, for example, that the common perception is wrong and a rogue AI will in fact be quite stupid and easily unplugged. This opens the way to a decoding of language in which
o We need to balance the risks and opportunities of AI.
is the position that we are assessing the risks and opportunities correctly and
o We shouldn’t balance the risks and opportunities of AI.
is the position that we are assessing the risks and opportunities incorrectly and should follow a different path from that indicated by our inaccurate assessments. Such a position needs fleshing out with a rival account of the risks and opportunities.
One question that I dwell on is “how do intelligent and well-intentioned persons fall to quarrelling?” The idea of an Applause Light is illuminating, but I think it is also quite tangled. There is the ambiguity between whether a phrase is an Applause Light or a Policy Proposal. I suspect that the core problem is that it is awfully tempting to exploit this ambiguity rhetorically, deliberately coding one’s policy proposals in language that also functions as an Applause Light so that they come across as obviously correct.
The fun starts when one does this subconsciously and some-one else thinks it is deliberate and takes offence. Once this happens there is little chance of discovering the actual disaggreement (which might be about the accuracy of risk assessments) for the conversation will be derailed into meta-conversations about empty phrases and rhetoric.
I don’t get that at all. If “We shouldn’t balance the risks and opportunities of AI” means they are being assessed incorrectly, isn’t that a part of balancing the risks and opportunities of AI? I don’t see how you can get that out of the statement. If they are being done incorrectly, then in the discussion of the risks and opportunities you say “No, you’re doing it wrong, you need to look at it like this blah blah blah”.
When you say “We shouldn’t balance the risks and opportunities of AI” it means to stop making an assessment altogether. It says nothing about continuing to go forward with the project or not. It doesn’t say “Stop the project! This is all wrong!” That would fall under balancing the risks and opportunities—an assessment that came against AI.
That’s foolishness, which is why no one would ever utter the phrase in the first place. That makes the prior phrase an applause phrase, because it is obvious to anyone involved that such an assessment is necessary. You’re only saying it because you know people will nod their head in agreement and possibly clap.
It would make sense in the context of a strong bias toward a specific outcome, e.g. religious indignation toward an idea.
A person believing that thinking machines are an abomination would tell you to stop assessing and forget the whole idea. A person believing that AI is the only thing that could possibly rescue us from imminent catastrophe might well tell you to stop analyzing the risks and get on with building the AI before it’s too late.
Either position would have a substantive position that you don’t need to balance the risks and opportunities any further, without claiming that you have some error in your assessment.
Yet building an AI that eventually destroys all mankind, even after it averts this particular looming catastrophe, could easily be the worse choice. Does the catastrophe we need AI for outweigh the potential dangers of a poorly built AI?
It must still be considered. You may not have time to consider it thoroughly (as time is now a factor to consider), and that must be part of your assessment, but you still have to weigh the new risks against the potential reward.
Same with the abomination. Upon what basis is it an abomination? What are the consequences if we create the abomination? Do we spend a few extra years in purgatory, or do we burn in hell for all eternity?
It still must be considered. A few years in purgatory for a creation that saves mankind from the invading squid monsters may very much be worth doing.
Consider the atomic bomb before the first live tests. There were real concerns that splitting the atom could create an unstoppable chain of events which would set the very air on fire, destroying the whole world in that single moment. I can’t really imagine a scenario that is more dire, and more strongly argues for the ceasing of all argument.
Yet they did the math anyway, considered the risks (tiny chance of blowing up the world) vs the reward (ending the war that is guaranteed to kill millions more people), and decided it was worth it to continue.
I still see no rational case for ever halting argument, except in the case of time for assessment simply running out (if you don’t act before X, the world blows up—obviously you must finish your assessment before X or it was all pointless). You may weigh the risks vs the opportunities and decide the risks are too great, and decide not to continue. However, you can not rationally cease all argument without consideration because of a particularly strong or dire argument. To do so is irrational.
Of course you can cease argument without consideration—if you deem the risks of continuing consideration to outweigh the benefits of weighing them. For instance, if you have 1 minute to try something that would save your life, and you require at least 5 minutes to properly assess anything further, you generally can’t afford to weigh whether the idea would result in a worse situation somehow—beyond whatever assessment you have already made. At that point, the time for assessment is over.
For the most part, however, I agree with your point. I did not argue that one can rationally disagree with the statement “We need to balance the risks and opportunities of AI”; just that they can sincerely say it, and even argue for it. This was a response to you saying that “no one would ever utter the phrase in the first place”. This just strikes me as false.
Never underestimate the power of human stupidity ;)
You’re right, in that regard I was certainly mistaken.
Upvoted for the “oops” moment.
That was kinda hilarious. I like your reversal test to detect content-free tautologies. Since I am working right now on a piece of AI-political-fiction (involving voting rights for artificial agents and questions that raises), I was thrown for a moment, but then tuned in to what YOU were talking about.
The ‘Yes, Minister’ and ‘Yes, Prime Minister’ series is full of extended pieces of such content-free dialog.
More seriously though, this is a bit of a strawman attack on the word ‘democracy’ being used as decoration/group dynamics cueing. You kinda blind-sided this guy, and I suspect he’d have a better answer if he had time to think. There is SOME content even to such a woolly-headed sentiment. Any large group (including large research teams) has conflict, and there is a spectrum of conflict resolution ranging from dictatorial imposition to democracy through to consensus.
Whether or not the formal scaffolding is present, an activity as complex as research CANNOT work unless the conflict resolution mechanisms are closer to the democracy/consensus end of the spectrum. Dictators can whip people’s muscles into obedience, maybe even their lower-end skills (“do this arithmetic or DIE!”), but when you want to engage the creativity of a gang of PhDs, it is not going to work until there is a mechanism for their dissent to be heard and addressed. This means making the group itself representative (the ‘multinational’ part) automatically brings in the spirit if not the form of democratic discourses. So yes, if there are autocentric cultural biases today’s AI researchers bring to the game, making the funding and execution multinational would help. Having worked on AI research as an intern in India 12 years ago, and working today in related fields here in the US, I can’t say I see any such biases in this particular field, but perhaps in other fields, making up multinational, internationally-funded research teams would actually help.
On the flip side, you can have all the mechanisms and still allow dictatorial intent to prevail. My modest take on ruining democratic meetings run on Robert’s Rules:
The 15 laws of Meeting Power
C’mon, Eliezer, be fair: identify who the speaker was that you “probed” in this way, so that people can find the recordings of the talk and exchange at singinst.org to decide for themselves how it went.
As you have it above, aside from the paraphrasing, you omit a couple of important parts of my replies. With regards to the Reason/Hollywood comparison, I go on to say:
“That is, they’re both caricatures, and neither one is terribly plausible or complete. There would be some critical benefits to the messy process of the first scenario, and some important drawbacks to the second.”
With regards to the “I don’t know,” I then say:
“This is a point I’ve tried to make a couple of times here: this is not a solved problem, but it’s an important problem, and we need to figure out how to address it.”
I certainly did not talk about democracy with any intent of it serving as “applause lights” for my talk—in fact, given the audience, I expected a semi-hostile response, given my argument against the kind of “rebel nerd” heroism self-image a lot of the AGI community seems to have.
BTW, if anyone wants to go to singinst.org and download the audio, you’ll note that the actual event did not occur the exact way I remembered it, which should surprise no one here who knows anything about human memory. In particular, Cascio spontaneously provided the Genome Project example, rather than needing to be asked for it.
Generally, the reason I avoid identifying the characters in my examples is that it feels to me like I’m dumping all the sins of humankind upon their undeserving heads—I’m presenting one error, out of context, as exemplar for all the errors of this kind that have ever been committed, and showing none of the good qualities of the speaker—it would be like caricaturing them, if I called them by name.
That said, the reason why I picked this example is that, in fact, I was thinking of Orwell’s “Politics and the English Language” while writing this post. And as Orwell said:
In the case of a word like democracy, not only is there no agreed definition, but the attempt to make one is resisted from all sides. It is almost universally felt that when we call a country democratic we are praising it: consequently the defenders of every kind of regime claim that it is a democracy, and fear that they might have to stop using that word if it were tied down to any one meaning.
If you simply issue a call for “democracy”, why, no one can disagree with that—it would be like disagreeing with a call for apple pie. As soon as you propose a specific mechanism of democracy, whether it is Congress passing a law, or an AI polling people by phone, or government funding of a large research project whose final authority belongs to an appointed committee of eminent scientists, et cetera, people can disagree with that, because they can actually visualize the probable consequences.
So there is a tremendous motive to avoid criticism, to keep to the safely vague areas where people will applaud you, and not to make the concrete proposals where people might—gasp!—disagree.
Now I do not accuse you too much of this, because you did say “Genome Project” when challenged instead of squirting out an immense cloud of ink. But it is why I challenged you to define “democracy”. I think that the real value in these discussions comes from people willing to make concrete proposals and expose themselves to criticism.
Really bad example...
My impression is that democracy is seeing a sharp uptick in attacks from elites and intellectuals. There are many who now believe, e.g., that the US should be more like China (see: the success of Trump).
As the speaker noted, he expected his speech to be controversial in that crowd, and in a way, it was, as evidenced by this blog post :)
Hum.
When I hear the sort of thing you would call “applause lights”, I don’t always think of that as an obvious fact that everyone in their right mind would agree on. Rather, I get the impression the speaker is implying that someone they strongly disagree with does believe this obvious fact is not true, or that this ridiculous notion is.
If for example I hear someone say “we shouldn’t be hugging criminals, we should be locking them up”, I interpret that as a very one-sided opposition to a grossly misrepresented opponent who goes a bit easier on convicts. Of course this person wouldn’t literally believe the reverse that “we should be hugging criminals instead of locking them up”, but she might believe something that a bigot could paraphrase as such with a straight face.
I think this is also the reason why the speaker’s supporters applaud to statements like that—it implies the issue is very simple and clear-cut, only one side (ours) is remotely sensible, and you’d have to be insane to disagree. One-sidedness feels good. Very blatant one-sidedness feels even better.
(Excuse me if this has been said already.)
I haven’t seen it laid out so clearly anywhere.
The only thing I’d add is that it’s very easy to fall into that error reflexively. It isn’t generally a matter of conscious strategy.
Hence, an applause light is a form of strawman argumentation?
That sounds about right actually.
It can be used for that, at least.
(Hi everyone; this is my first time posting here.)
If someone delivered that 100%-applause-light paragraph to me in a speech, my first impulse would be to interpret it as an honest attempt to remind the audience of obvious but not necessarily currently-in-context ideas. For example, this statement from the middle:
“To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm.”
Taken literally as a set of assertions, this really is quite empty of novel or unexpected content. However, directed at an audience of humans, aware of but still vulnerable to cognitive bias, the statement above implies another statement which is more useful: “We should be careful not to act like those who, despite intending not to, panicked rather than thinking productively. We should also be careful not to act like those whose enthusiasm overwhelmed their necessary sense of caution, even though they knew the value of that caution.”
People who agree with the part of the 1st virtue that says “A burning itch to know is higher than a solemn vow to pursue truth” may still sometimes need to be reminded to check themselves and make sure they’re doing the former rather than the latter.
This sounds similar to the idea of a “motherhood statement” as defined here.
That second definition applies to most depictions of transhumanism in fiction. It’s the rare author who is bold enough to say, “The implants that we put in our brains? Yeah, they actually make us better.”
Pretty much all the fiction I read in which brain implants are mentioned at all treat them as improvements.
Really? Got any examples?
I’ve read some in which the transhuman technologies were ambiguous (had upsides and downsides), but I can’t think of any where it was just better, the way that actual technologies often are—would any of us willingly go back to the days before electricity and running water?
Having upsides and downsides isn’t the same thing as being ambiguous. Running water and electricity do have downsides–namely, depletion of water tables due to overuse, and pollution, resource depletion, and possibly global warming due in part to the efforts required to make electricity... But I wouldn’t say that either technology is ambiguous. The advantages pretty clearly outweigh the disadvantages, which are avoidable with some thought and creativity.
Well, they’re hardly common, but anarcho-primitivists do exist.
Most of Peter Hamilton’s stuff comes to mind, for example. Implants are just another technology, treated no differently than guns or cars. The Greg Mandel books have a few characters who do end up with implants that they would prefer not to have, but they’re the exceptions.
I find that a little irritating—for people supposedly open to new ideas, science fiction authors sure seem fearful and/or disapproving of future technology.
Part of me thinks that that’s encoded into the metaphorical DNA of the SF genre (or one branch of it) at a very basic level. It’s been conventional for a while to think of SF as Enlightenment and the rest of spec-fic as Romantic, but the history of the genre’s actually more complicated than that; Mary Shelley, for example, definitely fell on the Romantic side of the fence, and later writers haven’t exactly been shy about following her lead. The treading-in-God’s-domain motif is a powerful one, and it’s the bedrock that an awful lot of SF is built on.
Obligatory Link
Oy, now that you’ve said it, I hear speeches like that at the end all the time. Whole discussion between opposing sides even. Perhaps that’s why I haven’t been able to stand cable news for a while now?
When I first read this, I imagined a favorite politician (I won’t mention who) giving this mock speech.
To my embarrassment, I found myself nodding in completely genuine enthusiasm. This guy clearly knows what he’s talking about!
(This in turn made me consider just how much of this politician’s speeches was similarly composed. I came to the conclusion that quite a significant amount of it was)
...Nobody ever told me cognitive bias would be this annoying!
Upvoted because I endorse the willingness to notice one’s own biases.
So, next question, if you’re willing: what are three things you could do to reduce the degree to which this sort of empty rhetoric leads you to endorse the speaker?
TheOtherDave, that is a very constructive approach :)
I am already prone to requiring policy specifics from politicians and being dissatisfied with vague points. But one thing I (and many others) do have is a tendency, when hearing a few specifics in a sea of “general direction” applause cues, to note that my own preference for solutions is compatible with the speech; and from compatibility, I get hope that they would implement it—despite a lack of evidence that they’re even aware of such a solution, much less want to implement it. So this is something to be cautious of and to note mid-speech.
I could go further and try to strike from the mental record anything that isn’t specifics, making a point-by-point list of substantive statements. An easy way to do this is to ask “is anyone really considering doing otherwise? No? Then it doesn’t count. Yes? Then why are they?” This method might not always be wise—motivations and beliefs are also important in trying to predict a politician’s future choices on questions they have not yet addressed, and a speech can reveal those. However, it would be a good mental exercise when trying to evaluate positions on a specific policy question.
Lastly, try to separate emotional jargon from actual policy. If your politician says we “need to be prepared for the 21st century”, recognize the fuzzy excitement that this statement gives you and squash it—it’s caused by the phrase “21st century” being linked in your mind with progress and technology. Wait until that politician says they’re going to specifically invest in technological literacy of 8th graders before you give it any significance, and treat it as suspect until then. (This is very similar to the first thing I suggested, except it focuses on recognizing an immediately triggered emotion in response to a phrase, rather than your own mind building scenarios which then in turn excite you).
I’ll try to remember all that for the next speech I hear :P
I definitely endorse tracking specific proposals/substantive assertions, and explicitly labeling vague or empty assertions that nevertheless elicit positive feelings or invite you to project your own preferences onto the speaker.
I definitely endorse asking the “is anyone really considering doing otherwise, and why?” question.
Something I also find useful is explicitly labeling implied affiliations.
E.g., consider the difference between “we need to prepare our children with the tools they need to be leaders in the 21st century,” versus “we need to instill our children with the values they need to make the right choices in the 21st century.” They are both empty statements—I mean, who would ever claim otherwise? -- but in the U.S. today the former signals affiliation with teachers and thereby implies support for public schools, education funding, etc., while the latter signals something I understand less clearly.
And those in turn signal alliances with major political parties, because it’s understood by most U.S. voters that party A is more closely tied to education and party B to values.
In fact, even if the statement includes a specific proposal, it is often worth labeling the implied affiliation.
It’s interesting; with the connotations and associations in our discourse, I can actually make some predictions about planned policies from those two supposedly “empty” statements.
The former is probably going to spend more money on math and science education.
The latter is probably going to fund “faith-based initiatives” or something similarly silly and religious (but I repeat myself), because “values” in American politics is almost always code for “conservative Evangelical Christianity”.
So does this mean that they really aren’t empty at all?
Well, yes, I chose those statements precisely because of their connotative affiliations.
As for whether they’re really empty… (shrug).
In ordinary conversation I would consider “I like likable things!” an empty statement, but of course it conveys an enormous amount of information: that I am capable of constructing a grammatical English sentence, for example, which the Vast majority of equivalent-mass aggregations of particles in the universe are not. I can use a different term to describe that category of statement if this one is too ambiguous.
“Applause Light” is a wonderful name for that tactic; it’s funny, catchy, and makes the problem with the tactic intuitively obvious. The term should be proliferated further throughout the internet, if it hasn’t been already. Adding that meme to the average internet-goer’s repertoire could have wonderful side effects on the support decisions of people in meatspace everywhere.
That applause-light speech at the end just needs some variation, and I’m pretty sure it would fly. I’d replace about half of the instances of “we should” with something else, like “it is important that we” or “it would be dangerous to neglect,” because right now it’s so repetitive that surely a lot of people would notice and realize what’s being done.
Or maybe I’m yet again overestimating my fellow human beings, as past experience says I am prone to do...
I am tempted to give a talk sometime that consists of nothing but applause lights
http://www.youtube.com/watch?v=pxMqSdgB-uA (appropriately titled “Unthink”).
You got him with a nice Socratic question. A good question seeks out the good ideas and eliminates the inane ones. Nice.
There might be one more case: in casual conversation, something that looks like an applause light could just be the expression of a particular person’s recent insights on the subject. Perhaps he only yesterday deduced, from fragments of rationalist writing on the web, that we should balance the risks and opportunities of this. Or maybe the audience’s level on the subject is so low that even the applause-like statements convey some information.
“Let’s do everything right.”
Yep. Standard political speech.
I don’t think these statements are entirely vacuous. Even when their content is little more than a tautology, their actual meaning is something else entirely, at least in politics: they signal that the speaker is aware of the jargon, is willing to use it, and is essentially moderate/“pragmatic” and inclined to maintain the status quo.
I couldn’t resist adding another link as an example of a speech that seems to consist almost entirely of applause lights. This one is vintage Peter Sellers.
http://www.youtube.com/watch?v=GxBtGuu9BVE
Applause Lights also have a more sinister, dark-artsy application: they can be used to bait people into agreeing with seemingly trivial propositions, which nevertheless cause the target to modify their self-image, rendering them more likely to agree with less trivial propositions in the future. For example, Cialdini’s Influence reports on a study which found that households that had been visited by a volunteer collecting signatures in favor of the vague statement “keep California beautiful” (without ever specifying how this was to be accomplished) were much more likely to agree to prominently display a large, ugly sign reading “prevent drunk driving” in their yards than households that hadn’t been so visited.
Why limit it to bounded rationalists?
Oh my, the unintentional humor of that speaker’s comment. There’s an entire book, The Genome War, written on how groups resolved their conflicts in the Human Genome Project: they didn’t. The outcome was a horrific case study of how science really “works” today.
You’ll often see feminists on the Internet pointing out how just about everyone seems to be in favor of “gender equality” and yet hardly anyone of either gender self-identifies as feminist anymore, even though gender equality is what feminism claims to be all about.
This article explains that disconnect. “Equality” is an applause light. It’s something we can all agree is great in the abstract, but as soon as someone starts talking specifics, the applause thins and we all go back to being polarized again because everyone’s idea of what equality, especially social and economic equality, actually entails is different.
“I am here to propose to you today that we need to balance the risks and opportunities of advanced Artificial Intelligence...”
Seven years later, this open letter was signed by leaders of the field. It’s amusing how similar it is to the above speech, especially considering how it actually marked a major milestone in the advancement of the field of AI safety.
In this old post, Eliezer is being insufficiently charitable and not steelmanning.
It is possible that I know that X can do A, but I don’t know how X can do A. “Look at X and do A similarly to that” may be a reasonable response when I am asked how to do A (or when I am told that A is just impossible). It may fail in some cases (such as when I am incorrect in my assumption that X and the current situation are similar enough), but it isn’t devoid of content to say that, and is more than an applause light.
Sounds like https://en.wikipedia.org/wiki/Floating_signifier
I’m reminded of your Tom Riddle a bit heh.
I think “a speech that consists of nothing but applause lights” pretty much describes 99% of political discourse these days, and instead of being amused at how long it took the audience to catch on, you’d be embittered at how seriously everyone took the whole exercise. Maybe I have some bias to sort out, but I think the actual content of what is said often matters very little to most people, as long as you hit the right buzzwords and look convincing/confident.
I like those reversal tests. They are not only useful, but also quite hilarious.
“do whatever the majority says” is not democracy.
True democracy finds the solution that maximizes the utility of all the voters, not one that maximizes the utility of half while completely ignoring the other half.
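A toy illustration of the difference, with made-up utilities: suppose option A gives 51 voters utility 1 and the other 49 utility 0, while option B gives all 100 voters utility 0.9. Majority rule picks A; maximizing total utility picks B. A minimal sketch in Python, under those assumed numbers:

    # Toy numbers, purely hypothetical: 100 voters, two options.
    options = {
        "A": [1.0] * 51 + [0.0] * 49,  # 51 voters love it, 49 get nothing
        "B": [0.9] * 100,              # everyone is almost fully satisfied
    }

    # Majority rule: each voter backs the option they personally prefer.
    votes = {name: 0 for name in options}
    for voter in range(100):
        votes[max(options, key=lambda name: options[name][voter])] += 1
    print(max(votes, key=votes.get))  # "A" wins, 51 votes to 49

    # Total-utility rule: pick the option with the highest summed utility.
    print(max(options, key=lambda name: sum(options[name])))  # "B" (90 vs. 51)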
As does true communism. Has there ever been such a democracy? How did it find out the utility of all its voters? Sounds to me like a “no true Scotsman” that rules out every known system of government. I think you should modify your statement to something like “a democracy should try to protect the interests of its minorities, as well as those of the majority.”
I think it’s possible that there’s another purpose to these kinds of statements. When someone says, “We must acknowledge the potential risks and benefits of AGI,” they’re signalling—at least in principle—that they’re aware that there *are* both risks and benefits to AGI.
So in some cases its purpose is to signal to listeners that the speaker has avoided ideological possession at least long enough to acknowledge the existence of factors on more than one side of an argument.
It’s hard to judge this particular case without context, but such sentences can be valid if they convey a general direction a person wants something to move in, in a situation where they can’t or shouldn’t be overly specific: for example, if they don’t know much about the specific subject, or if they want to remain on topic during a talk about a particular issue.
For example, I could say “it’s time someone developed a machine that is able to fetch things around the house and bring them to us”. It doesn’t mean I know anything about engineering or about how this machine would operate, just that I think it would be a good thing.
In the same way, the speaker might just have wanted to say that he believes it would be good if AI development went in two directions: 1. multinational, 2. democratic. That doesn’t involve claiming to be an expert in developing democratic frameworks for international decision-making. He was just expressing that the field should move in the direction of those ideals, maybe because he liked the outcome of other projects that shared them, like the Human Genome Project.
Back in the day I would have agreed, and thought that the last paragraph was indeed a prime example of political speech with nothing inside it. After recent years in politics, I wouldn’t be surprised to see leaders in certain countries give a very different speech about AI. So perhaps it is indeed useful to have these kinds of speeches, just to signal that there is still reason in this world.
When you get down to it, all politics is about conflict resolution. That’s not particular to democracy.
Democracy can be viewed as a government in which policy decisions are intended to reflect the will of the people, as opposed to, for example, the will of the nobility, or a single ruler. When people say that a set of decisions should be made democratically, they mean that the conflict resolution mechanism should be such that the decisions made are reflective of the will of the people.
I think the speaker advocating for a democratic, multinational push for AGI was saying A. that we need to push for AGI and B. that if we do so, our decisions about it should reflect the will of the people. This leaves the particular conflict resolution mechanism an open question but constrains it to the set of mechanisms that are intended to reflect the will of the people.
It’s not particularly surprising that this speaker wouldn’t have an opinion on which democratic conflict resolution mechanism to use. I imagine that sort of thing is usually left to political scientists.
The speaker’s statement is also non-obvious. It may be clear that such a push should be democratic, but it’s not at all clear that it would be. The speaker is advocating for making sure that the push happens, and that it’s democratic, as opposed to pushing for the AGI movement whilst leaving the conflict resolution entirely up to others who may not have the interests of the people at heart.
There are many different implementations of democracy, but they are all very different from oligarchies. It is meaningful to say that a set of decisions should be made democratically. Your criticism of this speaker is analogous to someone saying, “We should believe true things, not things that make us feel good,” and you responding, “You’d better know how to find true beliefs or you’re just virtue signaling.” It is both poor manners and invalid criticism.
That is not to say that I disagree with your overall point; I just don’t think this person’s statement was an example of it. Perhaps, if I’d been there, there would have been some noncommunicable social cues that would lead me to your conclusion, but the text alone does not.
Yes, I would choose a modern democracy, because of one fact that is shown in stark relief by the Ukraine War: democracy isn’t perfect, but it is usually immune to the unilateralist’s curse, and it is better at governance than any other system we’ve tried.
To quote Churchill: “Democracy is the worst form of government, except for all the other systems we’ve tried.”
Claude Opus is, unsurprisingly, excellent at writing these:
“My fellow citizens, we are gathered here today to address the critical issue of ensuring the stability and fairness of our financial markets. It is essential that we implement prudent regulations to protect consumers, investors, and indeed our entire economy. We must strive to prevent the reckless speculation and unchecked greed that can lead to devastating financial crises.
At the same time, we should not unnecessarily stifle the entrepreneurial spirit and innovation that are the engines of economic growth. A balanced, common-sense approach to financial regulation is required. We must bring all stakeholders to the table—industry, government, and citizen groups—to collaboratively develop a regulatory framework that serves the interests of all.
Transparency and accountability must be the watchwords as we go forward. Financial markets work best when participants have confidence that the system is fair and abuses will not be tolerated. Regulators should be empowered to shine a bright light into the darker corners of the financial world.
Our goal must be a financial system that is resilient in the face of shocks, yet flexible enough to power economic growth and shared prosperity for all. This is no easy task, but it is a challenge we must meet. The decisions we make today will shape our economy for generations to come. Let us seize this moment and build a sounder foundation for our financial future.”