Beware Trivial Inconveniences
The Great Firewall of China. A massive system of centralized censorship purging the Chinese version of the Internet of all potentially subversive content. Generally agreed to be a great technical achievement and political success even by the vast majority of people who find it morally abhorrent.
I spent a few days in China. I got around it at the Internet cafe by using a free online proxy. Actual Chinese people have dozens of ways of getting around it with a minimum of technical knowledge or just the ability to read some instructions.
The Chinese government isn’t losing any sleep over this (although they also don’t lose any sleep over murdering political dissidents, so maybe they’re just very sound sleepers). Their theory is that by making it a little inconvenient and time-consuming to view subversive sites, they will discourage casual exploration. No one will bother to circumvent it unless they already seriously distrust the Chinese government and are specifically looking for foreign websites, and these people probably know what the foreign websites are going to say anyway.
Think about this for a second. The human longing for freedom of information is a terrible and wonderful thing. It delineates a pivotal difference between mental emancipation and slavery. It has launched protests, rebellions, and revolutions. Thousands have devoted their lives to it, thousands of others have even died for it. And it can be stopped dead in its tracks by requiring people to search for “how to set up proxy” before viewing their anti-government website.
I was reminded of this recently by Eliezer’s Less Wrong Progress Report. He mentioned how surprised he was that so many people were posting so much stuff on Less Wrong, when very few people had ever taken advantage of Overcoming Bias’ policy of accepting contributions if you emailed them to a moderator and the moderator approved. Apparently all us folk brimming with ideas for posts didn’t want to deal with the aggravation.
Okay, in my case at least it was a bit more than that. There’s a sense of going out on a limb and drawing attention to yourself, of arrogantly claiming some sort of equivalence to Robin Hanson and Eliezer Yudkowsky. But it’s still interesting that this potential embarrassment and awkwardness was enough to keep the several dozen people who have blogged on here so far from sending that “I have something I’d like to post...” email.
Companies frequently offer “free rebates”. For example, an $800 television with a $200 rebate. There are a few reasons companies like rebates, but one is that you’ll be attracted to the television because it appears to have a net cost of only $600, but then filling out the paperwork to get the rebate is too inconvenient and you won’t get around to it. This is basically a free $200 for filling out an annoying form, but companies can predict that customers will consistently fail to complete it. This might make some sense if you’re a high-powered lawyer or someone else whose time is extremely valuable, but most of us have absolutely no excuse.
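A quick back-of-the-envelope sketch of why this works out for the seller (the redemption rate here is a made-up number for illustration, not a cited figure): the buyer anchors on the advertised net price, while the seller only pays out the rebate to the fraction of customers who actually complete the form.

```python
# Toy rebate model with illustrative numbers only.
sticker_price = 800     # what the customer pays at the register
rebate = 200            # refunded only if the customer mails in the form
redemption_rate = 0.4   # hypothetical fraction of buyers who ever complete the paperwork

advertised_net_price = sticker_price - rebate                 # $600: the price buyers anchor on
expected_revenue = sticker_price - rebate * redemption_rate   # $720: what the seller expects to keep

print(f"Advertised net price: ${advertised_net_price}")
print(f"Seller's expected revenue per unit: ${expected_revenue}")
```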
One last example: It’s become a truism that people spend more when they use credit cards than when they use cash. This particular truism happens to be true: in a study by Prelec and Simester¹, auction participants bid twice as much for the same prize when using credit as when using cash. The trivial step of getting the money and handing it over has a major inhibitory effect on your spending habits.
I don’t know of any unifying psychological theory that explains our problem with trivial inconveniences. It seems to have something to do with loss aversion, and with the brain’s general use of emotion-based hacks instead of serious cost-benefit analysis. It might be linked to akrasia; for example, you might not have enough willpower to go ahead with the unpleasant action of filling in a rebate form, and your brain may assign it low priority because it’s hard to imagine the connection between the action and the reward.
But these trivial inconveniences have major policy implications. Countries like China that want to oppress their citizens are already using “soft” oppression to make it annoyingly difficult to access subversive information. But there are also benefits for governments that want to help their citizens.
“Soft paternalism” means a lot of things to a lot of different people. But one of the most interesting versions is the idea of “opt-out” government policies. For example, it would be nice if everyone put money into a pension scheme. Left to their own devices, many ignorant or lazy people might never get around to starting a pension, and in order to prevent these people’s financial ruin, there is a strong moral argument for a government-mandated pension scheme. But there’s also a strong libertarian argument against that idea; if someone for reasons of their own doesn’t want a pension, or wants a different kind of pension, their status as a free citizen should give them that right.
The “soft paternalist” solution is to have a government-mandated pension scheme, but allow individuals to opt out of it after signing the appropriate amount of paperwork. Most people, the theory goes, would remain in the pension scheme, because they understand they’re better off with a pension and it was only laziness that prevented them from getting one before. And anyone who actually goes through the trouble of opting out of the government scheme would either be the sort of intelligent person who has a good reason not to want a pension, or else deserve what they get².
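A minimal sketch of the default effect at work, assuming each person acts (opts in or opts out) only when their perceived benefit from acting outweighs a fixed paperwork hassle; all of the numbers and the distribution are invented purely for illustration.

```python
import random

random.seed(0)

def participation(default_enrolled: bool, n: int = 100_000, hassle: float = 1.0) -> float:
    """Fraction of people who end up in the pension scheme under a given default."""
    enrolled = 0
    for _ in range(n):
        benefit = random.gauss(0.5, 2.0)  # hypothetical perceived value of having a pension
        if default_enrolled:
            # Opt out only if the pension seems bad enough to justify the paperwork.
            enrolled += 1 if not (-benefit > hassle) else 0
        else:
            # Opt in only if the pension seems good enough to justify the paperwork.
            enrolled += 1 if benefit > hassle else 0
    return enrolled / n

print("Opt-out default:", participation(True))   # most people stay enrolled
print("Opt-in default: ", participation(False))  # far fewer ever get around to it
```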
This also reminds me of Robin’s IQ-gated, test-requiring would-have-been-banned store, which would discourage people from certain drugs without making it impossible for the true believers to get their hands on them. I suggest such a store be located way on the outskirts of town accessible only by a potholed road with a single traffic light that changes once per presidential administration, have a surly clerk who speaks heavily accented English, and be open between the hours of two and four on weekdays.
Footnotes
1: See Jonah Lehrer’s book How We Decide. In fact, read it anyway. It’s very good.
2: Note also the clever use of the status quo bias here.
And yet, it seems to me that those Chinese who don’t know that it’s safe to go around the government firewall may have no good way of finding out that it’s safe.
Paranoia that if they do they will get caught may be cultivated in them. How do they know what methods the government has?
Also, they may be made to think that there is something dirty or illicit, wrong or ugly about going outside official sources.
I am reminded of how effectively government propaganda in the US works on those teenagers who least need it and how ineffectively on those who most need it.
Also, penning in sheep is a lot easier than penning in wolves.
In fact, this reminds me of the magnetic traps (Penning traps?) that are used to cool a couple of hundred atoms down to near-absolute zero. There is a potential barrier that keeps most of the atoms inside. Occasionally, one atom is jostled enough to gain enough energy to escape. This has the effect of carrying energy away from the group, cooling it as a whole.
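A rough simulation of that mechanism (arbitrary units and an arbitrary barrier, just to show the direction of the effect): letting only the most energetic members escape lowers the average energy of whoever remains.

```python
import random

random.seed(1)

# Hypothetical energies for a few hundred trapped atoms (arbitrary units).
atoms = [random.expovariate(1.0) for _ in range(300)]
barrier = 3.0  # only atoms jostled above this energy escape the trap

print(f"mean energy before: {sum(atoms) / len(atoms):.2f}")

remaining = [e for e in atoms if e < barrier]  # the escapees carry their energy away

print(f"escaped atoms: {len(atoms) - len(remaining)}")
print(f"mean energy after:  {sum(remaining) / len(remaining):.2f}")  # lower: the group has cooled
```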
I think the analogy is compelling. An activism that works off of a discontented fringe only serves to strengthen the current regime. To get real change, one needs to energize the populace as a whole, and often the only forces capable of such widespread influence have economic and deep cultural foundations. Both Gandhi and MLK knew this.
I think the Chinese government also knows this, but I am not sure they can exploit this in the long term.
This is a lot like evaporative cooling of group beliefs.
Yes, but instead of the mechanism making the beliefs more radical in the context of the whole society, it acts to make beliefs more mainstream. Though, one could argue that a more jingoistic China would be more radical in the analogous larger context.
This is one of my best justifications, perhaps the best, for being mostly vegetarian rather than strictly vegetarian. (Aside 1: I probably wouldn’t phrase it quite as strongly as you. Aside 2: I look forward to commenting about something unrelated to vegetarianism).
Well, but unlike the atom-cooling example, becoming a strict vegetarian doesn’t cut off your communication with non-vegetarians.
I suppose being just mostly vegetarian might make a vegetarian lifestyle seem more approachable to others, but I’d have to see evidence to go either way on that question. Off the cuff, it also seems plausible that being a strict vegetarian would make the possibility of strict vegetarianism seem more attainable to others.
It does make it more difficult to go to the steakhouse with them, or eat over at their house.
For eating at people’s houses: usually people will have enough side-dishes that if one does not make a big deal of it, one can fill up on non-meat dishes. At worst, there’s always bread.
For going to steakhouse—yes, but at every other place, there’s usually a vegetarian option, if one tries hard enough.
It does make a good case for being an unannoying vegetarian...but being a strict vegetarian is a useful Schelling point.
These lines of thinking seem to be a pretty big rationalization risk. Does human political behavior really act like cooling atoms? Sure, if thinking that way makes me feel good about my political choices!
I agree with this, but am confused by your criticism of the evaporative cooling metaphor. Rationalization and mechanisms for a group to become more extreme are not the same topic.
I wasn’t responding to the evaporative-cooling metaphor.
And maybe it should. At least if you’re a vegetarian for ethical reasons, you’d probably also value signalling to your social circle that they are, in your opinion, supporting sentient suffering, the minimizing of which is the reason you (in the impersonal sense) are a vegetarian.
As a strict vegetarian, that’s never been a problem for me. I’m pretty sure fubarisco is right.
Very insightful.
Manual moderation is a big unknown—risk aversion means that you don’t want your time spent writing to be wasted by some moderator deciding not to publish. And delay between writing and publishing is a problem too—you want feedback as soon as you wrote it while it’s still fresh in your mind.
If people thought moderators would be very friendly, and very fast, that would matter less, but that’s an unusual expectation to have, even when it turns out to be true.
These are two very rational reasons why people post on LW and not OB.
Precisely. Stuff on LW is posted unless some moderator decides to remove it; it’s not brought to the main page unless some moderator approves, but it exists, can be read, and can be linked to.
Stuff on OB doesn’t exist until a moderator decides it does.
If I’ve written something that I think the moderator will either dislike or be indifferent to, there’s no point in sending it in to OB. Posting it on LW will get it seen and thought about, and if it’s sufficiently popular, the moderators may even feel pressure to give it front-page access even if they dislike it personally.
I’ve felt for a long time that the same solution should be implemented for organ donation.
(Actually, there’s a case to be made for “screw your sentimental attachment to your meat parts—we can save lives”. But soft paternalism is a start.)
Mandatory donation would really screw you over if you were trying for cryonics.
Given mandatory donation, it seems reasonable to me that opting out of it would be standard part of the paperwork involved with signing up for cryonics.
If you can opt out of it, it’s not mandatory! You could get the best of both worlds, though: vitrify your head and donate the rest of your body. The only loss is, I think, your corneas.
The process of vitrifying the head makes the rest of the body unsuitable for organ donations. If the organs are extracted first, then the large resulting leaks in the circulatory system make perfusing the brain difficult. If the organs are extracted after the brain is properly perfused, they’ve been perfused too, and with the wrong substances for the purposes of organ donation.
Oh, thank you! I didn’t realize that. Perhaps a process could be developed? For example, maybe you could chill the body rapidly to organ-donation temperatures, garrote the neck, extract the organs while maintaining head blood pressure with the garrote, then remove the head and connect perfusion apparatus to it?
It’s worse than I said, by the way. If the patient is donating kidneys and is brain dead, the cryonics people want the suspension to happen as soon as possible to minimize further brain damage. The organ donation people want the organ donation to happen when the surgical team and recipient are ready, so there will be conflict over the schedule.
In any case, the fraction of organ donors is small, and the fraction of cryonics cases is much smaller, and the two groups do not have a history of working with each other. Thus even if the procedure is technically possible, I don’t know of an individual who would be interested in developing the hybrid procedure. There’s lots of other stuff that is more important to everyone involved.
Right, but I assumed that Julian was still talking about Yvain’s idea that Mblume referred to, where the government-mandated system is not strictly “mandatory” but rather the default option from which you can opt out.
I agree with you 200%. I think a couple of countries in Europe might have that. I heard Brazil used to have it, but had to change it when stupid people got angry.
Organ donation is a tricky thing, and people don’t think rationally when confronted with the death of a loved one.
I’m from Singapore, where we’re automatically registered as organ donors and the majority of us are cremated after death, so organ donation shouldn’t really be that much of an issue.
Sadly(?), medical science has advanced to the point where we can be kept “alive” despite being brain dead, and it is from these corpses that the organs with the best chance of a successful transplant can be obtained. It’s hard to expect a family to accept organ donation when they can see that the loved one still has a heartbeat, even if the heartbeat is produced with the aid of life-support machines.
If the hospital takes a “screw you, you’re stupid and we’re taking your organs” attitude, the inevitable backlash has no winners and the law will end up changed. It took a lot of cajoling from our governmental mouthpieces to soothe public sentiment when that happened.
There are two of you?
-- New York Times, The New, Soft Paternalism
Perhaps he feels twice as strongly (by some measure) about the issue as he estimates I do?
Huh. That actually makes sense. I withdraw my objection.
(Eric S. Raymond called me a “hyperintelligent pedantic bastard” at Penguicon 2009. I was flattered.)
Kieran Healy over at Crooked Timber presents evidence that, while opt-in vs. opt-out does make a difference to whether individuals agree to donate, this doesn’t necessarily translate into differences in actual organ procurement rates, and argues that the real bottlenecks in many countries are organizational/logistical.
The apparent lesson: Don’t assume that just by removing the obvious trivial obstacles, the problem will be solved. There may be less trivial obstacles lurking in the background.
P.S. Reading off the graphs, Austria, Belgium, France, Hungary, Italy, Norway, Poland, Portugal, Spain, Sweden, and Switzerland all appear to have presumed consent.
I was just going to talk about similar research. So imagine my delight when you mentioned this!
This was actually done! I heard this in a talk by Dan Ariely (of “Predictably Irrational” fame), who called it “his favorite social science research ever.”
Basically, in countries in which you opt out of organ donation (I think these were some Scandinavian countries), the percentage of organ donors was really high. In countries where you “opt in” to organ donation, the percentage of organ donors was really low.
Okay, here is what a simple Google search yielded:
http://scienceblogs.com/cognitivedaily/2008/10/dan_ariely_at_davidson.php
How do you feel about allowing the sale of organs?
I fear the massive levels of abuse it could bring—the possibility that someone would commit suicide because their organs can take care of their family and they can’t, that someone’s organs could be used as collateral in a loan à la Merchant of Venice, and of course, the temptation to gain the organs of others by force.
On the other hand, I would question what the market value of various organs would stabilize at if everyone were allowed to participate. Perhaps there’d be more potential donors than recipients and the prices would stabilize at a reasonable level, discouraging abuse.
Has anyone attempted an analysis on this issue?
Actually, what if it were handled through insurance? What if opting to donate decreased your health insurance premiums by an amount settled at by actuarial tables and the likelihood of your dying with usable organs etc. etc. and then your insurance company got to sell your organs when you died?
Only the last is an abuse. The preceding points were merely uses that you’re uncomfortable with.
I wish people would get this straight. Just because you’re uncomfortable or disapproving of a particular utilization of a right or ability doesn’t constitute an abuse of that right or ability.
Because “disapproving of” means that the right or ability doesn’t comply with the speaker’s moral values, while “abuse” means that the right or ability doesn’t comply with objectively correct moral values?
Regarding my insurance company getting to sell my organs after my death...
No. Emphatically no.
This is a very, very bad mis-incentive for the insurance company toward my continued well-being. I’d rather have the current system, where because of continually rising premium rates, the insurance company has the incentive to keep me alive for as long as possible. (Note that I do think the current system is broken as-is, but that is a discussion for another day.)
I wouldn’t classify that as abuse, but I can see how some would.
Yes, that seems like the biggest concern.
I’m not sure. There was a story a little while ago that Singapore was considering moves in this direction but it subsequently turned out to be inaccurate.
Your insurance idea is interesting, though it also sounds open to potential abuse.
Two possibilities:
a) someone rationally chooses such an action because they have no better options.
b) someone is mentally ill, depressed, etc. and drastically undervalues the future worth of their life.
I would consider the fact that a) can happen to be indicative of something fundamentally broken in the society in which it occurs—there should be better options. Of course, simply disallowing the deal doesn’t necessarily address that, merely sweeps it under the rug.
I would consider b) abuse. I consider paternalism to carry with it an intrinsic evil, but there are greater evils, and the loss of a human life because of a potentially temporary confusion is one of them.
Even if another human life is saved in the process? That is after all the context here.
Sounds awesome to me. Some people get organs they need. Others get money. Even the “nightmare” scenarios only really occur when there was a pre-existing and serious problem. Usually the organ sale doesn’t make things much worse.
Not a good case. Not in any place with even a remote concern for liberty or natural rights. Unless, of course, that place also disallows inheritance; in that case, it could be argued that you don’t own your body after you die.
I remember reading about this being tried somewhere. The response was that there was less donation because people didn’t like the idea and took the “screw you!” attitude.
I don’t remember where I read this, but I can try to find it if you’d like.
I would like to see it, considering that at least two other people are saying the exact opposite.
Hmm… I can’t seem to find it =\
That’s how it works in Poland. You can opt out of organ donation if you want. Almost nobody bothers.
I don’t think “Not sending in your $200 rebate” and “not writing in an article to Overcomingbias” are the same phenomenon at all.
It’s not that people who are now writing all these LW posts felt like it was too much of a hassle to send an email to Overcomingbias; it’s that deliberately and unusually sticking your neck out to contribute has a different social connotation than simply participating in the expected community behavior.
Contributing to Overcomingbias is like getting on stage: walking up to the stage is a socially loaded act in and of itself. “Hey, everyone, I’m going to stand out here and say something.” Lesswrong, since the entire site is built around community posting, practically invites you to post as you please. There’s nothing out of the ordinary about it. How could there be? The tools to do so are right there, embedded into the infrastructure of the site. It must be expected for me to do that!
I think LessWrong actually has a higher barrier for contribution—at least for articles—because you’re expected to have 20 comment karma before you can submit. This means that, if you’re honest anyway, you’ll have to spend your time in the pit interacting with people who could potentially shout you down, or call you a threat to their well-kept garden, or whatever.
I have at least 3 articles in draft format that I want to submit once I reach that total, but I don’t comment on discussions as much because most of what I would say is usually said in one comment or another. For people like me, the barrier of “must email someone” is actually easier, since discussion contribution requires a sense of knowing how the community works, intuiting a sense of what the community deems a good comment, and posting along those lines.
It may be worthwhile to publish one of them, or at least a draft for it, in Discussion; if it’s good enough, that should give you enough karma to post the following articles in Main, and if it isn’t, it’ll give you valuable feedback on how to improve them.
I don’t really have a point here, but this shouldn’t really be surprising at all, not at this moment in time.
I mean, has anyone here not used Wikipedia? (I’d also wager even odds that >=90% of you have edited WP at some point.)
EDIT: Looking back, it seems to me that what would not be surprising is, upon observing LW suddenly skyrocketing in contributors & contributed material, noticing that the sudden increase comes after a loosening of submission guidelines. When a site skyrockets, it’s for one of a few reasons: being linked by a major site like Slashdot, for example. Loosening submission guidelines is one of those few reasons.
But that’s not to say that Eliezer should have confidently expected a sudden increase just because he loosened submission criteria; the default prediction should have been that LW would continue on much as OB had been going. Lots of wikis never go anywhere, even if they let anyone edit.
Sometimes I actually catch myself reaching for the “edit this page” button when I find a typo or error on non-wiki websites.
And when I see a book with out-of-date information I grab a pencil and update it. :-)
So do I, except I actually have such a button. Here’s a bookmarklet (“Toggle Edit Page”) you can use to edit any page, until you navigate away from it: javascript:if (document.designMode == 'off') {document.designMode = 'on'} else {document.designMode = 'off'} void 0
Drag it to your bookmarks bar to use it.
Oh, Wikipedia—that reminds me—in the late 1990s, before Wikipedia, there was the “Free Online Dictionary of Computing”. The main difference between the two was that you needed to email the moderators to get your changes included. The results were even more extreme than OB vs LW.
FOLDOC was the basis of a number of entries I’ve worked on. I had no idea that it was participation based! I guess that explains why the entries were so scrawny...
When the FOLDOC maintainer saw Wikipedia, he promptly gave up and said “use my stuff, you’re already doing better”—this is why he released it under the GFDL, so Wikipedia could just take it.
Even after normalizing by the total number of visitors to FOLDOC and the total number of visitors to Wikipedia respectively?
Wikipedia didn’t get hundreds of millions of visitors until after it got so big.
I know it’s hard to believe, but when we started in 2001, it was a very tiny, very obscure website that people were commonly making fun of, and we were excited about any coverage we could get (and getting omg slashdotted—that was like news of the month).
It’s nice to see that even Eliezer can be shockingly stupid.
How is that good?
Well, it makes us feel better about ourselves? Pity about the whole FAI thing though...
Making us reap good feelings from downward social comparison.
Naughty brains, love those tricks.
We didn’t need that particular example for that—it’s just one that you immediately recognize as being shockingly stupid.
When we’re as shockingly stupid on something as the person we’re examining, we don’t notice their error, because it’s ours as well.
When you visit your friend, he says “help yourself from the kitchen”, which read literally would give you the ability to strip the kitchen bare. Obviously it doesn’t mean that. If the friend had spoken as they meant, “take a reasonably small amount of drinks and munchies for immediate use, and not the fancy stuff or tonight’s supper”, then he would be read as being under-generous.
I suspect people run similar “what subset of their generous offer ought I to take” calculations on any wide-open offer. Taking the whole offer would be greedy.
I don’t see that as a particularly compelling explanation. OB was not just EY & RH. There were a number of other contributors; even on a weekly basis there would be non-EY/RH posts. Just look at the long list of contributors prominent in the sidebar.
If the explanation is why people will only post comments and not articles because they don’t want to take too much of what is offered, then what greater item is on offer in LW that people ‘settle’ for merely submitting articles & comments?
“The Impact of Media Censorship: Evidence from a Randomized Field Experiment in China”, Chen & Yang 2018:
Something sort-of like that already exists.
Another possible strategy is to just “lose” a small percentage (or a large percentage) of such forms submitted, on the grounds that the additional effort of remembering that you were supposed to get a rebate and calling them on it would push more people under the effort threshold.
This has happened to me, I think. Filled out a rebate for a printer, mailed it in, and… nothing. I understand companies outsource rebate processing, and so it wouldn’t surprise me if that meant perverse/anti-consumer incentives much like one sees with professional arbitrators.
Did you call them on it?
No. I’d long since lost most of the documentation, and it wasn’t worth whatever effort it would have taken. What do you do, go to small claims court? That’s at least a wasted day.
I just stopped paying attention to anything to do with rebates and price things at the original full price. Fool me once etc.
I read an article on Cracked about suicide. It seems that even the smallest of inconveniences can make a big difference in the number of suicides, which is why guns are a terrible idea to have easily accessible in your house. Humans are lazy. The number of times I’ve postponed something, like moving the stuff I constantly have to walk over to reach my room, is way too high. I’d rather walk over a big crate 20 times a day than move the things.
Rebate schemes are not merely betting on consumer laziness; they are also a means of price discrimination. If you really need that $200, you’re more likely to fill out the form.
Have you read Nudge? Given that it’s the major popular source on the subject, it somehow seems incongruous to have a post with a major section on soft paternalism (they use ‘libertarian paternalism’) which doesn’t even mention it (or at least the name ‘Richard Thaler’). Save More Tomorrow is a real-life version of the pension plan you suggest (private, rather than government-run) which has had great success.
Libertarian Paternalism is almost exactly why I started to read the biases literature in the first place—it’s the application of knowledge about the way people think/behave to economics.
Paul Graham, The Other Road Ahead
This is somewhat untimely, but I just StumbledUpon this relevant article.
Another example of this might be a deadbolt on your front door; it’s sure not going to stop anyone hell bent on robbing you, but it makes it inconvenient enough that any ‘opportunist’ thieves won’t bother.
At any given time, we have many conflicting desires and motivations that are (generally) closely balanced. The desire to fit in socially and act morally (by not being a thief) vs the desire to maximise your own circumstances (by stealing someone’s stuff). The desire to maximise your circumstances (by filling out the rebate form) vs the desire to conserve energy (by being too lazy to do it). Perhaps trivial inconveniences tip the balance of these motivations enough so that in most people, one will be favored over the other. Adding a rebate form tips it just enough that your natural laziness wins out over your desire for money.
Since our natural desires tend to come from the dumb part of our brain, this has the effect of causing us to make non-optimal decisions.
This is not quite the same. It’s more like the joke about the hikers running from the bear: the first hiker shouts “we can’t outrun a bear!”, the second hiker shouts back: “I know, but I can outrun you!”. Opportunist thieves will look for an easier target down the street, not give up and go home.
It’s not exactly the same as the other ones Yvain mentioned, but the mechanics of the situation—raising the ‘price of admission’ so that the vast majority of people are tipped in a certain direction, ie: not robbing your house—are similar enough that the same forces are probably at work.
The deadbolt also adds another disincentive; breaking into a house and stealing (breaking and entering) is a significantly more severe crime than simple burglary
I’m reminded of Slavoj Zizek’s example of the “soft” coercion of sophisticated nations vs the “hard” overt coercion of communist states. He compares it to parenting. There are two ways parents might get children to visit their grandmother. The hard way is “Go visit your grandmother, it’s your responsibility, or you’ll get smacked.” The soft way is “Go visit your grandmother, she loves you so much. You never know when it’s too late and she misses you all the time. It’s your choice in the end, but a good grandson would visit his loving grandmother in old age.” It goes without saying that the latter has much more lasting power and a deep psychological impact. While the former might incite the outward action of visiting the grandmother, it does not actually attack the psychological state of the grandson (he will do it against his will because he doesn’t want to get punished).
There’s a gradual shading from “soft paternalist” solutions to bans. Making someone take an extra 5 seconds to get their choice would probably not be considered a ban by many people. What about a minute? An hour? A week? What if it takes an hour, and you’re a poor person who can’t afford to get child care for the hour, or pay the taxi fare to go to the banned store on the outskirts of town? What if buying and having the item increases your chance of being harassed by the police, without the item itself being a crime to buy or have? (This actually happens in some places with open carry of guns.) Or increases the chance of child services coming and requiring you to go through months of hearings to get your children back?
Also, “soft paternalist” solutions tend to slip. If you want to discourage people from doing something and forcing them to go out of their way to do it doesn’t discourage them enough to satisfy you, it’s very tempting and usually very politically feasible to say “well, that’s not working, so we need stronger measures like a real ban”.
His blog, The Frontal Cortex, is also interesting.
Trivial inconveniences are alive and kicking in digital piracy, where one always has to jump through hoops such as using obscure services, software, settings or procedures.
I suspect it is to fend off the least motivated users: numerous enough to bring attention, and most likely to expose the den in the wrong place.
Has there been any work done to quantify the effect as part of an overall cost calculation?
For example, let’s say that a certain object X is sold as a subscription (buy it for 1 week and pay A, buy it for 1 month and pay B, or buy it for a year and pay C, where A per day > B per day > C per day). There is obviously a certain amount of inconvenience each time I have to go through the order page.
If having access to X is necessary 2 days per month, it’s easy to calculate the yearly cost of each option (probably A is best):
A (covering those 2 days) × 12 months in a year
B (covering those 2 days) × 12 months in a year
C (covering all 2 days × 12 months at once)
Would adding the inconvenience cost be something like:
(A for those 2 days + transaction cost) × 12 months in a year + (cost of not having it on a day you want it because it’s too troublesome and you’re too tired) × p(being too tired and exhausted) × 12?
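One way to make that concrete (a sketch with invented prices, hassle costs, and probabilities, not a worked-out model): fold the per-transaction hassle and the chance of skipping a needed day into each option’s expected yearly cost.

```python
# Illustrative numbers only.
DAYS_NEEDED_PER_MONTH = 2
A, B, C = 10.0, 25.0, 200.0   # hypothetical prices for 1 week, 1 month, 1 year
HASSLE = 2.0                  # subjective cost of going through the order page once
P_SKIP = 0.15                 # chance you're too tired to bother on a needed day
MISSED_DAY_COST = 8.0         # value lost when you skip a day you wanted access

def yearly_cost(price: float, purchases_per_year: int) -> float:
    """Purchase price plus transaction hassle plus expected cost of skipped days."""
    cost = (price + HASSLE) * purchases_per_year
    if purchases_per_year > 1:
        # Only repeat-purchase options expose you to "too tired to order today".
        cost += P_SKIP * MISSED_DAY_COST * DAYS_NEEDED_PER_MONTH * 12
    return cost

print("Option A (weekly, bought each month): ", yearly_cost(A, 12))
print("Option B (monthly, bought each month):", yearly_cost(B, 12))
print("Option C (yearly, bought once):       ", yearly_cost(C, 1))
```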
Examples with credit vs cash may not be quite relevant to “trivial inconveniences”. It seems to me that the key here is that when one uses cash, they are physically giving away something material. With a credit card, you just type in a PIN code, or sign a receipt, or whatever, but that does not register in System 1 as giving away something. So, no cash—no System 1 intervention, thus less regret over bigger numbers.
Counter example: I spend cash more frivolously than I use a card. Cash, in my head, is money that I’ve already allotted out of my available funds, and I’m more likely to be tempted to purchase trivial things if I have a wallet full of cash.
“I don’t know of any unifying psychological theory that explains our problem with trivial inconveniences.”
I suspect it is simply the combination of uncertain outcome and an opportunity cost. If I’m surfing the web and meet a wall, why would I go through even a trivial effort, when I can just hit back and click the next link? Perhaps most don’t expect higher utility from reading a blocked page than another one.
I suspect it is a form of subtle “ancestral tribe police”.
Throwing trivial inconveniences at offenders is a good way to hint they are out of line, avoiding:
Direct confrontation, with risk of fuss and escalation.
Posing as authority, with risk of dedication or consequences.
Misjudging the tribe’s policy, as this kind of enforcement requires repetition and consensus.
Misunderstandings, as a dim offender will eventually just give up with no need to understand.
Please don’t comment.
It’s worse than that; I’ve thought of probably upward of a dozen article ideas that I felt momentary inspiration to write, and promptly decided not to because of the Omega-cursed post editor that takes so many more clicks to get to than a comment box and doesn’t accept the same markup.
The solution is to sidestep the technical inconveniences and write the text using whatever tools you are most comfortable with, and then, after the text is written, deal with the inconveniences. This way, the inconveniences are scheduled so that they don’t stand between you and the important part of the task. (I write my posts in LaTeX, for example.)
Huh? The ‘Create new article’ button appears on every page, so long as you’re logged in, so it’s only one click. Using different markup is annoying, but you don’t run into that until you’ve already written some stuff to add markup to.
My OS and browser are badly out of date, with the consequence that article submission doesn’t work at all.
I am wary of getting sucked in and ending up spending too much time on LessWrong, so this problem is failing to motivate me to get back on the upgrade treadmill. Basically, I’m unclear whether participating in LessWrong is a good use of my time, so I succumb to the temptations of superstition and treat the software problem as an ill-omen and blessing-in-disguise.
Perhaps the parallels to the Great Firewall of China are quite close. The effort required to solve a technical problem is certain, even if quite small. The pay-off is unknown. Lacking vital information, one ends up reluctant to hazard even a small stake.
I have no clue how the source editor works. Fortunately, there’s an “html” button there, and I know HTML. The HTML for some of the markup is not obvious, but you can always use the buttons to insert the markup, view the HTML, and then copy-and-paste the HTML into the document you’re working on in your favorite editor.
I wouldn’t edit an important document in a form on a page anyway—my fingers know Emacs, and control-w means “delete selection” in emacs and “delete this tab and my entire form” in Firefox, so I often accidentally delete tabs when editing.
I feel like I’ve seen this (or something related) talked about elsewhere using the phrase “activation energy”.
Also, without going into specific psychological forces, defaults matter a lot.
http://www.bloomberg.com/news/2013-04-09/check-here-to-tip-taxi-drivers-or-save-for-401-k-.html
http://danariely.com/2008/05/05/3-main-lessons-of-psychology/
Curious if the book is worth it after this http://www.thedailybeast.com/articles/2013/03/01/publisher-pulls-jonah-lehrer-s-how-we-decide-from-stores.html
I would imagine not; I imagine you’d get more value out of targeted LW posts or more technical summaries written by neuroscientists / as textbooks.
Your comments, ever calling for downvoting, are like dust specks in everyone’s eyes: trivial inconveniences adding up towards the threshold where a mob of rationalists, precommitted to going berserk, will cash it out as a one-person torture.
Don’t feed the trolls.
The trolls need to be publicly brutally murdered, so that their feeding ceases to be an issue.
Please don’t make any statements that could possibly be interpreted as threats or incitement to violence.
Is this a reference to evil lawyers, or is there another aspect to your argument against this obvious pun?
Ego-depletion? (Maybe not exactly right, but it seems to be in the ballpark at least...)
I don’t think rebates are strictly added to bet on laziness. It’s not always easy to change the price, so it offers some flexibility for later updates. Then there’s quarterly earnings and other such stuff to muddy up the situation. Hi Eliezer. Hi everyone.
I see no reason for this comment other than as some sort of test to see if you get voted down no matter what you say, if that’s the case then it’s not a very good test. If you absolutely have to do that sort of thing, at least try a new account or something.