Hold Off On Proposing Solutions
From Robyn Dawes’s Rational Choice in an Uncertain World.1 Bolding added.
Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem. Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested. Maier enacted an edict to enhance group problem solving: “Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.” It is easy to show that this edict works in contexts where there are objectively defined good solutions to problems.
Maier devised the following “role playing” experiment to demonstrate his point. Three employees of differing ability work on an assembly line. They rotate among three jobs that require different levels of ability, because the most able—who is also the most dominant—is strongly motivated to avoid boredom. In contrast, the least able worker, aware that he does not perform the more difficult jobs as well as the other two, has agreed to rotation because of the dominance of his able co-worker. An “efficiency expert” notes that if the most able employee were given the most difficult task and the least able the least difficult, productivity could be improved by 20%, and the expert recommends that the employees stop rotating. The three employees and . . . a fourth person designated to play the role of foreman are asked to discuss the expert’s recommendation. Some role-playing groups are given Maier’s edict not to discuss solutions until having discussed the problem thoroughly, while others are not. Those who are not given the edict immediately begin to argue about the importance of productivity versus worker autonomy and the avoidance of boredom. Groups presented with the edict have a much higher probability of arriving at the solution that the two more able workers rotate, while the least able one sticks to the least demanding job—a solution that yields a 19% increase in productivity.
I have often used this edict with groups I have led—particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately. While I have no objective criterion on which to judge the quality of the problem solving of the groups, Maier’s edict appears to foster better solutions to problems.
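Dawes reports only the percentages, not the underlying productivity figures, but a toy model with invented numbers makes the arithmetic concrete: full three-way rotation sacrifices output, fixing everyone on their best job gains 20%, and rotating only the two able workers gains 19%. Here is a minimal sketch, with all rates hypothetical and chosen purely so the totals match the quoted figures:

```python
# Hypothetical units-per-hour for each worker on each job. These numbers
# are invented for illustration; only the resulting percentages come from
# the quoted passage.
productivity = {
    "able":   {"hard": 50, "medium": 48, "easy": 40},
    "middle": {"hard": 40, "medium": 40, "easy": 32},
    "least":  {"hard": 8,  "medium": 12, "easy": 30},
}

def output(assignment):
    """Total output for one worker-to-job assignment."""
    return sum(productivity[worker][job] for worker, job in assignment.items())

# Status quo: all three workers cycle through the three jobs.
rotations = [
    {"able": "hard",   "middle": "medium", "least": "easy"},
    {"able": "medium", "middle": "easy",   "least": "hard"},
    {"able": "easy",   "middle": "hard",   "least": "medium"},
]
baseline = sum(output(a) for a in rotations) / 3   # 100.0

# Efficiency expert's fix: everyone stays on the job matching their ability.
expert = output(rotations[0])                      # 120 -> +20%

# Group's solution: the two able workers swap the hard and medium jobs;
# the least able worker keeps the easy one.
two_way = [rotations[0],
           {"able": "medium", "middle": "hard", "least": "easy"}]
group = sum(output(a) for a in two_way) / 2        # 119.0 -> +19%

print(f"expert: +{expert / baseline - 1:.0%}, group: +{group / baseline - 1:.0%}")
```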
This is so true it’s not even funny. And it gets worse and worse the tougher the problem becomes. Take artificial intelligence, for example. A surprising number of people I meet seem to know exactly how to build an artificial general intelligence, without, say, knowing how to build an optical character recognizer or a collaborative filtering system (much easier problems). And as for building an AI with a positive impact on the world—a Friendly AI, loosely speaking—why, that problem is so incredibly difficult that an actual majority resolve the whole issue within fifteen seconds.2 Give me a break.
This problem is by no means unique to AI. Physicists encounter plenty of nonphysicists with their own theories of physics, economists get to hear lots of amazing new theories of economics. If you’re an evolutionary biologist, anyone you meet can instantly solve any open problem in your field, usually by postulating group selection. Et cetera.
Maier’s advice echoes the principle of the bottom line, that the effectiveness of our decisions is determined only by whatever evidence and processing we did in first arriving at our decisions—after you write the bottom line, it is too late to write more reasons above. If you make your decision very early on, it will, in fact, be based on very little thought, no matter how many amazing arguments you come up with afterward.
And consider furthermore that we change our minds less often than we think: 24 people assigned an average 66% probability to the future choice they thought more probable, but only 1 in 24 actually chose the option thought less probable; had their stated confidence been calibrated, roughly eight of the twenty-four should have. Once you can guess what your answer will be, you have probably already decided. If you can guess your answer half a second after hearing the question, then you have half a second in which to be intelligent. It’s not a lot of time.
Traditional Rationality emphasizes falsification—the ability to relinquish an initial opinion when confronted by clear evidence against it. But once an idea gets into your head, it will probably require way too much evidence to get it out again. Worse, we don’t always have the luxury of overwhelming evidence.
I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer. To suspend, draw out, that tiny moment when we can’t yet guess what our answer will be; thus giving our intelligence a longer time in which to act.
Even half a minute would be an improvement over half a second.
1Robyn M. Dawes, Rational Choice in an Uncertain World, 1st ed., ed. Jerome Kagan (San Diego, CA: Harcourt Brace Jovanovich, 1988), 55–56.
2See Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk.”
What circles do you run in Eliezer? I meet a fair number of people who work in AI, (you can say I “work in AI” myself) and so far I can’t think of a single person who was sure of a way to build general intelligence. Is this attitude you observe a common one among people who aren’t actually doing AI research, but who think about AI?
Oh, I’m not talking about the mainstream AI field. Most of them know better. I mean, say, a random middle or upper-class individual in Silicon Valley, or a random user on an IRC channel.
However, the rule about instantly solving Friendly AI may apply even within the AI field, since it’s a more difficult problem.
It’s obvious how to build AI. You just add complexity. AIs need complexity. :-)
And some emergent properties for sure!
And a randomness-adder :)
I’ve just finished a 3-day training course on TRIZ (http://en.wikipedia.org/wiki/TRIZ), a problem-solving technique. One of the recurring themes throughout the course was what to do about all the solutions that come out even before you’ve figured out what the true problem is you’re trying to solve. The advice, which I found very helpful, was to write the solutions down (rather than be diverted by them or try to bat them away), use them to help examine the problem a bit more, and then carry on until you have enough information to make useful judgements about all the solutions you’ve generated. You need a sound way of formulating and exploring the problem space, as well as generating solutions; otherwise you’ll become too distracted by all the great solutions your brain is generating.
I just want to remark that it is far from obvious on a priori grounds that there is no elegant general AI algorithm that will solve all the other problems quite nicely. We’ve only learned this from the AI community’s continued failure to find such an algorithm or anything like it, and from the continued small successes of more specific, less elegant approaches.
AI’s need Emergence too. Make sure to add some of that to the soup ;^)
X3J13, the ANSI committee that standardised Common Lisp, had many problems to solve. Kent Pitman credits Larry Masinter with imposing the discipline of separating problem descriptions from proposed solutions, and gives insights into what that meant in practice in a post to comp.lang.lisp:
http://tinyurl.com/2hppgs
The general interest lies in the fact that the X3J13 Issues were all written up and are available online.
http://www.lispworks.com/documentation/HyperSpec/Front/X3J13Iss.htm
or
http://www.lisp.org/HyperSpec/FrontMatter/X3J13-Issues.html
so if you wish to study how this works there is a resource you can analyse.
I should confess that my interest has been in content not process. I have been reading these issues to learn Common Lisp. Are these pages really a useful resource for scholars wishing to study the separation of problem descriptions from proposed solutions? I don’t know.
I think this argument is flawed with respect to the more technology-oriented questions. Most people do not seriously claim to solve AI problems. What most people who are slightly educated in the field (like myself; I did an undergrad minor in AI, just very simple stuff) will do is suggest an approach they would try if they had to start working on it. Technical questions also usually yield to evidence very quickly whenever it matters, i.e., when someone starts burning money on an implementation. That is not to say no time and resources can be saved by using the maxim outlined here.
OTOH, the part about economists is valid, since most people have very strong ideas (usually wrong ones) about what will work, e.g., as policy. But then again, most people have no way of wasting (other people’s) resources based on these faulty ideas.
No, wait...
The latest of a number of really good posts from you that directly address the concern of this blog. You seem to be really starting to “grok” the terrifying reality of just how biased we are by the very nature of our thought processes, and coming up with good and useful steps to reduce those biases. Nicely done.
This post makes me wonder how much time passed for Eliezer between concluding that a technological singularity was a probable part of the future and deciding that creating an AGI was the best response, and likewise how much time passed between concluding that AGI Friendlyness would be a difficult problem and concluding that working on a theory of AGI Friendlyness was the best response.
Eliezer, I get the impression that your recent blog entries will make me a better rationalist, or failing that, a better inventor of software, organizational innovations, and social arrangements that will help people become better rationalists.
Good stuff, I say.
A surprising number of people I meet seem to know exactly how to build an Artificial General Intelligence, without, say, knowing how to play the guitar or juggle (much easier problems).
Yes, but while those two topics may be interesting to me, other “easy” problems (home and car maintenance, farming) are not so much, even though I recognize their importance. I’m not going to learn how to do everything basic before I learn something complicated. Am I?
Is an AI?
And these problems aren’t even easy, really. Like the person who knows how to make an AI, one imagines they “know” how to play guitar. There’s a competence level and there is a deeper mastery/creation level. I know three chords; I am not a master.
Unless that was your point.
My AI will play the guitar and juggle so I won’t have to.
This advice seems the opposite of “avoid analysis paralysis.” These may be bounding two extremes, neither of which is healthy. Or I may simply be wrong about the relationship.
Playing the guitar has human-aesthetic components so it’s a subproblem of Friendly AI, not just AGI. Building an AI that juggles is a valid challenge. As for trying to do it yourself, that quite misses the point. A mathematician may not be able to do high-speed mental arithmetic, but ought to know how to build a calculator.
I remember reading something much like this in I am right and you are wrong by Edward de Bono, who as I recall wrote that we should try to hold on to the “I haven’t made my mind up” state much longer than we do, and be prepared to say “I don’t know” much more often than we do (I think he even proposed a new word we could use to answer questions with that meant we don’t have a reason to think either way yet). This was about 15 years ago so I’ve probably mis-remembered.
I was a philosophy undergrad at the time, and when I asked my tutors about de Bono, they told me he was a vacuous ‘self-help’ nitwit I should ignore.
“My Ap distribution is rather flat.”
Hm, MADIRF? :)
Completely useless methods for building a general intelligence:
Method 1: Put some bacteria on a lifeless planet with liquid water. Wait until one evolves.
Method 2: Find a fertile human of each gender and induce them to mate. Wait nine months.
Luis Enrique, see “We Change Our Minds Less Often Than We Think” above; my interpretation is that people are trying to believe that they haven’t made up their minds, but they are wrong. That is, they seem to be implementing the (first) advice you mention. Maybe one can come up with more practical advice, but these are very difficult problems to fix, even if you understand the errors. On the other hand, the main part of the post is about a successful intervention.
Constant, regarding “analysis paralysis,” keep in mind there are often two separate questions:
1. How much time should I spend thinking about X?
2. Given I’m allocating T time to think about X, how should I divide up T among different thought subtasks?
Analysis Paralysis would generally be a problem with (1).
The current blog post applies more to (2). In the Maier example, the participants presumably know they have a sizable chunk of time blocked out, and the experimental group presumably gets better results not by spending more time overall, but because they reserved a good chunk of T to spend learning the problem, without committing right away to a solution.
The notion of delaying the proposal of ‘solutions’ as long as possible seems an excellent technique for group work, where stated proposals not only appear prematurely but become entangled with other, perhaps unproductive interpersonal dynamics, and where the energy of the deliberately ‘unmade up’ group mind can help the individual internally change position. The thorny bit for me, however, is the individual trying to ‘hold that non-thought’ - a challenge more or less equivalent to deliberately stopping, or even slowing, the thought process, which is meditation after all - something we mere mortals haven’t found all that easy so far. Indeed, some argue that many of us aren’t even aware there is an ‘internal dialogue’, let alone know how to stop it. In other words, it’s easy to say ‘don’t make up your mind’, but not so easy to enact.
It’s okay to think up solutions. You just have to write them down and refocus on the problem.
This is how a brainstorming session is supposed to work. The main goal of the facilitator is to keep the group criticism from spinning out of control. Usually, if someone proposes a solution, someone will shout out an objection to it. But we should still be thinking about the problem. Just write down the solution and shush the objection, then return to the problem.
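As a minimal sketch of that facilitator’s bookkeeping (the class and field names here are hypothetical, not from any particular brainstorming method): every contribution either deepens the problem description or gets parked as a solution for later, so proposed solutions never hijack the problem phase.

```python
# A sketch of the facilitation protocol described above. All names are
# hypothetical; the point is only that solutions are recorded, not
# debated, until the problem discussion is finished.
from dataclasses import dataclass, field

@dataclass
class BrainstormSession:
    problem_notes: list[str] = field(default_factory=list)
    parked_solutions: list[str] = field(default_factory=list)

    def contribute(self, text: str, is_solution: bool) -> None:
        if is_solution:
            # Write the solution down and shush the objection...
            self.parked_solutions.append(text)
        else:
            # ...then return to the problem.
            self.problem_notes.append(text)

session = BrainstormSession()
session.contribute("Output drops sharply every Monday morning.", is_solution=False)
session.contribute("Let's just hire a fourth worker.", is_solution=True)  # parked
```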
“...human mind is a lot like the human egg, and the human egg has a shut-off device. When one sperm gets in, it shuts down so the next one can’t get in. The human mind has a big tendency of the same sort.”
Charlie Munger
http://vinvesting.com/docs/munger/human_misjudgement.html
I agree. I really hate our notion that “you shouldn’t bring up a problem unless you have a solution”.
It is obvious to anyone who solves problems that we should analyze the problem before letting our minds move on to a solution.
The people advocating that might be confusing analysis with politics. It’s annoying when someone criticises your political idea but offers no alternative; it feels (sometimes accurately) that they’re disrupting the conversation but offering no input. So in a political debate, a ground rule might be “don’t criticise my solution if you don’t have a solution of your own”.
Rationally, however, that doesn’t excuse not assessing the solution. And it’s also important to remember that one potential solution is “do nothing” or “carry on doing what we were doing already”. So, in most cases, ANY new solution has an alternative solution to which it can be compared.
That’s like arguing food doesn’t taste good because I can’t prove it.
...what? How is the ‘question’ of whether food tastes good even related to this? It’s nothing like a problem needing a solution.
Please stop misusing the word edict.
Are you sure of that citation? I just looked for it in a copy of Dawes’s “Rational Choice in an Uncertain World” and again with the full text search in Google books
http://books.google.com/books/about/Rational_choice_in_an_uncertain_world.html?id=rcU1BsfrM2kC
and did not find any mention of Maier’s work. Also, though Maier does frequently use the “Changing Work Procedures” problem, I haven’t turned up any publication by him that matches this description. (Note that this failure is quite possibly mine; I haven’t done an exhaustive search).
-- MarkusQ
I’m thinking perhaps it is this book by Norman R.F. Maier:
Problem Solving Discussions and Conferences, published by McGraw-Hill Education (December 1963).
Does anyone know of a more recent journal article on the topic of ‘wait before proposing solutions’?
“why, that problem is so incredibly difficult that an actual majority resolve the whole issue within 15 seconds.”, “We Change Our Minds Less Often Than We Think” and “Cached Thoughts”...
Right. We don’t do a lot of “our” thinking ourselves. We aren’t individually sentient, not really. We don’t notice it, but the actual thinking is going on in our subcultures. The sad and funny thing is, we don’t even try to understand the cognition of our subcultures, when we research cognition.
I think I’m sentient. If you’re not sentient, I would surmise that you believe you’re lucky enough to be in a competent subculture—one self-aware enough to bring this realization to you.
Could one devise a series of experiments to show that individuals aren’t sentient, but “subcultures” are?
We do less thinking than we imagine, but we still think. However, I still agree (to a lesser extent) that (sub)cultures fix many thoughts of many people.
I find two possible meanings of “we” here, but the sentence is false in both senses:
“We” = all of humanity: The “cognition of subcultures” sounds like half Anthropology and half Psychology, and I imagine it has been researched.
“We” = individuals, rationalists: If your goal is to think by yourself, it is of minimal use to understand how culture “thinks” for you. Knowing how not to let culture think for you is enough.
This is one of the techniques I’ve always thought sounded really useful, but never had a clear enough picture of to implement for myself. Does anyone have an example (a transcript, or something of the like) of groups and/or individuals successfully discussing a problem for 5 or 10 minutes without proposing any solutions? I have trouble imagining what that would look like.
No transcript. But I do this professionally all the time. Clients frequently come to me with a design in mind for a solution, and it’s often important to back them up and get them to tell me what the problem actually is.
Usually, I start with the question “How would you be able to tell that this problem had been solved?” and repeat it two or twenty times in different words until someone actually tries to answer it.
On one occasion I handed a client my pen and asked whether it was a solution to their problem. They looked at me funny and said it wasn’t. I asked them how they knew that, and after a while one of them said “well, for one thing, it doesn’t do X” and I said “great!”, took the pen back, and wrote “has to do X”. Then I handed them the pen back and said “OK, suppose I add the ability to do X somehow to this pen. Is it a solution to your problem now?” and after a couple of iterations they got it and started actually telling me what their problem was.
The thing that used to astonish me is how often the proposed solution utterly fails to even address the problem articulated by the same person who proposed the solution. I’ve come to expect it.
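A minimal sketch of what that pen exercise accumulates (the criteria and capability names here are hypothetical): each “it doesn’t do X” objection becomes an explicit, testable requirement, and a candidate counts as a solution only when every requirement passes.

```python
# Hypothetical illustration of the "has to do X" criteria collected
# during the pen exercise. None of these names come from a real project.
from types import SimpleNamespace

criteria = {}

def add_criterion(name, test):
    """Record one "has to do X" requirement as a predicate."""
    criteria[name] = test

def solves_problem(candidate):
    """A candidate solves the problem only if every criterion passes."""
    return all(test(candidate) for test in criteria.values())

# "Well, for one thing, it doesn't do X" -> "has to do X":
add_criterion("has to do X", lambda c: "X" in c.capabilities)
add_criterion("has to do Y", lambda c: "Y" in c.capabilities)

pen = SimpleNamespace(capabilities=set())
print(solves_problem(pen))  # False: the pen meets neither requirement
```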
Bleakly funny. Thanks for that. I usually retreat (probably with an angry or pained look on my face) when I notice I’m not really being heard. But sometimes it’s better to play and explore.
(nods) It’s kind of critical in a systems engineering role.
Only vaguely relatedly, one of my favorite lines ever came from my first professional mentor, about a design he was proposing: “It does what you expect, but you have to expect the right things.”
What a true and hilarious depiction of life. I have the exact same problem doing web development. Because the people giving me projects are not IT people, they tend to come up with totally dysfunctional solutions. Yet they almost always start by telling me how they want the problem solved. I have to dig to find out what the problem is first, but I just ask them “What result do you want?” or “What purpose do you want this to serve?” and say “I can’t make it serve the purpose without knowing what the purpose is.” That works for me, without me having to ask them 20 times. Then again, maybe you’re doing projects in radically different contexts all the time, or with completely different people who vary in their ability to see the point in answering that question. I work with a limited number of people and contexts, all of which I understand pretty well, so my problem-clarification process is pretty simple.
In my experience as a programmer (who wore all the software-related hats), I found that even when I understood the domain quite well, inquired about the purpose multiple times, and wrote little stories illustrating my interpretation of the users’ desires, I could walk away from early usability tests with major changes to the project.
In one particularly memorable instance, I got all the way through making paper prototypes and making pretend e-mails. Then, I convinced my manager to try out the system. The process started in a pre-existing e-mail package and then routed stuff to the proposed custom software. He sat down, opened up the pretend e-mail, and started to save the attached files. At that point, we discovered that there was no need for the custom software and killed the entire project.
Yeah, it’s different people and a different context every time.
I have attempted using this in more casual decision making situations, and the response I get is nearly always something along the lines of “Okay, just let me propose this one solution, we won’t get attached to it or anything, just hear me out...”
What do you do in this situation? Let them speak? Ask them to write down their solution, to be discussed later?
Oops… Couldn’t resist proposing solutions.
To be perfectly honest, at the time I simply planted my face on the table in front of me a few times. I was at a dinner party with friends of my mother’s; I would have sounded extremely condescending otherwise.
Ah yes, status mismatch in a not very rational crowd. Not much you can do there.
There’s a comment already asking for more modern articles/citations/research on this topic, but in case someone wants to run with this idea in real life, you can find a summary of Norman Maier’s research at http://www.iaf-world.org/Libraries/IAF_Journals/Assets_and_Liabilities_in_Group_Problem_Solving.sflb.ashx
The article was written by Norman Maier and published in Psychological Review in 1967; it was reprinted in Group Facilitation in 1999. For those of you with access to well-funded libraries, the citations are:
Psychological Review, Volume 74, Number 4, Pages 239-249.
Group Facilitation: A Research and Applications Journal, Volume 1, Number 1, Winter 1999, Pages 45-51.
And, to be on really solid ground, you’d want the actual source article(s) that the above review refers to. They are:
Hoffman, L. R., & Maier, N. R. F. The use of group decision to resolve a problem of fairness. Personnel Psychology, 1959, 12, 545-559.
Maier, N. R. F. Screening solutions to upgrade quality: A new approach to problem solving under conditions of uncertainty. Journal of Psychology, 1960, 49, 217-231.
Maier, N. R. F. Problem solving discussions and conferences: Leadership methods and skills. New York: McGraw-Hill, 1963.
Maier, N. R. F., & Hayes, J. J. Creative management. New York: Wiley, 1962.
Solem, A. R. Almost anything I can do, we can do better. Personnel Administration, 1965, 28, 6-16.
From which edition of the book does the reference originate? At first glance it does not seem to be included in the second edition and I’m curious to read more about it.
https://intelligence.org/files/AIPosNegFactor.pdf
is the missing citation.
I think the most important lesson is not to get attached to early ideas. Instead of banning early ideas, if anything comes up, you can just write it down and set it aside. I find this easier than a full ban, because it’s an easier move for my brain to make.
(I have a similar problem with rationalist taboo. Don’t ban words; instead, require people to locally define their terms for the duration of the conversation. It solves the same problem, and it isn’t a ban on thought or speech.)
The other important lesson of the post is that the early discussion should focus on increasing your shared understanding of the problem rather than on generating ideas. I.e., it’s okay for ideas to come up (and when they do, you save them for later), but generating ideas is not the goal in the beginning.
Hm, thinking about it, I think the mechanism of classical brainstorming (where you think of as many ideas as you can up front) is to exhaust all the trivial, easy-to-think-of ideas as fast as possible, so that you’re forced to think deeper to come up with new ones. I guess that’s another way to do it. But I think this method is both ineffective and unreliable, since it only works through a secondary effect.
It is interesting to compare the advice in this post with the Game Tree of Alignment, or the Builder/Breaker Methodology, also described here. I’ve seen variants of this exercise popping up in lots of places in the AI safety community. Some of them are probably inspired by each other, but I’m pretty sure (80%) that this method has been invented several times independently.
I think that GTA/BBM works for the same reason the advice in the post works. It also solves the problem of getting attached, and as you keep breaking your ideas and exploring new territory, you expand your understanding of the problem. I think an active ingredient in this method is that the people playing this game know that alignment is hard and go in expecting their first several ideas to be terrible. You know the exercise is about noticing the flaws in your plans and learning from your mistakes. Without this attitude, I don’t think it would work very well.