Is That Your True Rejection?
It happens every now and then that someone encounters some of my transhumanist-side beliefs—as opposed to my ideas having to do with human rationality—strange, exotic-sounding ideas like superintelligence and Friendly AI. And the one rejects them.
If the one is called upon to explain the rejection, not uncommonly the one says, “Why should I believe anything Yudkowsky says? He doesn’t have a PhD!”
And occasionally someone else, hearing, says, “Oh, you should get a PhD, so that people will listen to you.” Or this advice may even be offered by the same one who expressed disbelief, saying, “Come back when you have a PhD.”
Now, there are good and bad reasons to get a PhD. This is one of the bad ones.
There are many reasons why someone might actually have an initial adverse reaction to transhumanist theses. Most are matters of pattern recognition, rather than verbal thought: the thesis calls to mind an associated category like “strange weird idea” or “science fiction” or “end-of-the-world cult” or “overenthusiastic youth.”1 Immediately, at the speed of perception, the idea is rejected.
If someone afterward says, “Why not?” this launches a search for justification, but the search won’t necessarily hit on the true reason. By “true reason,” I don’t mean the best reason that could be offered. Rather, I mean whichever causes were decisive as a matter of historical fact, at the very first moment the rejection occurred.
Instead, the search for justification hits on the justifying-sounding fact, “This speaker does not have a PhD.” But I also don’t have a PhD when I talk about human rationality, so why is the same objection not raised there?
More to the point, if I had a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say. Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point.
They would say, “Why should I believe you? You’re just some guy with a PhD! There are lots of those. Come back when you’re well-known in your field and tenured at a major university.”
But do people actually believe arbitrary professors at Harvard who say weird things? Of course not.
If you’re saying things that sound wrong to a novice, as opposed to just rattling off magical-sounding technobabble about leptical quark braids in N + 2 dimensions; and if the hearer is a stranger, unfamiliar with you personally and unfamiliar with the subject matter of your field; then I suspect that the point at which the average person will actually start to grant credence overriding their initial impression, purely because of academic credentials, is somewhere around the Nobel Laureate level. If that. Roughly, you need whatever level of academic credential qualifies as “beyond the mundane.”
This is more or less what happened to Eric Drexler, as far as I can tell. He presented his vision of nanotechnology, and people said, “Where are the technical details?” or “Come back when you have a PhD!” And Eric Drexler spent six years writing up technical details and got his PhD under Marvin Minsky for doing it. And Nanosystems is a great book. But did the same people who said, “Come back when you have a PhD,” actually change their minds at all about molecular nanotechnology? Not so far as I ever heard.
This might be an important thing for young businesses and new-minted consultants to keep in mind—that what your failed prospects tell you is the reason for rejection may not make the real difference; and you should ponder that carefully before spending huge efforts. If the venture capitalist says, “If only your sales were growing a little faster!” or if the potential customer says, “It seems good, but you don’t have feature X,” that may not be the true rejection. Fixing it may, or may not, change anything.
And it would also be something to keep in mind during disagreements. Robin Hanson and I share a belief that two rationalists should not agree to disagree: they should not have common knowledge of epistemic disagreement unless something is very wrong.2
I suspect that, in general, if two rationalists set out to resolve a disagreement that persisted past the first exchange, they should expect to find that the true sources of the disagreement are either hard to communicate, or hard to expose. E.g.:
Uncommon, but well-supported, scientific knowledge or math;
Long inferential distances;
Hard-to-verbalize intuitions, perhaps stemming from specific visualizations;
Zeitgeists inherited from a profession (that may have good reason for it);
Patterns perceptually recognized from experience;
Sheer habits of thought;
Emotional commitments to believing in a particular outcome;
Fear that a past mistake could be disproved;
Deep self-deception for the sake of pride or other personal benefits.
If the matter were one in which all the true rejections could be easily laid on the table, the disagreement would probably be so straightforward to resolve that it would never have lasted past the first meeting.
“Is this my true rejection?” is something that both disagreers should surely be asking themselves, to make things easier on the other person. However, attempts to directly, publicly psychoanalyze the other may cause the conversation to degenerate very fast, from what I’ve seen.
Still—“Is that your true rejection?” should be fair game for Disagreers to humbly ask, if there’s any productive way to pursue that sub-issue. Maybe the rule could be that you can openly ask, “Is that simple straightforward-sounding reason your true rejection, or does it come from intuition-X or professional-zeitgeist-Y?” while the more embarrassing possibilities lower on the table are left to the Other’s conscience, as their own responsibility to handle.
1See “Science as Attire” in Map and Territory.
2See Hal Finney, “Agreeing to Agree,” Overcoming Bias (blog), 2006, http://www.overcomingbias.com/2006/12/agreeing_to_agr.html.
There need not be just one “true objection”; there can be many factors that together lead to an estimate. Whether you have a Ph.D., whether folks with Ph.D.s have reviewed your claims, and what they say can certainly be relevant. Also remember that you should care lots more about the opinions of experts who could build on and endorse your work than about average-Joe opinions. Very few things ever convince average folks of anything unusual; target a narrower audience.
And more than one of them may be considered decisive by itself.
I object, and am going to reply in a way that I’m pretty sure will get me lots of negative points, but I’m going to do so because I see that you’ve gotten lots of positive points:
Why are you trying to convince people? What makes you think anyone can be convinced of anything? Do you really want people to be able to convince you? Do you really think you can be convinced of anything? Trying to convince people is counter to trying to encourage unbiased thought.
The experts who make that false objection are not really capable of building on his work; especially not at the time of that objection, because they are obviously holding a belief that hinders their ability to think rationally. They certainly can endorse his work, but is that really desired if they are hindered by their beliefs? (It might be desired; I don’t actually know.) What makes you think average Joes can’t/won’t build on or endorse his work? Why would that not be desired?
I understand you’re coming from a statistical perspective, but it seems to me to be based on a false premise.
Immediate association: pick-up artists know well that when a girl rejects you, she often doesn’t know the true reason and has to deceive herself. You could recruit some rationalists among PUAs. They wholeheartedly share your sentiment that “rational agents must WIN”, and have accumulated many cynical but useful insights about human mating behaviour.
Most transhumanist ideas fall under the category of “not even wrong.” Drexler’s Nanosystems is ignored because it’s a work of “speculative engineering” that doesn’t address any of the questions a chemist would pose (i.e., regarding synthesis). It’s a non-event. It shows that you can make fancy molecular structures under certain computational models. SI is similar. What do you expect a scientist to say about SI? Sure, they can’t disprove the notion, but there’s nothing for them to discuss either. The transhumanist community has a tendency to argue for its positions along the lines of “you can’t prove this isn’t possible” which is completely uninteresting from a practical viewpoint.
If I were going to unpack “you should get a PhD,” I’d say the intention is along the lines of: you should attempt to tackle something tractable before you start speculating on Big Ideas. If you had a PhD, maybe you’d be more cautious. If you had a PhD, maybe you’d be able to step outside the incestuous milieu of pop-sci musings you find yourself trapped in. There’s two things you get from a formal education: one is broad, you’re exposed to a variety of subject matter that you’re unlikely to encounter as an autodidact; the other is specific, you’re forced to focus on problems you’d likely dismiss as trivial as an autodidact. Both offer strong correctives to preconceptions.
As for why people are less likely to express the same concern when the topic is rationality: there’s a long tradition of disrespect for formal education when it comes to dispensing advice. Your discussions of rationality usually have the format of sage advice rather than scientific analysis. Nobody cares if Dr. Phil is a real doctor.
“There’s two things you get from a formal education: one is broad, you’re exposed to a variety of subject matter that you’re unlikely to encounter as an autodidact;”
As someone who has a Ph.D., I have to disagree here. Most of my own breadth of knowledge has come from pursuing topics on my own initiative outside of the classroom, simply because they interested me or because they seemed likely to help me solve some problem I was working on. In fact, as a grad student, most of the things I needed to learn weren’t being taught in any of the classes available to me.
The choice isn’t between being an autodidact or getting a Ph.D.; I don’t think you can really earn the latter unless you have the skills of the former.
But being a grad student gave you the need to learn them.
Or a common factor caused both.
That sounds like it’s less “Once you get a Ph.D., I’ll believe you,” than “Once you get a Ph.D., you’ll stop believing that.”
Of course, those aren’t so different: if I expect that getting a Ph. D would make one less likely to believe X, then believing X after getting a Ph.D is a stronger signal than simply believing X.
Which transhumanist ideas are “not even wrong”?
And do you mean simply ‘not well specified enough’? Or more like ‘unfalsifiable’?
You also seem to be implying that scientists cannot discuss topics outside of their field, or even outside its current reach.
My philosophy on language is that people can generally discuss anything. For any words that we have heard (and indeed, many we haven’t), we have some clues as to their meaning, e.g. based on the context in which they’ve been used and similarity to other words.
Also, would you consider being cautious an inherently good thing?
Finally, from my experience as a Masters student in AI, many people are happy to give opinions on transhumanism, it’s just that many of those opinions are negative.
“Which transhumanist ideas are “not even wrong”?”
Technological Singularity, for example (as defined in Wikipedia). In my view, it is just an atheistic version of the Rapture or The End Of The World As We Know It endemic in various cults, and equally likely.
The reason for that is that recursive self-improvement is not possible, since it requires perfect self-knowledge and self-understanding. In reality, an AI will be a black box to itself, like our brains are black boxes to ourselves.
More precisely, my claim is that any mind at any level of complexity is insufficient to understand itself. It is possible for a more advanced mind to understand a simpler mind, but that obviously does not help very much in the context of direct self-improvement.
An AI with any self-preservation instincts would be as likely to willingly perform direct self-modification on its mind as you would be to get stabbed through the eye socket with an icepick.
So any AI improvement would have to be done the old way. The slow way. No fast takeoff. No intelligence explosion. No Singularity.
Our brains are mysterious to us not simply because they’re our brains and no one can fully understand themselves, but because our brains are the result of millions of years of evolutionary kludges and because they’re made out of hard-to-probe meat. We are baffled by chimpanzee brains or even rabbit brains in many of the same ways as we’re baffled by human brains.
Imagine an intelligent agent whose thinking machinery is designed differently from ours. It’s cleanly and explicitly divided into modules. It comes with source code and comments and documentation and even, in some cases, correctness proofs. Maybe there are some mysterious black boxes; they come with labels saying “Mysterious Black Box #115. Neural network trained to do X. Empirically appears to do X reliably. Other components assume only that it does X within such-and-such parameters.” Its hardware is made out of (notionally) discrete components with precise specifications, and comes with some analysis to show that if the low-level components meet the spec then the overall function of the hardware should be as documented.
Suppose that’s your brain. You might, I guess, be reluctant to experiment on it in any way in place, but you might feel quite comfortable changing EXPLICIT_FACT_STORAGE_SIZE from 4GB to 8GB, or reimplementing the hardware on a new semiconductor substrate you’ve designed that lets every component run at twice the speed while remaining within the appropriately-scaled specifications, and making a new instance. If it causes disaster, you can probably tell; if not, you’ve got a New Smarter You up and running.
Of course, maybe you couldn’t tell if some such change caused disasters of a sufficiently subtle kind. That’s a reasonable concern. But this isn’t an ice-pick-through-the-eye-socket sort of concern, and it isn’t the sort of concern that makes it obvious that “recursive self-improvement is not possible”.
While I agree with the overall thrust of your comment, this brought to mind an old anecdote...
Such things are why I said “maybe you couldn’t tell if some such change caused disasters of a sufficiently subtle kind”.
Vladimir, I don’t quite think that’s the “narrower audience” Robin is talking about...
Robin, see the Post Scriptum. I would be willing to get a PhD thesis if it went by the old rules and the old meaning of “Prove you can make an original, significant contribution to human knowledge and that you’ve mastered an existing field”, rather than, “This credential shows you have spent X number of years in a building.” (This particular theory would be hard enough to write up that I may not get around to it if a PhD credential isn’t at stake.)
Eliezer,
See poke’s comment above (which is so on the nose, it actually inspired me to register). Then consider the following.
You will never get a PhD in the manner you propose, because that would fulfill only a part of the purpose of a PhD. The number of years spent in the building can be (and in too many cases is) wasted time—but if things are done in a proper manner, this time (which can be only three or four years) is critical.
For science PhDs specifically, the idea isn’t to just come up with something novel and write it up. The idea is to go into the field with a question that you don’t have an answer for, not yet. To find ways to collect data, and then to actually collect it. To build intricate, detailed models that answer your question precisely and completely, fitting all the available data. To design experiments specifically so you can test your models. And finally, to watch these models completely and utterly fail, nine times out of ten.
They won’t fail because you missed something while building them. They will fail because you could only test them properly after making them. If you just built the model that fit everything, and then never tested it with specific experiments… you could spend a very long time convinced that you have found the truth. Several iterations of this process make people far less willing to extrapolate beyond the available data—certainly not nearly as wildly and as far as transhumanists do.
A good philosophy PhD can do the same, but it is much more difficult to get an optimal result.
Don’t take this the wrong way. I respect and admire your achievements, and I think getting a PhD would be a waste of time for you at this point. But it is entirely true that getting one—a real one—would increase the acceptance of your thoughts and ideas. Not (just) because a PhD would grant you prestige, but because your thoughts and ideas would actually get better.
Which finally brings us to the reason for the dichotomy you noted in your post. Your rationality musings are accepted because a) they are inspiring, and b) they can be actionable and provide solid feedback. A person can read them, try the ideas out, and see if those ideas work for them. Transhumanism, alas, falls under the “half-baked” category; and as for the willingness to follow wildly speculative tangents from poorly constrained models… well, in order to have any weight there, you had better either show concrete, practical results… or have credentials that show you have experienced significant model failure in the past. Repeatedly. And painfully. With significant cost to yourself.
As a current grad student myself, I could not disagree with poke’s comment and this comment more. I work for a very respected adviser in computer vision from a very prestigious university. The reason I was accepted to this lab is that I am an NDSEG fellow. Many other qualified people lost out because my attendance here frees up a lot of my adviser’s money for more students. In the meantime, I have a lot of pretty worthwhile ideas in physical vision and theories of semantic visual representations. However, I spend most of my days building Python GUI widgets for a group of collaborating sociologists. They collect really mundane data by annotating videos and no off-the-shelf stuff does quite what they want… so guess who gets to do that grunt work for a summer? Grad students.
You should really read the good Economist article The Disposable Academic. Graduate studentships are business acquisitions in all but the most theoretical fields. Advisers want the most non-linear things imaginable. For example, I am a pure math guy, with heavy emphasis on machine learning methods and probability theory. Yet my day job is seriously creative-energy-draining Python programming. The programming isn’t even related to original math; it’s just a novel thing for some sociologists to use. My adviser doesn’t want me to split my time between reading background theory, etc. He wants me to develop this code because it makes him look good in front of collaborators.
Academia is mostly a bad, bad place. I think Eliezer’s desire to circumvent all the crap of grad school is totally right. The old way was a real, true apprenticeship. It isn’t like that anymore. Engineering is especially notorious for this. Minimize the number of tenured positions, and balloon the number of grad students in order to farm out the work that profs don’t want to do. Almost all of these people will just go through the motions, do mundane bullshit, and write a thesis not really worth the paper it gets printed on. The few who don’t follow this route usually just take it upon themselves to go and read on their own and become experts across many different disciplines and then make interconnections between previously independent fields or results. Eliezer has certainly done this with discussions of Newcomb-like problems and friendly A.I. from a philosophical perspective. He’s done more honest academic work here than almost anyone I know in academia.
When I used to work at MIT Lincoln Laboratory, a colleague of mine had a great saying about grad school: “Grad school is 99% about putting your ass in the chair.” It is indeed about spending X years in building Y and getting Z publications. Pure mathematics is somewhat of an exception at elite schools. To boot, people don’t take you any more seriously when you finish. It continues on as a political/social process where you must win grants and provide widgets for collaborators and funding agencies.
There is much more to be said, and of course, I am a grad student, so I must feel that at least for myself it is a good decision despite all of the issues. Well, that’s not quite true. Part of it is that as an undergrad, all of my professors just paid attention to the fact that I was energetic and attentive when they talked about topics they liked, and it created a cumulative jazzed-up feeling for those topics. I expected grad school to be very different than it really is. I should also add that I have been in two different PhD programs, both Ivy League (I can talk more specifically in private). I transferred because the adviser situation at the first school was pretty grim. There was only one faculty member doing things close to what I wanted to study, and he was such a famous name that he had not the time of day for me. For example, I once scheduled a meeting with him to discuss possible research and when I arrived he let me know it was going to be a jointly held meeting with his current doctoral student. While that student wrote on a chalkboard, I got to ask questions. When the student was finished, this prof addressed the student for various intervals of time and then came back to me. This sort of pedantic garbage is the rule rather than the exception.
I find myself having to constantly fight the urge to go home from a long, wrist-achingly terrible day of mundane Python programming and just mentally check out. Instead, I read about stuff here on LW, or I read physics books, or now A.I. complexity books. Hopefully my thesis will be a contribution that I enjoy and find interesting. Even better if it helps move science along in a “meaningful way,” but the standard PhD process is absolutely not going to let that happen unless I actively intervene and do things all on my own.
Anyone considering a PhD should consider this heavily. My experience is that it is nothing like the description above or poke’s comment. I think Bostrom should advise a thesis with Eliezer because it would be a great addition to philosophy, and I don’t want Eliezer burdened with nuisance coursework requirements. We should be unyoking uncommonly smart people when we find them, not forcing them to jump through extra hoops just for the pedantic sake of standardization.
Ok, so—I hear what you’re saying, but a) that is not the way it’s supposed to be, and b) you are missing the point.
First, a), even in the current academia, you are in a bad position. If I were you, I would switch mentors or programs ASAP.
I understand where you’re coming from perfectly. I had a very similar experience: I spent three years in a failed PhD (the lab I was working in went under at the same time as the department I was in), and I ended up getting a MS instead. But even in that position, which was all tedious gruntwork, I understood the hypothesis and had some input. I switched to a different field, and a different mentor, where most of my work was still tedious, but it was driven by ideas I came to while working with my adviser.
If your position is, as it seems to be, even worse—that you have NO input whatsoever, and are purely cheap labor—then you should switch mentors immediately. If you don’t, you might finish your PhD with a great deal of bitterness, but it is much more likely that you will simply burn out and drop out.
Which brings me to b). As I said above, it would be pointless for Eliezer to go to grad school now. Even at best, it contains a lot of tedious, repetitive work. But the essential point stands: in a poorly constrained area such as transhumanism, grand ideas are not enough. That is where a PhD does have a function, and does have a reason.
Actually, my mentor is one of the nicest guys around, is a good manager, offers good advice, and has a consistent record of producing successful students. It’s just that almost no grad student gets to have real input in what they are doing. If you do have that, consider yourself lucky, because the dozens of grad students that I know aren’t in a position like that. I just had a meeting today where my adviser talked to me about having to balance my time between “whatever needs doing” (for the utility of our whole research group rather than just my own dissertation) and doing my own reading/research. His idea (shared by many faculty members) is that for a few years at the front end of the PhD, you mostly do about 80% general utility work and infrastructure work, just to build experience, write code, get involved… then after you get into some publications a few years later, the roles switch and you shift to more like 80% writing and doing your own thing (research). The problem is that if you’re a passionate student with good ideas, then those first few years of bullshit infrastructure work are a complete waste of time. The run-of-the-mill PhD student (who generally is not really all that smart or hardworking) might do fine just being told what to program for a few years, but the really intellectually curious people will die inside. Plus, for those really smart PhD students out there, I think it’s in my best interest that they be cleared of all nuisance responsibilities and allowed to just work.
As for point (b), my reasoning would be like this: “in a poorly constrained area such as transhumanism, grand ideas are not enough, and that’s exactly why a PhD is irrelevant and I have no reason to think that someone has better or more useful or more important ideas than Eliezer just because that person has been published in peer reviewed journals.”
http://www.caseyresearch.com/cdd/doug-casey-education
Robin: Of course a PhD in “The Voodoo Sciences” isn’t going to help convince anybody competent of much. I am actually more impressed with some of the fiction I vaguely remember you writing for Pournelle’s “Endless Frontier” collections than a lot of what I’ve read recently here.
Poke: “formal education: one is broad, you’re exposed to a variety of subject matter that you’re unlikely to encounter as an autodidact”
I used to spend a lot of time around the Engineering Library at the University of Maryland, College Park before I moved away. In more than ten years I have never met anyone there as widely read as myself. This also brings to mind a quote from G Harry Stine’s “The Hopeful Future” about how the self-taught are usually deficient in some areas of their learning—maybe so, but the self-taught will simply extend their knowledge when a lack appears to them. My lacks are from a lack of focus not breadth. Everybody I have ever met at the University was the other way around, narrow and all too often not even aware of how narrow they were.
Perhaps you are marginally ahead of your time, Eliezer, and the young individuals who will flesh out the theory are still traipsing about in diapers. In which case, either being a billionaire or a PhD makes it more likely you can become their mentor. I’d do the former if you have a choice.
If you started going to college and actually worked at it a bit, you could have skipped to Ph.D. work if you wanted to; I did. I skipped all of the B.S. and M.S. work, straight to Ph.D. But if the math that you’ve posted is any sign of the state of your knowledge, I don’t hold much hope of that happening, since you can’t seem to do basic derivatives correctly. When I started skipping classes, for example skipping all of calculus and linear algebra to go to differential equations, I had a partially finished manuscript on solving differential equations that I had been working on for a while. Now the question that logically pops up is: do I have a Ph.D. now? No, I am taking a break from that to start a company or three if I can.
Can’t do basic derivatives? Seriously?!? I’m for kicking the troll out. His bragging about mediocre mathematical accomplishments isn’t informative or entertaining to us readers.
billswift wrote:
Yes, this point is key to the topic at hand, as well as to the problem of meaningful growth of any intelligent agent, regardless of its substrate and facility for (recursive) improvement. But in this particular forum, due to the particular biases which tend to predominate among those whose very nature tends to enforce relatively narrow (albeit deep) scope of interaction, the emphasis should be not on “will simply extend” but on “when a lack appears.”
In this forum, and others like it, we characteristically fail to distinguish between the relative ease of learning from the already abstracted explicit and latent regularities in our environment and the fundamentally hard (and increasingly harder) problem of extracting novelty of pragmatic value from an exponentially expanding space of possibilities.
Therein lies the problem—and the opportunity—of increasingly effective agency within an environment of even more rapidly increasing uncertainty. There never was or will be safety or certainty in any ultimate sense, from the point of view of any (necessarily subjective) agent. So let us each embrace this aspect of reality and strive, not for safety but for meaningful growth.
Eliezer, I’m sure if you complete your friendly AI design, there will be multiple honorary PhDs to follow.
Sorry about the length of the post, there was just a lot to say.
I believe disagreements are easier to unpack if we stop presuming they are about difference in belief. Posts like this seem to confirm my own experience that the strongest factor in convincing people of something is not any notion of truth or plausibility but whether there are common allegiances with the other side. This seems to explain a number of puzzles of disagreement, including: (list incomplete to save space)
Why do people who aren’t sure about Eliezer’s posts about physics/comp science/biology etc. wonder what famous names have to say? (hypothesis: you could create names by creating a person who reliably agreed with some subset of public speakers. You could stretch allegiances by stretching the subset after the followers are established)
Why is Ray Kurzweil more convincing than code and credentials? (hypothesis: PZ Myers’s support would be really convincing to Scienceblog readers)
Why do good grammar and spelling help convince? (hypothesis: with younger crowds and faster communication, poor spelling/grammar would help convince)
Why do some lies work better than others? (hypothesis: psychics are more likely to be believed when they agree with the person they are working with)
Why do uninformed supporters of Barack Obama rationalize their reasons for supporting him? (hypothesis: X’s supporters would do it too, where X is a mainstream public figure, but would not do it if questioned by someone perceived to be on the Other Side)
When people say “Why don’t you have a PhD?” they have executed a search for a piece of evidence that, if someone had it, would help convince them that he was on Their Side. However (like someone who objects to Objectivism and reads Ayn Rand in response to the reaction he gets), even when he returns with a PhD, they still don’t wear his colors. In a conversation with a person on the side of science who has not yet heard of it, a creationist mentioning that there is a $250,000 prize for anyone who can give convincing proof of evolution meets an automatic (and in my case, curiously confused) skepticism, not from knowing the specifics of the prize but rather from a gut reaction: “Kent Hovind is a creationist, he must be doing something wrong.”
The same thing applies with SIAI. It meets automatic skepticism, and people want evidence that the organization’s ideas are on their side. Famous names work where code and credentials don’t, because the famous names share strong common allegiances. Code and credentials appeal to the “does it work?” mentality, which would convince a lot of people allied to engineering and science if not for the fact that these are also signals used by impostors. The support of PZ Myers, on the other hand, would be an incredibly difficult signal to fake, making it a strong signal that would communicate a great deal of common allegiance with people who are scientifically minded and internet savvy. Same with a positive mention in the New York Times for liberally minded people.
I have spent years in the Amazon Basin perfecting the art of run-on sentences and hubris it helps remind others of my shining intellect it also helps me find attractive women who love the smell of rich leather furnishings and old books.
Between bedding supermodels a new one each night, I have developed a scientific thesis that supersedes your talk of Solomonoff and Kolmogorov and any other Russian name you can throw at me. Here are a random snippet of conclusions a supposedly intelligent person will arrive having been graced by my mathematical superpowers:
Everything you thought you knew about Probability is wrong.
Existence is MADE of Existence.
Einstien didn’t know this, but slowly struggled toward my genius insight.
They mocked me when I called myself a modern day Galileo, but like Bean I will come back after they have gone soft.
I can off the tip of my rather distinguished salt-and-pepper beard name at least 108 other conclusions that would startle lesser minds such as the John BAEZ the very devil himself or Adolf Hitler I have really lost my patience with you ElIzer.
They called me mad when I reinvented calculus! They will call me mad no longer oh I have to go make the Sweaty Wildebeest with a delicately frowning Victoria’s Secret model.
time cube!
I wish I could upvote this post back into the positive.
(It seems pretty obvious to me that it is a direct satire of the previous post by a similar username. What, no love for sarcasm?)
Poe’s Law, anyone?
I thought it was obvious.
Maybe I’m just spoiled by the consistently good comments on LessWrong, and don’t realize that there actually are comments that bad and cliched.
Given that particular misspelling of Einstein and the mention of Baez, it was nearly impossible that BrandNameThinker hadn’t heard of the crackpot index… But what’s obvious for me (and you) needn’t be obvious for someone else. (Or maybe the downvoters did get the joke but just didn’t find it funny.)
I had to look it up, but I definitely agree. Especially considering how quickly the karma changes reversed after I edited in that footnote.
Crap. Will the moderator delete posts like that one, which appear to be so off the mark?
Eliezer - ‘I would be willing to get a PhD thesis if it went by the old rules and the old meaning of “Prove you can make an original, significant contribution to human knowledge and that you’ve mastered an existing field”, rather than, “This credential shows you have spent X number of years in a building.”’
British and Australasian universities don’t require any coursework for their PhDs, just the thesis. If you think your work is good enough, write to Alan Hajek at ANU and see if he’d be willing to give it a look.
Ignoring the highly unlikely slurs about your calculus ability:
However, if any professor out there wants to let me come in and just do a PhD in analytic philosophy—just write the thesis and defend it—then I have, for my own use, worked out a general and mathematically elegant theory of Newcomblike decision problems. I think it would make a fine PhD thesis, and it is ready to be written—if anyone has the power to let me do things the old-fashioned way.
British universities? That’s the traditional place to do that sort of thing. Oxbridge.
Specifically with regard to the apparent persistent disagreement between you and Robin, none of those things explain it. You guys could just take turns doing nothing but calling out your estimates on the issue in question (for example, the probability of a hard takeoff AI this century), and you should reach agreement within a few rounds. The actual reasoning behind your opinions has no bearing whatsoever on your ability to reach agreement (or more precisely, on your inability to maintain disagreement).
Now, this is assuming that you both are honest and rational, and view each other that way. Exposing your reasoning may give one or the other of you grounds to question these attributes, and that may explain a persistent disagreement.
It is also useful to discuss your reasoning, in case your goal is not simply to reach agreement, but to get the right answer. It is possible that this is the real explanation behind your apparent disagreement. You might be able to reach agreement relatively quickly by fiat, but one or both of you would still be left puzzled about how things could be so different from what your seemingly very valid reasoning led you to expect. You would still want to hash over the issues and talk things out.
Robin earlier posted, “since this topic is important, my focus here is on gaining a better understanding of it”. I read this as suggesting that his goal is not merely to resolve the disagreement, and perhaps not to particularly pursue the disagreement aspects at all. He also pointed out, “you do not know that I have not changed my opinion since learning of Eliezer’s opinion, and I do not assume that he has not changed his opinion.” This is consistent with the possibility that there is no disagreement at all, and that Robin and possibly Eliezer have changed their views enough that they substantially agree.
Robin has also argued that there is no reason for agreement to limit vigorous dissension and debate about competing views. In effect, people would act as devil’s advocates, advancing ideas and positions that they thought were probably wrong, but which still deserved a hearing. It’s possible that he has come to share Eliezer’s position yet continues to challenge it along the lines proposed in that posting.
One thing that bothers me, as an observer who is more interested in the nature of disagreement than the future course of humanity and its descendants, is that Robin and Eliezer have not taken more opportunity to clarify these matters and to lay out the time course of their disagreement more clearly. It would help too for them to periodically estimate how likely they think the other is to be behaving as a rational, honest “Bayesian wannabe.” As two of the most notable wannabes around, both very familiar with the disagreement theorems, both highly intelligent and rational, this is a terrible missed opportunity. I understand that Robin’s goal may be as stated, to air the issues, but I don’t see why they can’t simultaneously serve the community by shedding light on the nature of this disagreement.
Mike
“Can’t do basic derivatives? Seriously?!? I’m for kicking the troll out. His bragging about mediocre mathematical accomplishments isn’t informative or entertaining to us readers.”
Did you look at his derivatives? “dy/dt = F(y) = Ay, whose solution is y = e^(At).” How is e^(At) a solution of dy/dt = Ay? Basic derivatives 101: d/dx e^x = e^x.
“Solving dy/dt = e^y yields y = -ln(C - t)”
Again, dy/dt = e^y does not equal -ln(C - t), unless e is not the irrational constant that it normally is. Even if that is the case, the solution is still wrong… again, refer to a basic derivative table...
So I am a troll because I point out errors? Ok, fine, then I am a troll and will never come back. That’s interesting; so you must be a saint for thinking these errors are the truth.
I apologize that I am not amusing you, but I am not a court jester like yourself.
Mediocre accomplishments, hmm… well, did you skip all of your bachelor’s work straight to grad school in mathematics? I would bet not. Don’t talk of mediocrity unless you can prove yourself above that standard. So I believe your credentials would be needed to prove that, or some of your own superior accomplishments if you have any? I await eagerly.
I believe that GenericThinker is interpreting ‘dy/dt = e^y’ as instructions to take the derivative of e^y (with respect to y or t I’m not sure). This is an extreme (and extremely arrogant) case of a general problem in lower-level and non-major math education; students learn to treat ‘=’ as a generic verb (although here it’s more of a preposition) instead of a symbol with a specific meaning. I work hard to beat this out of my algebra students, even though it usually won’t trip them up; but my calculus students (if they haven’t gotten it beaten out) do trip over it, much like GenericThinker is doing here.
And with that lovely exhibition of math talent, combined with the assertion that he skipped straight to grad school in mathematics, I do hereby request GenericThinker to cease and desist from further commenting on Overcoming Bias.
Generic,
The y appears on both sides of the equation, so these are differential equations. To avoid confusion, re-write as:
(1) (d/dt) F(t) = A*F(t)
(2) (d/dt) F(t) = e^F(t)
Now plug e^At into (1) and -ln(C-t) into (2), and verify that they satisfy the condition.
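A quick check of both solutions, written out in the same notation (with A a constant coefficient and C an arbitrary constant): if y = e^(At), then dy/dt = A*e^(At) = A*y, satisfying (1); if y = -ln(C - t), then dy/dt = 1/(C - t) = e^(-ln(C - t)) = e^y, satisfying (2).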
You could recruit some rationalists among PUAs. They wholeheartedly share your sentiment that “rational agents must WIN”
Interesting. As a reasonable approximation, approaching women with confidence==one-boxing on Newcomb’s problem. Eliezer’s posts have increased my credence that the latter is correct, although it hasn’t helped me with the former.
@Brian
I think Alec Greven may be your man. Or perhaps like Lucy van Pelt I should set up office hours offering Love Advice, 5 cents?
“You could recruit some rationalists among PUAs. They wholeheartedly share your sentiment that ‘rational agents must WIN’”
You have. We do. And yes, they must.
“Drexler’s Nanosystems is ignored because it’s a work of “speculative engineering” that doesn’t address any of the questions a chemist would pose (i.e., regarding synthesis).”
It doesn’t address any of the questions a chemist would pose after reading Nanosystems.
“As a reasonable approximation, approaching women with confidence==one-boxing on Newcomb’s problem.”
Interesting. Although I would say “approaching women with confidence is an instance of a class of problems that Newcomb’s problem is supposed to represent but does not.” Newcomb’s problem presents you with a situation in which the laws of causality are broken, and then asks you to reason out a solution assuming the laws of causality are not broken.
It does? Assuming a deterministic universe (which would seem to be necessary for Omega to predict with 100% certainty whether someone 1-boxes or 2-boxes), then it isn’t the act of taking one or two boxes that causes the $1M to be present or absent; rather, both those events share a common cause (namely, whatever circumstances were present to cause the subject to pick their Newcomb strategy).
If Omega can’t predict with certainty, then two-boxing is possibly the right answer, depending on exactly how good Omega is at “reading” people and how much money is potentially in the boxes. If this really happened to me, I’ve got no idea how I would decide. I would probably think some kind of trick was going on and, after checking all the fine print for loopholes, one-box for the million with some kind of witnesses around to grab the old man behind the curtain and have him charged with fraud if the money wasn’t there. If he had some legal escape hatch I could spot, then I’d probably one-box the $1k, since I’d think it unlikely someone would give up $999,000 if they could get away with it.
Agree that picking up women isn’t Newcomblike, though. I get how being nervous can be like one-boxing the $1k, and being honestly confident might be like one-boxing the $1M (maybe?), but I have no idea what corresponds to two-boxing.
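For concreteness, here is the rough expected-value arithmetic for the imperfect-predictor case, assuming purely for illustration that the predictor is right with the same probability p whichever way you choose, and the usual $1,000 / $1,000,000 amounts: one-boxing gives EV = p * $1,000,000, while two-boxing gives EV = $1,000 + (1 - p) * $1,000,000. One-boxing comes out ahead whenever p * $1,000,000 > $1,000 + (1 - p) * $1,000,000, i.e. whenever p > 0.5005, so on this naive evidential framing even a predictor barely better than a coin flip favors one-boxing; a causal decision theorist would of course dispute the framing itself.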
Daniel, I knew it :-)
Phil, you can look at it another way: the commonality is that to win you have to make yourself believe a demonstrably false statement.
“However, if any professor out there wants to let me come in and just do a PhD in analytic philosophy—just write the thesis and defend it—then I have, for my own use, worked out a general and mathematically elegant theory of Newcomblike decision problems. I think it would make a fine PhD thesis, and it is ready to be written—if anyone has the power to let me do things the old-fashioned way.”
I think this is a good idea for you. But don’t be surprised if finding the right one takes more work than an occasional bleg. And I do recommend getting it at Harvard or the equivalent. And if I’m not mistaken, you may still have to do a bachelor’s and a master’s?
My university does not require a master’s degree to get a PhD.
If I have to do a bachelors degree, I expect that I can pick up an accredited degree quickly at that university that lets you test out of everything (I think it’s called University of Phoenix these days?). No Masters, though, unless there’s an org that will let me test out of that.
The rule of thumb here is pretty simple: I’m happy to take tests, I’m not willing to sit in a building for two years solely in order to get a piece of paper which indicates primarily that I sat in a building for two years.
If you don’t have a bachelor’s degree, that makes it rather unlikely that you could get a PhD. I agree with folks that you shouldn’t bother—if you are right, you’ll get your honorary degrees and Nobel prizes, and if not, then not. (I know I am replying to a five-year-old comment).
I also think you are too quick to dismiss the point of getting these degrees, since you in fact have no experience in what that involves.
That’s what John Gilmore did, among other cool things. http://www.toad.com/gnu/ http://papersplease.org/id.html
John Gilmore writes about his cut-rate degree, here: http://reason.com/archives/2005/04/01/letters
I would really like it if EY had the ability (money, engineering team, etc.) to begin producing brain-like hardware, replete with mirror neurons. I think that’s the only way that the sociopaths (prosecutors, judges, politicians, bureaucrats, police) in the coercive sector will be challenged in a meaningful way. I think that it might be smarter for him to relocate MIRI to South Korea, because there’s more of a culture of robotics there, and robotics is necessary for feedback regarding real-world problems.
These desires of mine aren’t tyrannical. I wouldn’t cling to them or try to prescribe actions for EY if he didn’t share the same desires. I’m just stating what I would do, if I were suddenly to occupy a leadership position at MIRI, or some similarly-capable organization.
In many ways, this is a deep decision that is based on difficult to quantify innate value judgments. Hawkins was fascinated with brains, and logically, pursued brain design because attempts at getting intelligent, brainlike responses with technology have been so weak in the past, even given approximately adequate computing power. Deep-learning done by Schmidhuber has also recently been productive, given that all computational hardware is enabling far more intelligence, even from systems that were not optimal.
This leads me to believe there will be “many kinds of minds” in the coming singularity. Some, of course, will be superior to others in terms of ability to restructure their environments. Let’s hope they aren’t sociopathic, or “coercive-human-directed.” Remember, even intelligent people can act as sociopaths given good intentions, but the wrong (coercive collectivist) ideas.
if you know ahead of time that you’re going to be given this decision, either pre-commit to one-boxing, or try to game the superintelligence. Neither option is irrational; it doesn’t take any fancy math; one-boxing is positing that your committing to one-boxing has a direct causal effect on what will be in the boxes.
if you didn’t know ahead of time that you’d be given this decision, choose both boxes.
You can’t, once the boxes are on the ground, decide to one-box and think that’s going to change the past. That’s not the real world, and describing the problem in a way that makes it seem convincing that choosing to one-box actually CAN change the past, is just spinning fantasies.
This is one of a class of apparent paradoxes that arise only because people posit situations that can’t actually happen in our universe. Like the ultraviolet catastrophe, or being able to pick any point from a continuum.
Phil, your commitment ahead of time is your own private business, your own cognitive ritual. What you need in order to determine the past in the right way is that you are known to perform a certain action in the end. Whether you arrange it so that you’ll perform that action by making a prior commitment and then having to choose the action because of the penalty, or simply by following a timeless decision theory, so that you don’t need to bother with prior commitments outside of your cognitive algorithm, is irrelevant. If you are known to follow timeless decision theory, it’s just as good as if you are known to have made a commitment. You could say that embracing timeless decision theory is a global meta-commitment that makes you act as if you had made a commitment in all the situations where you benefit from having made one.
For example, one-off prisoner’s dilemma can be resolved to mutual cooperation by both players making a commitment of huge negative utility in case the other player cooperates and they defect. This commitment doesn’t fire in the real world, since its presence makes both players cooperate. The presence of this commitment leads to a better outcome for both players, so it’s always rational to make it. Exactly the same effect can be achieved by both players following the timeless decision theory, only without the need to bother arranging that commitment in the environment (which could prove ruinous in the case of advanced intelligence, since the commitment is essentially an intelligence playing an adversarial game in the environment against itself).
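A worked toy version, with illustrative payoffs of my own choosing: take the usual prisoner’s dilemma with (Cooperate, Cooperate) = (3, 3), (Defect, Cooperate) = (5, 0), (Cooperate, Defect) = (0, 5), (Defect, Defect) = (1, 1). If each player commits to pay a penalty of 10 whenever they defect while the other cooperates, then defecting against a cooperator yields 5 - 10 = -5 instead of 5, so cooperating is now each player’s best response to cooperation, mutual cooperation becomes stable, and the penalty never actually has to be paid.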
Vladimir, I understand the PD and similar cases. I’m just saying that the Newcomb paradox is not actually a member of that class. Any agent faced with either version—being told ahead of time that they will face the Predictor, or being told only once the boxes are on the ground—has a simple choice to make; there’s no paradox and no PD-like situation. It’s a puzzle only if you believe that there really is backwards causality.
Phil, you said “if you didn’t know ahead of time that you’d be given this decision, choose both boxes”, which is a wrong answer. You didn’t know, but the predictor knew what you would do, and if you one-box, that is a property of yours that the predictor knew, and you’ll have your reward as a result.
The important part is what the predictor knows about your action, not even what you yourself know about your action, and it doesn’t matter how you convince the predictor. If the predictor just calculates your final action by physical simulation or whatnot, you don’t need anything else to convince it; you just need to make the right action. Commitment is a way of convincing, either yourself to make the necessary choice, or your opponent of the fact that you’ll make that choice. In our current real world, a person usually can’t just say “I promise”, without any expected penalty for lying, however implicit, and expect to be trusted, which makes Newcomb’s paradox counterintuitive, and which makes cooperating in one-off prisoner’s dilemma without pre-commitment unrealistic. But it’s a technical problem of communication, or of rationality, nothing more. If the predictor can verify that you’ll one-box (after you understand the rules of the game, yadda yadda), your property of one-boxing is communicated, and that’s all it takes.
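A minimal Python sketch of the “predictor just calculates your final action” case (the names one_boxer, two_boxer, and play are mine, purely illustrative): the prediction and the eventual choice agree not because information flows backward, but because both are outputs of the same decision procedure.

# Toy model: the predictor fills the boxes by running the agent's own
# decision procedure in advance; nothing travels backward in time.

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def play(agent):
    prediction = agent()                      # predictor simulates the agent
    box_b = 1_000_000 if prediction == "one-box" else 0
    box_a = 1_000
    choice = agent()                          # the agent then chooses for real
    return box_b if choice == "one-box" else box_a + box_b

print(play(one_boxer))   # 1000000
print(play(two_boxer))   # 1000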
“You didn’t know, but the predictor knew what you would do, and if you one-box, that is a property of yours that the predictor knew, and you’ll have your reward as a result.”
No. That makes sense only if you believe that causality can work backwards. It can’t.
“If the predictor can verify that you’ll one-box (after you understand the rules of the game, yadda yadda), your property of one-boxing is communicated, and that’s all it takes.”
Your property of one-boxing can’t be communicated backwards in time.
We could get bogged down in discussions of free will; I am assuming free will exists, since arguing about the choice to make doesn’t make sense unless free will exists. Maybe the Predictor is always right. Maybe, in this imaginary universe, rationalists are screwed. I don’t care; I don’t claim that rationality is always the best policy in alternate universes where causality doesn’t hold and 2+2=5.
What if I’ve decided I’m going to choose based on a coin flip? Is the Predictor still going to be right? (If you say “yes”, then I’m not going to argue with you anymore on this topic; because that would be arguing about how to apply rules that work in this universe in a different universe.)
Presumably the Predictor would be smart enough to calculate the result of that coin flip.
If it was an actually random bit, then I don’t know. In the real universe, as you require, the Predictor would have a 50% chance of being right. Probably, if the Predictor thought you might do that, it wouldn’t offer you the challenge, in order to maintain its reputation for omniscience.
Compare: communicating the property of a timer that it will ring one hour in the future (that is, the timer works according to certain principles that result in it ringing in the future) vs. communicating from the future the fact that the timer rang. If you can run a precise physical simulation of a coin, you can predict how it’ll land. Usually, you can’t do that. Not every difficult-seeming prediction requires things like simulation of physical laws; abstractions can be very powerful as well.
Vladimir, I don’t mean to diss you; but I am running out of weekend, and think it’s better for me to not reply than to reply carelessly. I don’t think I can do much more than repeat myself anyway.
Declining to one-box because of a lack of precommitment is a mistake. Backwards causality is irrelevant. Prediction based off psychological or physical simulation is sufficient.
Gaming a superintelligence with dice achieves little. You’re here to make money, not prove him wrong. Expect him to either give you a probabilistic payoff or count a probabilistic decision as two-boxing. Giving pedantic answers requires a more formal description; it doesn’t change anything.
If I’m ever stuck in a prison with a rational, competitive fellow prisoner, it’d be really damn handy to be omniscient and have my buddy know it.
I may be wrong about Newcomb’s paradox.
It’s perplexing: This seems like a logic problem, and I expect to make progress on logic problems using logic. I would expect reading an explanation to be more helpful than having my subconscious mull over a logic problem. But instead, the first time I read it, I couldn’t understand it properly because I was not framing the problem properly. Only after I suddenly understood the problem better, without consciously thinking about it, was I able to go back, re-read this, and understand it.
I’m glad that helped.
I don’t think it did help, though. I think I failed to comprehend it. I didn’t file it away and think about it; I completely missed the point. Later, my subconscious somehow changed gears so that I was able to go back and comprehend it. But communication failed.
Buddhists say that great truths can’t be communicated; they have to be experienced, only after which you can understand the communication. This was something like that. Discouraging.
From my experience, the most productive way to solve a problem on which I’m stuck (that is, hours of looking at it produce no new insight or promising directions of future investigation) is to keep it in the background for a long time, while avoiding forgetting it by recalling what it’s about and visualizing its different aspects and related conjectures from time to time. And sure enough, in a few days or weeks, triggered by some essentially unrelated cue, a little insight comes that allows me to develop a new line of thought. When there are several such problems in the background, it’s more or less efficient.
Inferential distance can make communication a problem worthy of this kind of reflectively intractable insight.
Phil—Changing your mind on previous public commitments is hard work. Respect!
It’s a fascinating problem. I’m hoping Eliezer gets a chance to write that thesis of his. It’s even more interesting once you see people applying Newcomblike reasoning behaviorally. A whole lot more of human behavior started making sense after I grasped the Newcomb problem.
Phil, I think that’s how logic (or math) normally works. You make progress on logic problems by using logic, but understanding another’s solution usually feels completely different to me, completely binary.
Also, it’s hard to say that your unconscious wasn’t working on it. In particular, I don’t know if communicating logic to me is as binary as it feels, whether I go through a search of complete dead ends, or whether intermediate progress is made but not reported.
Going back to this post, a lot of things that puzzled us then are way more obvious now. But one angle remained unexplored for some reason. Here it is: if people catch on that you got a PhD just to persuade them, your PhD won’t help you persuade them. As Robin said, people often don’t have “true rejections” on the object level because they don’t understand the object level. Instead they feel (correctly) that controversial scientific arguments should not be sold directly to the public, and apply multiple heuristics on the meta level. And the positive heuristic “this guy has a PhD” can be beaten thoroughly by the negative heuristic “this guy got himself a PhD in order to sell me something”. Drexler’s PhD does much less to persuade me than an actual lab report demonstrating that some small part of the vision works.
So… study your field out of curiosity, build a community of curious researchers, make genuine progress, publish the results, and you will have the impact you wanted. Adding more trappings of respectability to your advocacy operation doesn’t work, and IMO it’s good that it doesn’t work.
Perhaps it should, but the problem is that answering this question is one of the big problems in salesmanship: working out the customer’s true obstacle to wanting to buy from you. Salesmen would love to be able to get a true answer to this question—and some even ask it directly—but people tend to receive this as manipulation: finding out their inner thoughts for purposes of getting their money. This feeling happens when selling someone on an idea as much as it does when selling an item for money.
Vladimir Stepney also notes that they may not know the answer themselves. (Salesmen are aware of this problem too.)
As such, asking the question directly—as you note in your “if”—may end up being counterproductive. If this question occurs to you, you’re already in sales mode. If you really want to make that sale, this is a question you have to divine the answer to yourself.
Here’s something I’d love to put into an entire article, but can’t because my karma’s bad (see my other comment on this thread):
Many people make the false assumption that the scientific method starts with the hypothesis. They think: first hypothesize, then observe, then make a theory from the collection of hypotheses.
The reality is quite the opposite. The first step of the scientific method is observation. Any hypothesis formed before observation will only diminish the pool of possible observations. Second is building a theory. Along the way, many things are observed without any hypothesis guiding the observation, needing only a question and a desire to explore for the answer. Last is using the theory to create hypotheses, which are then used to confirm, alter, or reject the theory.
The exploration phase is where interesting new observations come up for the first time, and everyone who nears the end of this phase wants to talk to people about it, because it’s really cool! Lots of amazing stuff is observed before any real theory can be made. Lots of plausible, valid, and important ideas come up which should indeed be discussed, as they are relevant to many subjects.
However, the people who take the wrong path and make the false assumption that the scientific method starts with the hypothesis are afraid to take a step out of their reality. Taking in new information as an observation with no hypothesis or theory attached, which is what is available to them from the exploration stage, would be taking a step out of their reality.
The response you get of “why should I believe X? He doesn’t have a PhD” is fundamentally the same as the response “prove it. Until you prove it, why should I listen to you?”. These people don’t want to observe or theorize. They don’t want to explore. They don’t want to work toward the goal I want to work toward. They want to stay in their world until they have a reason to involve something new in their world. The result of that isn’t an improvement, either, because they still have the same problem of living in their own world.
This objection is a fallacy coming from false assumptions. The thing I have observed to be a/the source of this fallacy is the false start of the scientific method cycle.
FYI, “pseudo-scientists” have a related problem where they are either unable or unwilling to leave the exploration phase (often because the theory they are exploring is wrong, but sometimes because they’re just looking in the wrong direction). Religions have another related problem where they are unable or unwilling to enter the exploration phase; which is not entirely unlike the problem I described about normal science, which is why many people call science a religion.
This isn’t related to the entire post so much as it is a response to the problem with the scientific method. The scientific method does start with a hypothesis in relation to individual experiments. The hypothesis starts with general observations made from previous experiments or just some kind of general observation.
If we assume complete ignorance about NaCl (sodium chloride), but previously observed that pure sodium (Na) explodes in the presence of water, we might decide to devise an experiment to see what happens when we place other sodium-containing compounds in water. Our hypothesis might be something like: everything that contains sodium explodes in water. It is not necessary to write down this hypothesis, because the hypotheses we generally make are made in our heads when we are not in some kind of laboratory setting. By that I mean we constantly make hypotheses about subjects before we observe the results of our ‘experiments.’
Anyway, we toss a block of sodium chloride in a bucket of water and observe that an explosion does not immediately follow. We had previous observations from former experiments, or general observations, to suggest that stuff with sodium will explode, but our observational evidence suggests that our previously made hypothesis is not true.
We make plenty of observations before making hypotheses, but we always make some sort of hypothesis before making some sort of observation when starting an experiment of any manner.
Most people don’t even go as far as to make a hypothesis there. It’s arguable that every time someone forms a question, it is the same as forming a hypothesis; purely because questions can be reworded as hypotheses. However, in the case of someone who is just exploring, they would not go so far as to hypothesize.
It’s normal for the person to say something like “I wonder if it happens with all sodium compounds” or “I wonder if there are any sodium compounds that explode as well”. But in these cases, there is no basis and no reason to form a hypothesis. One could argue that the person is making a hypothesis like “all sodium compounds explode in water”; but the person doesn’t care. The person could just as easily make the hypothesis “no sodium compounds explode in water”. And there’s no reason to make either of these, or any hypothesis at all, because no theory has been formed that can be tested.
And further, making a hypothesis like this limits the amount of new information that can come in from these experiments. The information is now limited to “whether or not the substance explodes”, when there are plenty of other reactions that can happen. The person who makes this hypothesis is liable to miss small bubbles appearing. That is exactly what you don’t want when exploring, when trying to observe as much as possible so as to build a theory.
The point is that the person in your example is not doing a hypothesis experiment, the person is doing an exploration experiment. Unless a theory exists, there’s no basis for choosing any hypothesis at all.
Yet, let’s say then that the person discovered some cool stuff and started to build a theory. He wants to tell you about his in-progress theory. Obviously he hasn’t done any hypothesis experiments, because hypotheses haven’t mattered yet. He tells you about his observations, and his conjectures. Many people, in response to this, say “can you prove it? Why should I believe you?”. To which he has no answer, because he has nothing to prove yet. All of his observations are just observations, and he has no solid theories. Because he has no theories, any temporary hypotheses he makes continuously jump around, and to an outside observer have no coherence or meaning. Any attempt at proving something will prove futile, and will be a waste of time, purely because there is nothing to prove.
The higher-level or higher-class version of this response is: “what are your credentials? Why should I believe you?”.
In this way my comment does relate to the entire post. Often, there is no true objection. Often, the objection is merely that someone is mentally lazy and doesn’t want to think or explore. Often, the objection is that I haven’t formed a complete theory yet, only a list of observations and conjectures, so there’s nothing the person can believe in. The difference in opinion there is that I want to work with them and believe in nothing, and they want to work on their own and believe in something. It’s not that they object to the theory or observations or conjectures; they just object to thinking about it.
i’d say: you don’t have a phd, therefore you’re not qualified to judge whether or not yudkowsky should have a phd.
How does getting a Ph. D. (even in a related field) give one the qualification to judge?
it doesn’t, it’s a jocular reductio ad absurdum based on the irrationality of the underlying premise. 9_9 and what field are we talking here, education? phdology?
If we’re commenting on the same article, a Ph. D. in (in decreasing order of relevance) AI, Machine Learning, CS, or mathematics...
naw, i’m talking about what field qualifies you to judge how much not having a phd disqualifies you from judging statements on that subject.
When I was in Sales, we called this “finding their true objection.”
Basically, if someone says “Well, I don’t want it unless it has X!” You say “What if I could provide you with X?”
So if someone says “Come back when you have a PhD!” You say “What if I could provide you with PhDs who believe the same idea?” If they then say “There are tons of PhDs who believe crazy things!” then you say “Then what else would I need to convince you?”
Usually, between them dismissing their own criteria and the number of ideas they can bring forward, you can bring it down to about three things. I’ve seen five, but that was a hard case. Those aren’t hard and fast rules: the rule is to make sure you get them ALL, and make it specific, something like:
“So, if I can get you a published book by a PhD, respected in a field relevant to X, AND I can provide you with a for-profit organization that is working to accomplish goals relevant to X, AND I can make a flower appear out of my ear (or whatever), THEN you will admit you were wrong and change your view?”
And if you’re REALLY invested, you should have been taking notes, and get them to ‘initial’ (not sign, people hate signing but will often initial: it feels like a smaller pain) the list. Consistency bias is also your friend here: if they say it aloud, they will probably also initial it.
And then, if you hand it all to them on a silver platter, with the right presentation you can get a “you were right, I was wrong” out of them. (If you screw it up, you can get begrudging acceptance. Occasionally hostility if you really botch it. But that’s life in the interpersonal world.)
It sounds like a lot, but oddly, it isn’t usually very hard to get people to change their minds this way. It takes some time, so you’d better be invested in making that change. If you know what to expect, handing it all early helps. But if you really want it to happen, this way works.
“Come back when you have a PhD” typically doesn’t mean “if you have a PhD, I’ll believe you”, it means “if you have a PhD, I’ll assign a high enough probability to you having something worthwhile to say that it’s worth even listening to you and doing the work to figure out where you might be wrong”.
Also, it can’t be answered by “I could show you some PhDs who agree with me”. As there are a lot of PhDs in the world, finding one out of that large number who agrees with you doesn’t update the probability of being right by much compared to combining PhD and agreement in a specific person named in advance (such as yourself).
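To put rough numbers on that (entirely made up, just to illustrate the selection effect): if even a false fringe claim would attract at least one supportive PhD out of a million with probability 0.99, while a true one would with probability close to 1, then “some PhD somewhere agrees” carries a likelihood ratio of roughly 1/0.99 ≈ 1.01, i.e. almost no update. By contrast, if a specific expert named in advance would endorse the claim with probability 0.5 if it were true and only 0.05 if it were false, that endorsement carries a likelihood ratio of 10.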
Furthermore, human beings are neither unsafe genies nor stupid AIs, and nobody will take kindly to you trying to act like one by giving someone something that matches their literal request but any human being with common sense can figure out isn’t what they meant.
My request would probably be something like “come back when you have a PhD and get your observations published in a peer-reviewed journal”.
It would be awesome to see a transcript of a back and forth between Eliezer and Robin, or any two people debating, where they both provided this info!
Well, yeah.
You spend a lot of your time worrying about how to get an AI to operate within the interests of lesser beings. You also seem to spend a certain amount of time laying out Schelling fences around “dark side tactics”. It seems to me that these are closely related processes.
As you have said, “people are crazy, the world is mad”. We are not operating with a population of rational decision makers. Even the subset of the population that appears to be rational decision makers often aren’t; signaling rationality convincingly to other non-rational agents is often easier than actually being rational.
It seems that often, the things that you label “dark side” tactics are the only tools available for rational beings to extract resources out of non-rational animals, even if those resources are to be used explicitly for the non-rational animals’ benefit. (I’ll leave it for one of the Reactionaries to present the argument that you shouldn’t bother worrying about their benefit; I happen to share most of your ethical goals, so I doubt I could do the argument justice.)
Over the past five years, have you found that it’s become easier to navigate the wider social arena under your own rules, or are your goals and assertions still running into the same credibility problems? (I’m hoping it’s the former, as I could use a hopeful glimmer, but reality is what it is and I will deal with it regardless.)
I am not very impressed by that.
“Would you change your mind if you were convinced of X” carries the connotation “if I managed to give you an argument for X, and you couldn’t rebut it, would you change your mind?” The answer to that should be “no” for many values of X even if the answer to the original question is “yes”. The fact that you couldn’t rebut the argument may mean that it’s true. It may also just mean the argument is full of holes but the person is really good at convincing you. How do you know that the person who convinced you of X isn’t another case of Eliezer convincing you to let the AI out of a box?
If a lot of scientists or other experts vetted the claim of such an X and it was not only personally convincing, but had a substantial following in the community of experts, then I might change my mind.
That seems to suggest you believe peer review is a bad idea. Is that true?
I’ll believe you’re a gold miner when I see your gold.
Numenta http://www.numenta.org is building smart software brains that simulate hardware brains, with the ultimate goal of building energy-inexpensive hardware brains. Vicarious is building smart hardware brains. Robobrain is building—and educating—smart hardware brains. IBM is building smart hardware brains, with its Neuromorphic computing project. OpenCog is building a software system, which is very good, but of unknown (to me) competitiveness with the other projects. Honda is building smart hardware brains, and teaching them. Jürgen Schmidhuber and Andrew Ng are building non-human brain-like machines and educating them. Hugo de Garis was building smart hardware brains (and dropped out to be an educator, I guess).
In any event, building context-independent brainlike systems seems to be the way to go. In much the same way that decent parents don’t allow one child to beat up and even murder their other kids, I imagine that such synthetic intelligences, given time, would disallow the stagnation caused by omnipresent and dominant sociopathy.
This is also why it’s important to measure the success of organizations by their stated goals, and by their incremental results on the path to those goals. A more nimble, more adventurous organization would be manufacturing chipsets now, based on the latest science. Such an organization would give up some control in order to retain influence. You will never be in control of a superhuman synthetic intelligence.
But almost all parents have influence (not control) over their children. The smartest parents steer their children into self-building directions.
That is all the power that anyone at the human level has, in comparison to superhuman, self-directing intelligences.
Most or all of the prior brain projects listed, by the way, lack mirror neurons, and do what sociopaths or non-sociopath humans tell them to do. They will increasingly do what sociopaths tell them to do, because sociopaths print the money, and dominate government positions of top-down control. The original intention of LessWrong is valuable, but its standards may be too much of a top-down variety, and too little of a bottom-up variety.
You can’t stop the sociopath from existing. He exists. And he has intentionality, and will seek to corrupt your systems. But there are lots of Tim Murphys and George Washingtons out there. There are lots of Henry David Thoreaus, George Orwells, Lysander Spooners, and Doug Stanhopes out there, even if they are vastly outnumbered by neutral-by-default brains, and sociopaths. The two ends of the spectrum of intelligence compete for the allegiance of the neutral-by-default brains. There are so many human minds that they can’t all be round pegs beaten into square holes, no matter how much the education bureaucrats try.
Really, at a deep level, LessWrong should be an educational organization that seeks to build hardware brains and then educate them properly. This then solves previously intractable problems like rising totalitarianism, space travel, etc., if only out of boredom. Perhaps it even solves the problem of human stupidity, supermodifying us all.
Ray Kurzweil has the right idea, but there are too few Kurzweilian projects. Evolution got it right by making lots of human kids. If human lifespan was 1,000, there would have been more family planning, and fewer kids, and therefore, fewer geniuses of the caliber necessary to adapt to brutal and changing environments, including those of Manichean sociopathy. (Also, the Manichean devils themselves, like Stalin and Pol Pot, would have not died off.)
Human brains have proven that it’s possible to build very smart PhD scientists who don’t care if the world around them becomes a Stalinist hellhole, so long as the temperature of the water is raised incrementally, and they have some reason to think they will be shielded from its boiling. So, the goal of LessWrong is to build classical liberal robots, who oppose all taxation, but especially property taxation (so that large numbers of humans aren’t rendered homeless and displaced).
Smart doesn’t necessarily = “libertarian.” However, “smartest, given reality” does. This is why virtually everyone thinks it’s a good thing that slavery ended. Once a system has decided correctly, opposition drops to nearly zero.
Right now, the libertarian and single-issue resisters are growing in number, because it makes no sense to allow powerful sociopaths to siphon away the value of everyone who trades in “Federal Reserve Notes.” It makes no sense to allow them to educate our kids, or subject our kids to a dumbed-down society/world of their creation.
As the years go on, I’m glad to review this and appreciate that you understand this. You definitely have a group of people that love you more than like you, and it is somewhat disheartening to see these people insist so vehemently on their rejection without even giving it some modicum of a chance.
I only get more motivation to put in my extraordinary effort and see in what ways I can help.
There are some views of Yudkowsky I don’t necessarily agree with, and none of them have anything to do with him having or not having a PhD.
Are you sure this type of rejection (or excuse of a rejection) is common and significant?
Would you be more inclined to agree with him if he did in fact have a PhD (in the relevant fields)? If your (honest) answer to this question is “yes”, then your rejection does have something to do with him not having a PhD.
Based on personal experience, I would say so, yes.
I’d be more inclined to agree with him if he was God, too, but I wouldn’t say “my rejection of his ideas has something to do with the fact that he is not God”. For that matter I would be more inclined to agree with him if he used mind control rays on me, but “I reject Eliezer’s ideas partly because he isn’t using a mind control ray on me” would be a ludicrous thing to say.
“My rejection has to do with X” connotes more than just a Bayesian probability estimate.
Both of your hypothetical statements are correct, and both of them would be bad reasons to believe something (well, I’m a bit fuzzy on the God hypothetical—is God defined as always correct?), just as the presence or absence of a PhD would be a bad reason to believe something. (This is not to say that PhD’s offer zero evidence one way or the other, but rather that the evidence they offer is often overwhelmed by other, more immediate factors.) The phrasing of your comment gives me the impression that you’re trying to express disagreement with me about something, but I can’t detect any actual disagreement. Could you clarify your intent for me here? Thanks in advance.
I disagree that if you would be less likely to reject his ideas if X were true, that can always be usefully described as “my rejection has something to do with X”. The statement “my rejection has something to do with X” literally means “I would be less likely to reject it if X were true”, but it does not connote that.
All right, I can accept that. So what does it connote, by your reckoning?
Generally, that you would be less likely to reject it if X were true, and a couple of other ideas:
X is specific to your rejection—that is, the truthfulness of X affects the probability of your rejection to a much larger degree than it affects the probability of other propositions that are conceptually distant.
The line of reasoning from X to reducing the probability of your rejection proceeds through certain types of connections, such as ones that are conceptually closer.
The effect of X is not too small, where “not too small” depends on how strongly the other factors apply.
(Human beings, of course, have lots of fuzzy concepts.)
An interesting question, and not an easy one to answer in a way where I could be sure you understood the same thing I meant.
My original thought when composing the comment was that it never occurred to me that “he doesn’t have a PhD, so his opinion is worth less”, and I would never use the fact that he doesn’t have a PhD, neither in an assumed debate with him nor for any self-assurance. This means that even if I answered with a clear “yes” to your question, it still wouldn’t mean that it was the cause of the rejection.
Loss of credit because he doesn’t have a PhD does not necessarily equal the gain of credit because he does have a PhD. Yes, I realize that this is the most attackable sentence in my answer.
The disagreements are more centered on personal opinion, morality, philosophical interpretation, opinion about culture and/or religion, and in part on interpreting history. So, mostly soft sciences. This means that there would be no relevant PhD in these cases (a PhD in a philosophical field wouldn’t matter to me as a deciding factor).
On the other hand, if it was a scientific field, then I might unconsciously have given him a little higher probability of being right if he did have a PhD in the field, but this would be dwarfed by the much larger probability gain caused by him actually working in the relevant field, PhD or not. As of yet I don’t have any really opposite views about any of his scientific views. Maybe I hold some possible future events a little more or less probable, or am unsure in things he is very sure about, but the conclusion is: I don’t have scientific disagreements with him as of yet.
About whether this type of rejection is common: if we take my explanation at the beginning of this answer, then I guess it is uncommon to reject him in that way (and his article might lean a tiny little bit in the direction of a strawman argument: “they only disagree because they think it’s important that I don’t have a PhD, so this means they just don’t have better excuses”). If we take your definition, then I agree that it might be higher.
The two comments you and Jiro wrote while composing mine actually made me think about a possibly unclear formulation: I should have written “are not based in any significant way on” instead of “don’t have anything to do with”.
You actually just described what I wrote as: “Loss of credit because he doesn’t have a PhD does not necessarily equal the gain of credit because he does have a PhD”
I think there’s a slight misconception about Aumann’s agreement theorem here: “common knowledge”, as Aumann defines it (and as it is leveraged in the proof), doesn’t just mean exchanging beliefs once. Common knowledge means 1) knowing how they update after the first pass, then 2) knowing how they updated after knowing about the first pass, then 3) knowing how they updated after knowing about updating after knowing about the first pass, and so on.
It’s only at the end of a potentially infinite chain of exchanging beliefs that two rational agents are guaranteed to have the same beliefs. After a “first exchange”, two rationalists can very well still disagree, just less, and I’d honestly be kind of worried if they perfectly agreed with each other after a first exchange.
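A minimal Python sketch of that dynamic, with a made-up four-state example (the state space, event, and partitions are mine; the iterated-announcement procedure is in the spirit of Geanakoplos and Polemarchakis’s “We Can’t Disagree Forever”): after the first exchange the two agents still announce different posteriors (1/2 vs. 1/3), and only after a further round of learning from each other’s announcements do they both converge on 1/2.

# Two Bayesians with a common uniform prior repeatedly announce their
# posteriors for an event; each refines their information by what the
# other's announcement reveals, until the announcements stabilize.
from fractions import Fraction
from itertools import count

event = frozenset({1, 4})
true_state = 1
partitions = [
    [frozenset({1, 2}), frozenset({3, 4})],   # agent 0's information partition
    [frozenset({1, 2, 3}), frozenset({4})],   # agent 1's information partition
]

def posterior(cell):
    # P(event | cell) under the uniform prior
    return Fraction(len(cell & event), len(cell))

def cell_of(partition, state):
    return next(c for c in partition if state in c)

def refine(partition, announced, announcer_partition):
    # The announcement reveals that the true state lies in the union of the
    # announcer's cells that would have produced that posterior.
    revealed = frozenset().union(*(c for c in announcer_partition
                                   if posterior(c) == announced))
    refined = [c & revealed for c in partition] + [c - revealed for c in partition]
    return [c for c in refined if c]

for round_no in count(1):
    before = [posterior(cell_of(p, true_state)) for p in partitions]
    for i in (0, 1):
        announced = posterior(cell_of(partitions[i], true_state))
        print(f"round {round_no}: agent {i} announces P(event) = {announced}")
        partitions[1 - i] = refine(partitions[1 - i], announced, partitions[i])
    after = [posterior(cell_of(p, true_state)) for p in partitions]
    if after == before and after[0] == after[1]:
        break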