The Correct Contrarian Cluster
Followup to: Contrarian Status Catch-22
Suppose you know someone believes that the World Trade Center was rigged with explosives on 9/11. What else can you infer about them? Are they more or less likely than average to believe in homeopathy?
I couldn’t cite an experiment to verify it, but it seems likely that:
There are persistent character traits which contribute to someone being willing to state a contrarian point of view.
All else being equal, if you know that someone advocates one contrarian view, you can infer that they are more likely than average to have other contrarian views.
All sorts of obvious disclaimers can be included here. Someone who expresses an extreme-left contrarian view is less likely to have an extreme-right contrarian view. Different character traits may contribute to expressing contrarian views that are counterintuitive vs. low-prestige vs. anti-establishment etcetera. Nonetheless, it seems likely that you could usefully distinguish a c-factor, a general contrarian factor, in people and beliefs, even though it would break down further on closer examination; there would be a cluster of contrarian people and a cluster of contrarian beliefs, whatever the structure of the subclusters.
(If you perform a statistical analysis of contrarian ideas and you find that they form distinct subclusters of ideologies that don’t correlate with each other, then I’m wrong and no c-factor exists.)
Now, suppose that someone advocates the many-worlds interpretation of quantum mechanics. What else can you infer about them?
Well, one possible reason for believing in the many-worlds interpretation is that, as a general rule of cognitive conduct, you investigated the issue and thought about it carefully; and you learned enough quantum mechanics and probability theory to understand why the no-world-eater advocates call their theory the strictly simpler one; and you’re reflective enough to understand how a deeper theory can undermine your brain’s intuition of an apparently single world; and you listen to the physicists who mock many-worlds and correctly assess that these physicists are not to be trusted. Then you believe in many-worlds out of general causes that would operate in other cases—you probably have a high correct contrarian factor—and we can infer that you’re more likely to be an atheist.
It’s also possible that you thought many-worlds means “all the worlds I can imagine exist” and that you decided it’d be cool if there existed a world where Jesus is Batman, therefore many-worlds is true no matter what the average physicist says. In this case you’re just believing for general contrarian reasons, and you’re probably more likely to believe in homeopathy as well.
A lot of what we do around here can be thought of as distinguishing the correct contrarian cluster within the contrarian cluster. In fact, when you judge someone’s rationality by opinions they post on the Internet—rather than observing their day-to-day decisions or life outcomes—what you’re trying to judge is almost entirely cc-factor.
It seems indubitable that, measured in raw bytes, most of the world’s correct knowledge is not contrarian correct knowledge, and most of the things that the majority believes (e.g. 2 + 2 = 4) are correct. You might therefore wonder whether it’s really important to try to distinguish the Correct Contrarian Cluster in the first place—why not just stick to majoritarianism? The Correct Contrarian Cluster is just the place where the borders of knowledge are currently expanding—and not even all of that, merely the sections on the border where battles are taking place. Why not just be content with the beauty of settled science? Perhaps we’re just trying to signal to our fellow nonconformists, rather than really being concerned with truth, says the little copy of Robin Hanson in my head.
My primary personality, however, responds as follows:
Religion
Cryonics
Diet
In other words, even though you would in theory expect the Correct Contrarian Cluster to be a small fringe of the expansion of knowledge, of concern only to the leading scientists in the field, the actual fact of the matter is that the world is *#$%ing nuts and so there’s really important stuff in the Correct Contrarian Cluster. Dietary scientists ignoring their own experimental evidence have killed millions and condemned hundreds of millions more to obesity with high-fructose corn syrup. Not to mention that most people still believe in God. People are crazy, the world is mad. So, yes, if you don’t want to bloat up like a balloon and die, distinguishing the Correct Contrarian Cluster is important.
Robin previously posted (and I commented) on the notion of trying to distinguish correct contrarians by “outside indicators”—as I would put it, trying to distinguish correct contrarians, not by analyzing the details of their arguments, but by zooming way out and seeing what sort of general excuse they give for disagreeing with the establishment. As I said in the comments, I am generally pessimistic about the chances of success for this project. Though, as I also commented, there are some general structures that make me sit up and take note; probably the strongest is “These people have ignored their own carefully gathered experimental evidence for decades in favor of stuff that sounds more intuitive.” (Robyn Dawes/psychoanalysis, Robin Hanson/medical spending, Gary Taubes/dietary science, Eric Falkenstein/risk-return—note that I don’t say anything like this about AI, so this is not a plea I have use for myself!) Mostly, I tend to rely on analyzing the actual arguments; meta should be spice, not meat.
However, failing analysis of actual arguments, another method would be to try and distinguish the Correct Contrarian Cluster by plain old-fashioned… clustering. In a sense, we do this in an ad-hoc way any time we trust someone who seems like a smart contrarian. But it would be possible to do it more formally—write down a big list of contrarian views (some of which we are genuinely uncertain about), poll ten thousand members of the intelligentsia, and look at the clusters. And within the Contrarian Cluster, we find a subcluster where...
...well, how do we look for the Correct Contrarian subcluster?
One obvious way is to start with some things that are slam-dunks, and use them as anchors. Very few things qualify as slam-dunks. Cryonics doesn’t rise to that level, since it involves social guesses and values, not just physicalism. I can think of only three slam-dunks off the top of my head:
Atheism: Yes.
Many-worlds: Yes.
P-zombies: No.
These aren’t necessarily simple or easy for contrarians to work through, but the correctness seems as reliable as it gets.
Of course there are also slam-dunks like:
Natural selection: Yes.
World Trade Center rigged with explosives: No.
But these probably aren’t the right kind of controversy to fine-tune the location of the Correct Contrarian Cluster.
A major problem with the three slam-dunks I listed is that they all seem to have more in common with each other than any of them have with, say, dietary science. This is probably because of the logical, formal character which makes them slam dunks in the first place. By expanding the field somewhat, it would be possible to include slightly less slammed dunks, like:
Rorschach ink blots: No.
But if we start expanding the list of anchors like this, we run into a much higher probability that one of our anchors is wrong.
So we conduct this massive poll, and we find out that if someone is an atheist and believes in many-worlds and does not believe in p-zombies, they are much more likely than the average contrarian to think that low-energy nuclear reactions (the modern name for cold fusion research) are real. (That is, among “average contrarians” who have opinions on both p-zombies and LENR in the first place!) If I saw this result I would indeed sit up and say, “Maybe I should look into that LENR stuff more deeply.” I’ve never heard of any surveys like this actually being done, but it sounds like quite an interesting dataset to have, if it could be obtained.
There are much more clever things you could do with the dataset. If someone believes most things that atheistic many-worlder zombie-skeptics believe, but isn’t a many-worlder, you probably want to know their opinion on infrequently considered topics. (The first thing I’d probably try would be SVD to see if it isolates a “correctness factor”, since it’s simple and worked famously well on the Netflix dataset.)
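To make the SVD idea concrete, here is a minimal sketch in Python, assuming a purely hypothetical survey: the question list, the stance coding, and the random responses are all invented for illustration, and on random data the leading factor is of course meaningless. The point is only the shape of the computation: center the respondents-by-questions matrix, take the SVD, and read the leading singular vector as a candidate correctness factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical questions and stance coding: +1 = endorses the view,
# -1 = rejects it, 0 = no opinion. Real data would come from the imagined poll.
questions = ["many-worlds", "p-zombies exist", "atheism",
             "homeopathy works", "LENR is real"]
stances = rng.choice([-1, 0, 1], size=(10_000, len(questions)))

# Center each question and take the singular value decomposition.
X = stances - stances.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# The leading right-singular vector gives each question's loading on the
# strongest shared axis of variation, the candidate "correctness factor".
for q, loading in zip(questions, Vt[0]):
    print(f"{q:18s} {loading:+.3f}")

# Each respondent's score along that axis:
scores = U[:, 0] * S[0]
```

The interesting test would be whether real responses produce a leading factor whose loadings line up with the anchor questions, rather than with some ideological subcluster.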
But there are also simpler things we could do using the same principle. Let’s say we want to know whether the economy will recover, double-dip or crash. So we call up a thousand economists, ask each one “Do you have a strong opinion on whether the many-worlds interpretation is correct?”, and see whether the economists who have a strong opinion and answer “Yes” have a different average forecast for the economy from the average economist and from economists who say “No”.
We might not have this data in hand, but it’s the algorithm you’re approximating when you notice that a lot of smart-seeming people assign much higher than average probabilities to cryonics technology working.
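A toy sketch of that “call up a thousand economists” version, again with entirely invented data and coding, just to show how small the required computation is:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000
# Hypothetical coding: 1 = strong opinion, many-worlds correct;
# 0 = strong opinion, many-worlds wrong; -1 = no strong opinion.
mwi_stance = rng.choice([1, 0, -1], size=n)
# Each economist's (invented) probability that the economy recovers.
p_recovery = rng.uniform(0.0, 1.0, size=n)

groups = [
    ("all economists", np.ones(n, dtype=bool)),
    ("many-worlds: yes", mwi_stance == 1),
    ("many-worlds: no", mwi_stance == 0),
]
for label, mask in groups:
    print(f"{label:18s} mean P(recovery) = {p_recovery[mask].mean():.3f}")
```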
I don’t understand how many worlds can be a slam dunk for someone who doesn’t understand all the math behind quantum physics.
If a significant number of people who do understand this math believe that many-worlds is wrong, then no matter how convincing I find your non-mathematical arguments in favor of many-worlds, isn’t it rational for me to still assign a significant probability to the possibility that many worlds isn’t correct?
Doesn’t physics all come down to math, meaning that people who can’t follow the math should put vastly more weight on polls of experts than on their own imperfect understanding of the field?
Slam dunk in reality, not in the mind. You’re not looking for people who only get easy slam dunks, you’re looking for people who are actually right. So you should start with items that you’re sure are actually right—not that are easy. It’s the respondents’ job to get there, whether by choosing the right physicists to trust, or doing the math on their own… it’s not your job to decide in advance how they find the truth, maybe you’ll learn something! But the items you use as keys do have to be true.
Along with 99% of humanity my IQ isn’t high enough for me to ever understand the math behind quantum physics. So I can’t do the math myself, or figure out which physicist to trust when the physicists disagree.
Given your IQ and information set many worlds might be a slam dunk. But I submit that anyone with my IQ or lower would necessarily be irrational to think that many worlds is a slam dunk.
This may be a tangential point, but I need to say this somewhere: claims like this are quite likely false. (Notice how rarely they’re accompanied by justification.)
Quantum mechanics is new (in the scheme of things). So, of course, we see right now that the only people who understand it are very smart people: the ones who first thought of it and their students and associates. But that doesn’t mean that no one else can understand it; it just hasn’t had time to trickle down into everyone’s general education yet.
300 years ago, you could have replaced “quantum” by “classical” in that sentence, and it would have seemed reasonable: at that time, only a few dozen people in the world understood the differential and integral calculus. Yet now this kind of mathematics is taught regularly to hordes of IQ 110 college freshmen, and (I expect) is considered elementary and routine by a majority of LW readers. Taking an Outside View approach here, I don’t see any reason not to expect that the same trend will continue into the future, with quantum mechanics eventually being considered a grade-school subject (even without recourse to transhumanist solutions such as intelligence enhancement, which will immediately come to the minds of many readers).
Going back further, once upon a time literacy was an elite skill. Now we take it for granted, but how much do you really think our IQs have improved in the last couple thousand years?
And let’s not forget that even now, we already know that the fundamental mathematical ideas behind quantum mechanics are actually quite a bit simpler than you would have thought from listening to physicists—little more than linear algebra over complex vector spaces.
You should look at the SAT math test to get an estimate of the percentage of Americans for which “linear algebra over complex vector spaces” can ever be simple.
I don’t disagree, but keep in mind that these people went through horrible learning processes to get there.
I simply refer you again to my comment above. It applies to linear algebra as much as quantum mechanics.
Comments edited for clarification.
A lot! Western IQ scores have improved by ~30 points since IQ tests were invented around a century ago. And literacy is probably part of a positive feedback loop that historically boosted IQ: increased literacy improves IQ, and higher IQ increases literacy. That feedback loop likely hasn’t been going for two thousand years, but it’s been going for at least two hundred years, which is more than enough time for a feedback loop to go nuts.
Still, though I suspect IQs have improved massively in the last couple thousand years, I definitely agree with your comment. I think the rise in average IQ over time doesn’t mean we’ve gotten qualitatively smarter, more that our environment has—and one aspect of that is the trickle-down effect of mental tools like literacy, classical mechanics, and quantum mechanics.
This seems factually false to me. For starters, the average IQ of college freshmen (all colleges, all majors) is more like 115 or 120 (choose the reference you please from Google). And math or physics majors are a cut far above that average, with GRE scores indicating an average around 130. (Prospective grad students, yes, but the ranking fits with high school SAT scores.)
I don’t think very many schools make relativity-level mathematics (or even just multi-variate calculus sufficient to solve Newtonian problems) a core requirement rather than major-specific...
The number 110 was just a guess, of course, but the point clearly stands even if the average IQ of people taking business calculus is 120.
The 17th-century counterparts of these folks would have been illiterate peasants or possibly, in a few cases, local merchants; they would not have been Newton and Leibniz.
http://betterexplained.com/ may change your opinion of some of the “hard” mathematics you have learned. Teaching methods are technology and can be improved.
They don’t have to think it’s a slam-dunk. They just have to choose “Yes” if the choices are “Yes” or “No”.
In effect, you’re encouraging rationalist posers to signal agreement with you on these signature issues. By talking about the signal and its interpretation, you weaken it.
You obviously don’t poll Less Wrong readers using those keys!
Does that mean you’re holding some in reserve?
It is. However, a useful cc-factor metric would focus on topics for which you have a confident belief. If you get the right answer on those slam-dunk topics that you do happen to be confident in, then your cc-factor will be high.
A physicist’s overview of MWI’s problems
And another
The math isn’t actually -that- hard. It would be far better to apply yourself to learning it.
I have been told by physicists that it is.
But even if I could understand it, do you really think someone with an IQ below 100 could? Trust me, as a college professor I know that you need an above average IQ to have a good understanding of even calculus. So I can’t imagine that the math behind quantum physics is accessible to over 50% of humanity.
C.f. above: You need an above-average IQ to learn calculus in spite of the American educational system. We have no idea what genetic IQ is required to learn calculus.
I’ve personally lowered the IQ needed to understand Bayes’s Theorem by browsing online, and if I rewrote the page today I bet I could drop it another 10 points.
Please do.
From what I’ve seen of the actual math, if you can understand the content of a typical Calc 3 course (which covers multivariable calculus), you can understand the math of quantum mechanics. If you can get an engineering degree (which is not an easy feat, but it’s something an awful lot of people manage to do), you should be smart enough to do quantum mechanics calculations.
Electrical engineering occasionally relies on quantum mechanical properties of semiconductors and other materials in their products. Then again, EE is one of the hardest engineering disciplines (or so I hear).
In many cases, engineers can get by with relatively simple empirical models to describe devices that depend on quantum mechanics to actually work. (Case in point: permanent magnets, which, according to classical electrodynamics, really shouldn’t be able to exist.)
Consider the signaling incentives they have. Do physicists look better or worse if the math they do is seen as harder or as easier?
Contrariwise, I (like many people here) associate mostly with very smart people, so greatly overestimate average intelligence.
Actually the math of quantum mechanics is much more complicated than, say, the Three Body Problem of classical mechanics, which is still unsolved today. It’s not so much because calculus is that hard (assuming someone willing to spend the effort to learn it), but rather that doing any math with undefined functions doesn’t work as well as you might think. What I’m trying to say is that, for all but the simplest quantum mechanics calculations, you can’t actually do the math and instead need to have a computer do a huge amount of calculations for you—and I think that qualifies as hard. (The same, of course, applies to classical problems like the Three Body Problem)
In any case, the math has nothing to do with this question, as you would know if you actually knew about the topic instead of wanting to brag. After all, the different interpretations of the model give the same predictions, and so use the same (or equivalent) math. The difference is in the assumptions behind the interpretation—why do we need to assume a special “non-quantum reality” or worse “special observers” when we get the same results by applying the theory to the whole world such that when we observe a quantum event, we also become entangled with it (with all the usual results).
Another little essay on MWI. tl;dr : Eliezer is wrong on the Internet! Won’t somebody please think of the mind children?...
I have spent many years studying and thinking about interpretations of quantum theory. Eliezer’s peculiar form of dogmatism about many worlds is a new twist. I have certainly encountered dogmatic many-worlds supporters before. What’s exceptional is Eliezer’s determination to make belief in many worlds a benchmark test of rationality in general. He’s not just dogmatic about it as a question of physics, but now he even calls it a rationalist “slam-dunk”, a thing which should be obvious to any sufficiently informed clear thinker, and which can be used to rank a person’s rationality.
My position, I suppose, is that it is Eliezer who is insufficiently informed. He has always been a wavefunction realist—a believer in the existence of the wavefunction—and simply went from a belief in collapse of the wavefunction, to a belief in no collapse. If that was the only choice, he’d have a point. But it is far from being the only choice.
One thing I wonder about (when I adopt the perspective of trying to draw lessons regarding general rationality from this affair) is whether he ought to regard himself as culpable for this error, or whether ignorance is a valid excuse. Yesterday I was promoting “quantum causal histories” as an example of an alternative class of interpretation. Those are rather obscure papers. He’s certainly not at fault for not having heard of them. Yet he surely should have heard of John Cramer’s transactional interpretation, and there’s no trace of it in his writings on this topic.
I suspect that another factor in his thinking is a belief in the minimalism of many-worlds. All you need is the wavefunction. You even get to remove something from the theory—the collapse postulate. But the complexities reenter—and the handwaving begins - when you try to find the worlds in the wavefunction. Naive onlookers to this discussion may think of a world as a point in configuration space. But this is not the usual notion of “world” in the technical literature on many worlds. Worlds are themselves represented by lesser wavefunctions: components of the total wavefunction, or tensor factors thereof. It is a chronic question in many-worlds theory as to which such components are the worlds, or whether one even needs to specify a particular algebraic breakdown of the universal wavefunction as the decomposition corresponding to reality. I don’t even know what Eliezer’s position on this debate is. Is a world a point in configuration space? Is it a blob of amplitude stretching across a small contiguous region of configuration space? What about a wavefunction component which stretches across most of configuration space, and has multiple peaks—is it legitimate or not to treat that as a world? Eliezer is impressed by Robin Hanson’s mangled worlds proposal; should we take Robin’s definition of worlds as the one to use, if we wish to understand his thought?
I don’t object to many-worlds advocates having their theoretical disputes; certainly better that they have them, than that their concepts should remain fuzzy and undeveloped! But I find it very hard to justify this harsh advocacy of many-worlds as obviously superior when the theoretical details of the interpretation remain so confused. The confusion, the unfinished work, seems comparable to that still existing with respect to the zigzag interpretations like Cramer’s. And since a zigzag interpretation only requires a single, basically classical space-time, and does away with the wavefunction entirely except as the sort of probability distribution appropriate to a situation in which causality runs backwards and forwards in time, it has its own claim to elegance and minimalism.
My own position is the anodyne one that Further Research Is Required, and that theoretical pluralism should be tolerated. I respect the rigor of Bohmian mechanics; I don’t believe it is the truth, but working on it might lead to the truth, and the same goes for a number of other interpretations. I tilt towards single-world interpretations because I anticipate that in most completed many-worlds theories (many-worlds theories in which the confusions have truly been resolved, by an exact theoretical framework), you will be able to find self-contained histories, akin to Bohmian trajectories but perhaps metaphorically “thicker” in cross-section. And my ultimate message for many-worlds enthusiasts is that the apparent simplicity of many worlds is an illusion because of the theoretical work necessary to finish the job. You will end up either adding lots of extra structure, or compromising on objectivity and theoretical exactness (e.g. by being blase about what is and is not a “world”).
Now, I’m no quantum expert, but this seems to me to be a criticism based entirely on the name; “It’s called many-worlds, so where are the worlds?” Fine. I hereby rename the theory to “much-world”.
Take “The Conscious Sorites Paradox” (thanks to Zack_M_Davis for the link) and s/person/world/.
Cf. “The Conscious Sorites Paradox”
If accepting Many Worlds is a slam dunk then advocating quantum monadology is surely a technical foul. +1 anecdote to cc-factoring.
Personally, I’m deferring making a decision about many-worlds until such time as I will have a need to make a decision about it (probably never), because it would take a large time investment.
EY’s bringing it up repeatedly as a rationality test worries me a teeny bit. Not because I disagree with him about the particulars, but because bringing one issue up repeatedly in conversations where it seems tangential is a key indicator of schizophrenia, or at least impending crankism. I worry about that with extremely high-g people, particularly when they’re around the age of 30.
It’s common for very high-g people to have a few issues that they are immovable on. Isaac Asimov would not fly on airplanes. I know one very-high-g and one probably-high g person who insist on using text-only web browsers. I know one high-g person who’s a devout Mormon, one who doesn’t believe in evolution, one who refuses to take his benefits from the government or work for a corporation, and one who believes the Jews have always secretly been in control of Russia. I don’t know how to determine whether EY’s position on multi-worlds is rational, or a g-induced fixation.
Some views are contrarian in society at large, but dominant views in particular subcultures. Libertarian views aren’t dominant in economics, but they’re (correct me if I’m wrong) dominant in the economics department at George Mason where Robin Hanson works. Does that make Robin a contrarian, or a conformist?
I bet that most 9/11 conspiracy theorists have a lot of friends who are also conspiracy theorists. Sometimes supporting one contrarian theory (the Jews were behind 9/11) is a way of aligning with a larger, locally-conformist narrative (the Jews are behind everything).
Next thing you know, someone will say Jews are behind the Singularity Institute… um… uh-oh.
Yes, if nearly all “contrarian” views are just conformity with local groups, then there will be no c-factor—just a lot of ideologies that don’t correlate with each other. This was alluded to in the OP.
This could be true of the general population but not of academia/intelligentsia, in which case polling 10,000 respondents there might still work.
This is a reason to collect data not just on what people think, but on who they know, and try to get many pairs of people who know each other to participate in your survey.
Perhaps if the poll is shared by friends on Facebook, this becomes easy?
I’ve read that in the 19th century, there were many people who said that iron ships couldn’t possibly float. If you take a few seconds to do the math, you can quickly verify that iron ships can float. That seems like a good slam-dunk to me. Is there a modern equivalent?
Some similar, not-quite-as-obvious former popular opinions:
Humans can never fly.
Humans can never reach the moon.
(Interestingly, the much older “sailboats can never sail upwind” seems more plausible to me than any of these.)
Contemporary unpopular slam-dunk-yes views:
Computers will someday attain human-level performance on any task you can name.
Technology can enable humans to live to the age of 200.
Evolution. (Still unpopular worldwide.)
There are culture-specific slam-dunks. I noticed, while traveling in China, particularly during an episode when the US bombed a Chinese embassy and when discussing the Tiananmen Square massacre, that most of the Chinese people who spoke openly with me (just a few) simultaneously believed their government was corrupt and untrustworthy, yet believed everything it said about those incidents. Numerous Russians I’ve spoken to have a blindness reconciling their views on Stalin with their views on Putin (the same attributes that made Stalin bad make Putin good). Maybe a foreigner should help us identify our slam-dunks.
There are some slam-dunks that are popular on the low end of g, and on the high end of g, but not in the middle range of g, e.g.
Men and women have different distributions of preferences and cognitive aptitudes.
Using these in your survey could contaminate the results.
Wind-powered directly downwind faster than the wind vehicle!
Solution (spoiler).
I don’t believe it.
Contrary to what the article says, sailboats can’t travel downwind faster than the wind (except briefly, when the wind changes). If this were possible, I would have experienced it.
When the vehicle is moving as fast as the wind, in order to go faster, the energy output from the propeller must be more than the energy input through the wheels. The energy output of the propeller comes entirely from the energy input through the wheels, so this is impossible.
Right?
I’m feeling uncertain, because dozens of people reviewed the article and all agreed that the thing works.
I think that the sailing-faster-than-the-wind or the directly-downwind-faster-than-the-wind (DDFTTW) problems would make for a very interesting contrarian-cluster question, as it has a few features that don’t often coincide in one controversy:
Many ordinary people claim that sailing downwind faster than the wind actually works in practice, not merely in theory.
This claim appears to have the form of “I don’t need to check the details of your perpetual motion machine, I know right off the bat that it can’t work!” It seems blindingly obvious that some principle of physics ought to prevent DDFTTW from working.
The amateur Youtube video for the DDFTTW machine is a very low-status means of demonstration (i.e. it’s just what a crank or faker would do).
However, several of the smartest and most skeptical minds who did the actual computations have averred that the folk wisdom is right, and the “obvious” physics principle is mistaken in its application here!
Just having considered these data points (I haven’t worked through Tao’s or MarkCC’s analyses), I assign very high probability (>99%) to sailing-faster-than-the-wind and DDFTTW working as described.
I expect Robin and Eliezer to agree with this assessment (and, though I expect them both to have updated in the same fashion, I suspect that Robin would have updated faster and with less effort than Eliezer in this instance— though on other types of problems I’d expect the opposite.)
Robin would’ve had to update pretty fast to update faster than I updated. I’m like, “Tao says it works? OK.”
I don’t really find it very counterintuitive. The different velocities of wind and ground are supplying free energy. Turns out you can grab a bunch of it and move faster than the wind? I don’t see how that would violate thermodynamics or conservation of momentum. I haven’t even checked the math; it just doesn’t seem all that unlikely in the first place.
Ah: a focus on negentropy makes the idea more plausible for you at first glance. I was expecting you’d each find it counterintuitive, that Robin would be first to favor the expert consensus, and that you would wait until you’d worked through the full analysis. So I take a hit on my Bayes-score with regard to “things Eliezer finds counterintuitive”.
I find it counterintuitive, but not impossible. It’s this specific implementation that I have trouble with. But the “string” example does appear to work.
Moving faster than the wind is not even counterintuitive; sailboats can, because the mass of the wind is greater than the mass of the boat. Moving downwind faster than the wind is counterintuitive.
Right; I was talking about two linked problems (mentioned together by you), and linked to a discussion of each: sailboats sailing faster than the wind by Tao, and DDFTTW by Chu-Carroll. The characteristics I listed applied to each problem in much the same way, so I discussed them together.
I just worked through this stuff. Chu-Carroll and Tao describe different mechanisms of traveling faster than the wind and they’re both right. Chu-Carroll gives a more detailed explanation here. In Tao’s post, one only needs to parse Figure 4 to be convinced.
In this and other similar cases, restricting ourselves to only meta-level arguments seems unwise. What good is memorizing that DDFTTW is possible because Tao said it is, compared to actually understanding the matter? A good contrarian-cluster question should be more difficult on the object level.
Yes, I’m combining two distinct things here— but both problems have the same characteristics, and might separate out some clusters of contrarians by the heuristics they favor. The fact that one of these heuristics might be “sit down and actually work out the problem yourself” isn’t a bad feature.
EDIT: Oops, “confute” doesn’t mean “combine” at all.
You might have been thinking of “conflate”.
Yep, that’s the one. ETA: Thanks!
Again, Tao did not say that DDFTTW is possible. Tao said that it is impossible. See my comment above. [Retracted later.]
Jump into Figure 4 in Tao’s post, start from 0, follow the red vectors for a half circle in any direction, then fold up the sail, bingo—you’re moving straight downwind 2x faster than the wind. Yes this assumes a pure lift sail and no friction, but you can almost-satisfy both assumptions and still outrun the wind by a big margin.
No. The black vectors show the apparent wind velocity. The red vectors, which are perpendicular to the black vectors, show the resulting boat velocity. You would have to build up speed moving (nearly) perpendicular to the apparent wind, then fold up the sail and steer downwind. Your total travel time to get downwind would be greater than the wind’s travel time, so you would still not outrun the wind.
Read the caption below the figure. Neither red nor black vectors are velocities. Velocity values are denoted by points on the graph plane. The graph is in velocity space, not physical position space. The point 0 is the rest velocity, not the boat’s starting point. The point v_0 means the boat is moving with the wind. The vectors show how the pilot can change the velocity of a boat already moving at a given velocity; they’re acceleration vectors. Black vectors show accelerations possible with a pure-drag sail, red vectors are for a pure-lift sail.
Hmm. I think you’re right. Oops. You can sail downwind faster than the wind. I tried to write up a detailed proof of why it wouldn’t work, and it worked.
Phil, sorry, but you’re wrong. It is possible to travel straight downwind faster than the wind. The mechanisms that Tao outlines don’t have the limitation you think they have. This quote:
doesn’t mean what you think it means. Reread the quote carefully, paying attention to the words “the only dimension”. Then reread the paragraph that follows it in the post, then take a hard look at Figure 4 and the paragraph that follows it, then come back. You’re just embarrassing yourself.
I put the ‘folk wisdom’ on the side of “you can’t go DDFTTW” here. It doesn’t seem obvious from the perspective of physics but perhaps it does from the perspective of ‘common sense’.
Tao shows how it’s possible to move faster than the wind using wind power. I am not disputing this. Tao says in that very same post that it is impossible to sail downwind faster than the wind:
That’s the wrong quote—it refers to a limited situation where cross-wind forces are not being exploited. The next line after your quoted text is:
Now if you’d quoted
that would have supported your assertion. But then Tao goes on to write
so you’re wrong again (sort of—the approach he’s describing is of unknown practicality).
Cross-wind forces cannot be exploited if you are travelling directly downwind. Tacking is done upwind only.
When Tao says “one can also sail at any desired speed and direction”, he obviously doesn’t mean that literally. Unless you also want to say Tao said that sailboats can go faster than light.
He writes, “In theory, one can also sail at any desired speed and direction” (emphasis added). And he means that quite literally. You can travel any desired speed under the theoretical framework that he’s using (which doesn’t take into account relativistic effects, among other things.)
You cannot travel at any desired speed! You can’t travel a million miles an hour in a 5 knot wind because you desire it. And that’s what the person quoting it meant to imply: “Tao says you can travel at any speed and direction; therefore, you can travel downwind faster than the wind.” Correct conclusion, wrong reason.
You yourself quoted him as saying it. As you indicated, you can only make him agree with you by saying that he didn’t “mean that literally”.
At the end of the paragraph, he repeats it even more explicitly: “By alternately using the aerofoil and hydrofoil, one could in principle reach arbitrarily large speeds and directions, as illustrated by the following diagram:”
Are you saying that he didn’t mean “arbitrarily large” literally?
ETA: In the next paragraph, he writes
Emphasis added. v_0 is the velocity of the wind. There’s no room here for reading this as anything other than literal.
That was what I meant. And I see I was wrong. Sorry. It’s such a shocking statement that I didn’t take it seriously at first. In retrospect, the energy influx is continuous, so continuous acceleration is possible.
Do you understand what Tao says in the article? With sufficiently high confidence? (Have you even read it?) Be careful. From the article:
Yes, you’re right.
So you agree that my second quote is more apposite than the quote you provided. Hurray!
Tao obviously intends his analysis to apply whenever Newtonian dynamics is a good approximation, so bringing relativity into it is ignoratio elenchi. You asserted that Tao said that it is impossible to sail downwind faster than the wind; in fact he offered a theoretical approach for doing exactly that.
No he didn’t, as I’ve explained at least 3 times in this thread already, including in the comment you just replied to. He wrote:
“it became possible for sails to provide a lift force which is essentially perpendicular to the (apparent) wind velocity, in contrast to the drag force that is parallel to that velocity.”
Perpendicular to the apparent wind velocity.
As Cyan points out, Tao is saying that you can’t sail directly with the wind faster than the wind if you don’t exploit more than one dimension. But the carts that started this discussion do exploit more than one dimension. Specifically, they exploit the vertical dimension by using the difference in speed between the ground and the air.
Tao’s discussion is not relevant to these carts, as he isn’t discussing DDFTTW.
Yes, they can. A boat sailing 45 degrees off of dead down wind, making downwind progress at the speed of wind (its total speed being square root of 2 times the speed of wind), will feel an apparent wind 45 degrees off its bow, from which it can generate more thrust and go even faster, until the apparent wind is much closer to directly ahead. Modern racing sailboats do this all the time.
A boat sailing 45 degrees off of dead downwind has its sail out very far to leeward, so that the apparent wind will slow it down, not speed it up. You’re thinking of a boat sailing 45 degrees off of upwind. [EDIT: My mistake. When you reach the same speed in the downwind direction as the wind, the apparent wind is coming entirely from a direction perpendicular to the wind, and so your sail will be trimmed in perpendicular to the wind and be receiving lift in the same direction as the wind.]
You might be able to move with a component in the downwind direction faster than the wind due to lift—but I wouldn’t bet on it. I can’t swear that it doesn’t happen, because I try never to be in this situation. It’s the easiest way to flip a boat, or to get “windlocked” (when the wind is too strong for you either to pull the sail in or to steer windward, so unless you jibe, you’re stuck going the direction you’re going until the wind dies down).
I always trim the sails for the apparent wind. If the apparent wind is backwinding the sails, I will trim them in, so that they work properly and provide forward thrust. As the boat accelerates on a straight line course, the apparent wind will shift forward and I will trim in the sails.
In the situation you described, the apparent wind is travelling from the tail of the sail to the front, in the opposite direction you would need for it to provide thrust. There is an apparent wind, and you do trim the sail in response to it, but it doesn’t provide thrust when you’re going downwind.
No, in the situation I described, the apparent wind flows from the luff (leading edge) to the leach (trailing edge) of the sail. I have actually done this. I will see if I can produce a diagram later tonight.
Edit: Here is the diagram:
You’re right.
(Completely OT, of course.)
Looking at it in the road’s reference frame, the propeller decelerates the wind — even if the vehicle is already moving at wind speed — and takes kinetic energy from it.
The idea is that the propeller is providing thrust, not taking energy from the wind. It’s rotating in the opposite direction from what you’re suggesting.
The propeller does both. If the vehicle is moving at the same speed as the wind, then in the vehicle frame, the wind is being accelerated backwards (hence momentum is conserved), so in the road frame, the wind is being decelerated and donating energy to the vehicle.
The movement of the wind backwards is coupled to the movement of the vehicle forward; but that’s the effect of the energy, not the source of the energy.
Gain and loss of energy are frame-dependent; in the road frame, the wind certainly is a source of energy (just as a rocket takes kinetic energy from its reaction mass, when looked at in a frame where it has a greater speed than its exhaust). I’m not sure yet how to think about the vehicle frame.
If your intuitions don’t think it will work then two options available are building the device or doing the actual math.
My intuition tells me that the source of the energy is the wind and some of that energy is removed from the wind and ends up on the cart.
Think push not twist. The energy taken from the wind is not in the form of increased rotation of the blade. Rather, it is being pushed along like a sail. It just happens to put some of the energy back into increased rotational energy of the blade by means of gears connected to the ground.
For the gears connected to the ground to take energy out of the ground, it has to slow the vehicle down. You are then trying to speed the vehicle up, through the propeller, using only energy derived from the contact with the ground, which is necessarily less than or equal to the energy loss that the vehicle sustained in order to convert its forward momentum into the rotational energy to turn the propeller.
No, I’m not trying to do that because that wouldn’t work. Energy taken from the ground/vehicle difference is not being used to accelerate the vehicle.
How would you explain its acceleration when the vehicle is traveling at wind speed, in the vehicle’s reference frame? It seems to me — incorrectly, I assume — that the only energy available there is from the ground/vehicle difference.
In the road frame, it is.
Consider a thrusting rocket, looked at in a frame where its exhaust is stationary. The reaction mass is accelerated backwards from v=v_r to v=0, losing kinetic energy, which is added (together with energy supplied by the fuel) to the rocket’s KE. It seems to me that this is basically the same situation.
You’re of course right in the vehicle frame; I’m not sure yet how best to think about that.
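As a sanity check on that rocket analogy, here is a toy calculation (numbers invented, not taken from the thread) in the frame where the ejected reaction mass ends up at rest: the rocket’s kinetic-energy gain equals the energy the burn supplies plus the kinetic energy the reaction mass gives up.

```python
# Toy bookkeeping for one small burn, viewed in the frame where the exhaust
# ends up at rest. Numbers are arbitrary illustrations only.
m = 1000.0   # kg, rocket mass after the burn
dm = 1.0     # kg, reaction mass ejected
v = 50.0     # m/s, speed before the burn; the exhaust is left at rest

# Momentum conservation: (m + dm) * v = m * v_new + dm * 0
v_new = (m + dm) * v / m

fuel_energy = 0.5 * m * v_new**2 - 0.5 * (m + dm) * v**2   # total KE change, supplied by the burn
rocket_gain = 0.5 * m * v_new**2 - 0.5 * m * v**2          # KE gained by the rocket body alone
exhaust_loss = 0.5 * dm * v**2                             # KE the reaction mass gave up (v -> 0)

print(rocket_gain)                  # ~2501 J
print(fuel_energy + exhaust_loss)   # ~2501 J: the rocket pockets both contributions
```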
The propeller cannot do both. If the wind is being accelerated backwards by the propeller, the propeller is not taking energy from the wind.
Here’s where I am now:
Sailboats can move with a downwind component faster than the wind.
The first (windsock) video shows no evidence that the cart moves downwind faster than the wind.
The string video is more convincing, but I’m not convinced that this particular cart works as advertised. The rationale offered for how it works is that when it moves at a velocity v, this causes the propeller to turn at a rate that thrusts air backwards with a velocity greater than v. Hmm… okay, maybe. The propeller blades moving perpendicular to the wind are a lot like the sails of a boat moving perpendicular to the wind.
This ignores mass. Thrust is (mass flow rate) * v_air, so it can get enough thrust by moving enough air at a velocity less than v_car. As for energy, power = 1⁄2 * F * v for air or car, so again, you can get enough thrust if v_air < v_car.
There’s no perpetual motion, because as the originally linked solution says, eventually the apparent headwind becomes too strong. (The above assumes apparent wind is zero.)
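For what it’s worth, here is a simplified steady-state bound (my own sketch under idealized assumptions, not the linked solution’s derivation) showing why there is no perpetual motion: in the cart frame the ground streams past at the full cart speed v while the air streams past at only v - w, so the wheels can harvest more power than the propeller needs, but only up to a speed ratio set by the transmission losses.

```python
# A simplified bound, not the linked article's analysis: neglect drag and
# assume the propeller moves a large mass of air with a tiny speed change.
# In the cart frame the ground streams past at v and the air at (v - w), so
# wheel power available is F * v and propeller power needed is F * (v - w).
# With overall efficiency eta: F * (v - w) <= eta * F * v  =>  v/w <= 1 / (1 - eta).

def max_speed_ratio(eta):
    """Ideal top speed of the cart as a multiple of the wind speed."""
    return 1.0 / (1.0 - eta)

for eta in (0.5, 0.7, 0.9):   # hypothetical wheel-to-propeller efficiencies
    print(f"efficiency {eta:.1f}: top speed ~= {max_speed_ratio(eta):.1f}x wind speed")
```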
In the simplest case, air pushes the propeller forwards, but it doesn’t significantly rotate it. The propeller is a sail. The propeller rotates in such a way that higher speed of the propeller leads to lower resistance with the air (it becomes more in sync with the wind); that is, the propeller rotates in the opposite direction to what one would normally expect, it “lets the wind through”. If it were a normal cart moving faster than wind with propeller not attached to anything, it would be the same direction of the rotation (but our propeller rotates faster). At the same time, the propeller is rigidly connected to the wheels, so that higher speed of the cart corresponds to faster rotation of the propeller, and, as a result, to lower pressure from the air. When the cart decelerates, the propeller rotates slower, which increases the pressure from the wind on the propeller and thus accelerates the cart.
That’s what I just said. But what’s the energy flow?
I concede that it works, and in basically the same way as the sailboat going downwind that I also said couldn’t work.
Oops, sorry, I wrote an incorrect explanation, having relied on confabulation of fading memories too much. Here is a reworked one:
In the simplest case, air pushes the propeller forwards, but it doesn’t significantly rotate it. The propeller is a sail. If it were a normal cart moving faster than wind with propeller not attached to anything, it would be the same direction of the rotation (but our propeller rotates faster). At the same time, the propeller is rigidly connected to the wheels, which allows it to rotate faster than the headwind would make it. As a result, it is pushed by the wind from behind rather than resisted, which accelerates the cart, which lends power to the propeller to keep on rotating faster than it otherwise would.
It is a bad intuition to see propeller as throwing the air backwards at speed higher than the difference of cart’s speed and the speed of the wind, as the cart is essentially fueled by the resulting hind-wind, not the other way around. Also, the propeller only needs to go a little bit faster than it would because of the headwind.
That’s a pretty good explanation. Another way to look at it is to think what would happen if the propeller was not connected to the wheels. In that situation, the cart would travel as fast as the wind, but the propeller would spin at high speed. If you connect the propeller to the wheels that energy is used to further increase velocity.
In fact, it would work if you place a radio controlled clutch between the propeller and the wheels. First wait for the cart to accelerate to wind speed, and the propeller to rotate faster than the wheels (if it’s 1:1 ratio without gears), then engage the clutch. The end result would be that the wheels would rotate at a higher speed and thus the cart would travel faster than the wind.
The article explicitly refers to ‘tacking sailboats’, which can in fact travel faster than the wind in the downwind direction.
The energy comes from harnessing the difference between the velocity of the wind and the velocity of the ground. It may be helpful to refer to the ‘propeller’ as the ‘propellee’. It is there to make sure the wind always has something to push on that is at roughly the same speed as the ground and only uses energy based on losses to drag and friction.
The article says: “It should be obvious that there’s some way to go downwind faster than the wind, because as so many people pointed out, sailboats do it.” Sailboats do not go downwind faster than the wind. I have gone downwind hundreds or thousands of times on many different types of sailboats, and I have never seen the wind indicators streaming behind me as I did so.
Tacking sailboats are going upwind, not downwind.
Well, that’s obvious. By definition of “wind power”.
The propeller is not at the same speed as the ground.
?
Sailing downwind faster than the wind looks and feels like sailing upwind. How often have you, when the tell tales are streaming aft, checked to see if a stationary flag was blowing in the opposite direction?
You could also have sailed on many kinds of boats whose hulls experience too much water resistance before achieving the speed of the wind to accelerate further with the power provided by that wind.
Assume a uniform wind of 20 km/h flowing in the direction from A to B and that B is 100 km from A. Fred is an expert sailor and has a top line sailboat. While Fred is stationary at A, he notices Joe floating past him in a hot air balloon going at the 20 km/h wind speed. Assuming no changes to the wind is it possible for Fred to catch up to Joe using only sailboat before Joe reaches B five hours later?
If you answer ‘no’ then you are incorrect.
If you answer ‘yes’, then understand that this is what people mean when they say it is possible to go downwind faster than the wind.
Has this actually happened? Does this prove anything if it did, given that winds at altitude and ground level are vastly different?
It is impossible for any existing sailboat to have a downwind component that is faster than the wind. If it were possible, you could sail the boat with no wind at all. (This argument does not apply to the ground vehicle under discussion.)
Will you at least agree that it is impossible to sail with the boat pointed directly downwind faster than the wind in a conventional sailboat (including racing sailboats)?
A sailboat can reach faster than the wind because the mass of the wind is greater than the mass of the sailboat, and the energy in the wind is transferred to the boat. Moving downwind faster than the wind is very different, and requires a different mechanism. And you cannot use apparent wind to explain moving downwind faster than the wind, because the apparent wind would be in the opposite direction.
The greater wind at higher elevation would only be an advantage to the hot air balloon.
No. If there is no wind at all, then if the boat moved forward in any direction, it would face an apparent direct headwind opposing its motion. But with some amount of wind, the boat can travel at some angle from dead down wind, so that the apparent wind will not be directly ahead, so that the perpendicular lift will have a forward component.
Yes.
Well, the total mass of all the air that was deflected by the sails as the boat accelerated past wind speed will be greater than the mass of the boat, but that does not really explain what is going on. The point is that the boat is able to continue to derive enough thrust from the apparent wind to overcome drag forces. And this continues to work even when a component of the boat’s velocity is downwind.
It is not exactly the opposite direction, and even a small deviation can be significant.
There are potential ambiguities in the language used. Considering a specific example like this allows us to establish whether we are disagreeing about the physics itself or just using different words. I get the impression that we are disagreeing on the nature of physics itself. Fred can win.
I didn’t want to dwell on technicalities and hoped ‘uniform’ was sufficient to convey my intended meaning.
Yes.
Hence the applicability of the ‘sailboat’ analogy to the vehicle in question.
Possible world too convenient!
Some people claim that a sailboat can move faster than the wind when reaching (moving perpendicular to the wind). When reaching, you let the sail out much farther than you would intuitively expect, until it’s nearly parallel to the wind. It may be operating on lift at that point.
A jumbo jet can’t take off vertically. Therefore, the thrust provided by lift from its wings is greater than the thrust provided by its engines. So perhaps a sailboat can travel faster than the wind when reaching. EDIT: No, that’s wrong. You can’t get more energy out than you put in.
It seems like you’re equating speed with energy. Since the masses are different, couldn’t the energy of the machine be less but the speed be greater?
When talking about the jet, I was talking only about force. “The masses are different”—what masses are you talking about?
Consider the problem from the frame of reference of the wind. The wind is still; you begin with the only energy source being the motion of the ground to the left, and using this energy, you are supposed to cause the vehicle to move to the right. You can’t cause rightward motion without violating the conservation of momentum unless you cause either the air to move to the left, or the ground to move faster to the left.
I just watched the video. It’s a trick, though probably not intentional. The windsock is mounted directly behind the propeller. So when the vehicle moves and the propeller turns, it blows the windsock backwards, and this is the “proof” that the vehicle is travelling faster than the wind.
ADDED: Folks. Think about it. The sock blowing backward is supposed to show that the vehicle is moving faster than the wind. It doesn’t show that; the sock would blow backwards as long as the speed of the vehicle plus the speed of air impelled from the propeller is greater than the speed of the wind. There is no evidence in the video that the vehicle is travelling faster than the wind.
Downvoting this comment is not a vote saying that travelling downwind faster than the wind is possible. This comment does not dispute that. Downvoting this comment is disagreeing with the math and claiming that watching a sock blown backwards by a propeller demonstrates movement faster than the wind.
Did you read the “Solution” post that Vladimir linked to? What about it was unconvincing? (If it’s any comfort, that guy ate enough crow for the both of you ;).)
I read the solution. I don’t need to think too deeply about his long, complicated explanation that begins with a detailed and erroneous description of the gears, because I have a short, simple explanation of how the illusion is generated in the video, and a short, simple explanation of why such a device would violate conservation laws, which I gave below.
It wouldn’t have convinced me, if I thought the thing didn’t work — it never mentions rolling resistance!
What do you mean by that term? The post does say this:
(Emphasis added.)
Okay, it does there, but I refer to this part:
This seems to equivocate between wind/propellor force and net force.
Don’t trust socks? Try this one with shoestrings instead.
I’ll have to think about that.
think think think
It appears to demonstrate the same thing, convincingly.
The sock shows only that at the point where the sock is mounted the air is going backwards relative to the vehicle. What are the implications of that? Where is the energy required to do this coming from? As the vehicle approached the wind speed what would happen to the sock?
From the wind. Which is travelling faster than the vehicle. Except in the immediate vicinity of the propeller.
If the vehicle moved exactly at the speed of the wind, with the propeller moving, the air would appear dead still before taking into account the movement of the propeller, and so the sock would be blown strongly out behind the vehicle, making it look as if the vehicle were moving much faster than the wind.
So the parts of the vehicle that are travelling slower than the wind receive energy from the wind but not the propeller? If I took off the propeller would the vehicle go faster or slower?
The point is that the propeller blows the sock backwards, regardless of whether the vehicle is moving faster than the wind; and this is shown as proof that it’s travelling faster than the wind.
It is a very good feature of the video (from a rationality-test standpoint) that this piece of evidence is compromised: you are handed a conspiracy theory right away, and so when rationalization kicks in, you know what to point to.
...what? I’m Russian, not much a fan of Putin, but this statement seems insane to me. Here’s what made Stalin bad. Putin doesn’t even begin to compare.
I didn’t say Putin possesses the same attributes to the same degree. But the same adjectives come up. I guess it’s not a very good example, since having X of property Y can be good, while having 2X of property Y is bad.
Yes, it would be good, but more expensive, to “survey” the opinions of folks from long ago, to see what the correlations were among views we now think should have been clear then.
Who would dispute that? For starters, about 90% of men are more sexually attracted to females than to males, but only about 10% of women are.
The only thing I’d predict from knowing someone believes in many worlds is that they like science fiction. This isn’t because anything might be happening somewhere, it’s because many worlds is a much more interesting universe.
I’m starting to get the impression I should look into this science fiction thing. I seem to have a lot of traits that correlate with interest in it, and many of the people I associate with love it. It just so happens that I lack familiarity, since I didn’t grow up with it myself.
Yes, do that.
This was Isaac Asimov’s favorite story out of all the ones that he wrote.
I agree that you should look into it, but strongly disagree with both the specific recommendations that are currently present in replies. I suggest either picking from the Hugo Awards list or similar, or finding things in the intersection of science fiction and your other interests—I often find it easier to pick up this kind of new interest gradually.
I recommend John Scalzi, my current favorite science fiction author. Here’s a sampler of his stuff that’s available online for free. (Avoid anything in the Old Man’s War universe until and unless you read the series itself.)
That’s a possible reason to believe in QM and it’s why you also ask them about atheism and p-zombies.
I don’t think that QM, P-Zombies and Many Worlds are good examples at all. Frankly, I tend to think that the use of nuance, the use of careful distinctions between proposed hypotheses rather than the endorsement of slogans is a very good sign. For precisely this reason, however, I thought that http://econlog.econlib.org/archives/2009/12/what_do_philoso.html was a fantastically useless survey. If you summarize a philosophical debate in a slogan and ask people for a ‘yes/no’ answer you should expect the best thinkers to be able to explain exactly what would cause people to endorse each side and how those causes establish or fail to establish correspondence to reality.
The problem with this, of course, is that it motivates fake nuance. The endless proliferation of fake nuance is one of the major products produced (and almost exclusively consumed) by academic philosophers.
So did I, for the most part. The best response to some of those questions would be “Sod off. The mistake is asking that question in the first place, and neither answer is meaningful. Reality just doesn’t carve there.”
I suppose that’s why all the questions had ‘other’ as a possible response...
I don’t see what’s “useless” about it; at least, I hear a lot of speculation about what sorts of things philosophers generally think, and there hasn’t previously been a good dataset for that.
A related concern is that I’ve run into a lot of philosophers who think that “practically nobody” believes X, where X is one or another view on a major philosophical question. For example, I sat in a car with a compatibilist and a believer in libertarian free will, each of whom thought their view was nearly universal amongst philosophers. It was an eye-opening experience that maybe a lot of people will have looking at results like this. Even if it’s not representative of philosophers in general, it points out that the number of philosophers who (for instance) “Accept non-physicalism” is not “approximately 0”.
Of course, I might be slightly biased since my dissertation is in part a piece of experimental philosophy, and in part because I’m happy to see that virtue ethics won.
No, it wasn’t a competition, and yes, it is now.
I think the most interesting thing to do with the survey is ask the people who answered “other” what they had in mind.
Why would you expect someone who has a high correct contrarian factor in one area to have it in another?
Bad beliefs do seem to travel in packs (according to Penn and Teller, and Eliezer, anyhow). Lots of alien conspiracy nuts are government conspiracy nuts as well. That’s not surprising, because bad beliefs are easy to pick up and they seem to be tribally maintained by the same tribe that maintains other bad beliefs.
But good beliefs? Really good ones? They’re difficult. They take years. If you don’t know of Less Wrong (or similar) as a source of good beliefs, you probably only have one set of good beliefs in your narrow area (like economics or quantum physics but not both). And you know what? You shouldn’t be expected to have more, if that’s the one set that you use to affect the world.
Barring only a few people with interdisciplinary interests, I would expect that the economists who are the best at predicting the stock market would answer “What’s a many-worlds interpretation?” to Eliezer’s question.
I think people who are subject matter experts are likely to have several correct contrarian beliefs in their subject area. This should provide at least some clustering. People would also tend to have clusters of correct beliefs in areas their friends were subject matter experts in.
“Robin previously posted (and I commented) on the notion of trying to distinguish correct contrarians by “outside indicators”—as I would put it, trying to distinguish correct contrarians, not by analyzing the details of their arguments, but by zooming way out and seeing what sort of general excuse they give for disagreeing with the establishment. As I said in the comments, I am generally pessimistic about the chances of success for this project”
I think the method that was taught in my family is better: become an expert on one or more subjects, so that you can know, by evaluating the evidence, which views are correct. Then, judge sources by their accuracy in those areas on which you are expert.
The method was not explicitly meant for contrarianism, but it works well there. Research some promising contrarian claims (perhaps those like diet & exercise which will most affect your life) so that you have pretty high confidence in whether they are correct. Then evaluate the accuracy of contrarians based on whether their claims agree with your research in those areas, and upweight the other things that those contrarians believe. Sure, you have to be smart enough to be a good evidence evaluator, but, hey, that sounds like us.
Bertrand Russell used this method successfully to assess the value of Hegel’s philosophy:
Unpopular Essays, chap. 1
Upon further inspection, I’ve concluded something is seriously wrong here (especially if Russell had much of an impact in shaping later philosophers’ view of Hegel). In Introduction to Mathematical Philosophy, Russell claims Hegel’s knowledge of mathematics is out of date and that he believed calculus requires infinitesimals. This is totally wrong. The longest (or one of the longest) sections in his Science of Logic is an attempt at refuting the validity of infinitesimals (while still affirming the validity of the differential and integral calculus).
Will investigate further when I have the time.
This is not the only example of this sort of thing. Russell has a lot of examples like this where he clearly didn’t read the original sources and it suffers from this. There are similar issues where he bashes Aristotle for a lack of empiricism.
Did you manage to research this issue further? I’m curious.
Would philosophers of mathematics agree with physicists on the foundations of mathematics? If not, should they dismiss their views on physics?
I don’t think it’s as simple as ‘agreement = competent; disagreement = incompetent’, for at least a couple of reasons.
First, when judging the credibility of a source, their views on a given issue will be weighted according to the confidence with which they’re expressed (i.e. the source’s level of claimed expertise in that area). Second, disagreement will have more weight the closer the matter is to being one of settled objective fact.
I’m by no means an expert on the philosophy of mathematics, but I imagine that at the very least it’s an area in which thoughtful, intelligent, honest people can disagree, and at the most it’s one in which there simply isn’t a single set of correct answers. So disagreement need not seriously undermine one’s confidence in a source, but that doesn’t mean that all answers are equally sensible or valid, nor that Hegel can’t have been talking credibility-destroying nonsense.
Well, as near as I can tell, >90% of mathematicians are Platonists.
When Russell writes that Hegel’s views on the philosophy of mathematics are “nonsense”, I take him to express more than mere disagreement, and something closer to an indictment of Hegel’s epistemic standards (such as standards of clarity, precision and cogency) as revealed in that area of inquiry. Furthermore, Hegel (I believe) claimed to be speaking as an expert in the field, whereas this may not be the case with the physicists speaking about the foundations of mathematics. So Russell’s conclusions about Hegel’s views in metaphysics seem to be more justified than the corresponding conclusions that the philosophers of mathematics would draw about the physicists in your example.
Hegel was a brilliant artist, though I would argue he lacked the strength of his artistic convictions. The fact that philosophy decided to imitate his artwork seems disappointing, but perhaps Plato (or Nietzsche, had he existed) should have led us to expect it.
If you only expect to find one empirically correct cluster of contrarian beliefs, then you will most likely find only one, regardless of what exists.
Treating this as a clustering problem, we can extract common clusters of beliefs from the general contrarian collection and determine degrees of empirical correctness. Presupposing a particular structure will introduce biases on the discoveries you can make.
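A minimal sketch of that approach, assuming survey answers coded as a people-by-beliefs 0/1 matrix (the data below is a random placeholder, and k-means over a few candidate k is just one of many ways to let the structure emerge rather than presupposing it):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Placeholder data: 200 respondents x 12 contrarian beliefs, 1 = endorses.
    responses = rng.integers(0, 2, size=(200, 12))

    # Try several cluster counts and inspect each cluster's belief profile,
    # instead of assuming one "correct contrarian" cluster in advance.
    for k in (2, 3, 4):
        model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(responses)
        print(k, "clusters; mean endorsement rate per belief in each cluster:")
        for center in model.cluster_centers_:
            print("   ", np.round(center, 2))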
I’d like to give people quizzes to identify cognitive biases, perform SVD (or factor analysis) on the results, and see if the first dimension matches up with “liberal / conservative”.
PS—The technique referred to as SVD by the Netflix contestants is actually multiple linear regression. SVD uses principal component analysis, so that the first dimension is the dimension with greatest variance, etc. Simon Funk’s original algorithm uses gradient search to approximate SVD, but most Netflix contestants after Funk sped it up by using a gradient-search method that adjusts all of the basis vectors in parallel, without concentrating the variance in any one of them. Therefore, it’s more like multiple linear regression.
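For the plain (non-Netflix) version of the idea, a minimal sketch on made-up data: center the quiz columns, take the SVD, and check how the first component lines up with a separately recorded liberal/conservative score. Everything below is a random placeholder, so the printed correlation will of course come out near zero; the point is the procedure.

    import numpy as np

    rng = np.random.default_rng(1)
    quiz = rng.normal(size=(300, 20))   # placeholder: 300 people x 20 bias-quiz items
    politics = rng.normal(size=300)     # placeholder: e.g. -1 liberal ... +1 conservative

    # Center each column, then SVD: the first right singular vector is the
    # direction of greatest variance (the first principal component).
    centered = quiz - quiz.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    first_dim_scores = centered @ Vt[0]

    r = np.corrcoef(first_dim_scores, politics)[0, 1]
    print("correlation of first SVD dimension with politics score: %.2f" % r)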
Clusters of opinion may be accidental, e.g. many lemmings follow Eliezer Yudkowsky who is correct on three topics and wrong on two. Or some other pundit. I think such accidental correlations will drown out whatever useful signal you were hoping to uncover by factor analysis. It’s a fishy endeavor anyway, smells like determining truth by popular vote spiced up with nifty math. What if all smart people start using your algorithm? You could get some nasty herd effects...
Don’t poll LWers using keys previously posted on by EY (or RH). That would just be silly.
While that would make it harder to distinguish between LW members, that doesn’t mean game over.
If we already expect LW members to be more correct, it still might be useful to poll LW members about what views they have that are:
1) contrarian
2) on topics that most LW members haven’t thought about very hard
3) important
Using something along the lines of the Amanda Knox litmus test but with no previous posts on it, one presumes?
I thought the Amanda Knox test was fascinating, but mostly for the implications it had about rationality, not so much for the fact that this specific convict is in fact innocent.
Things like the shangri-la diet are closer to what I was thinking, since that has potentially huge consequences on its own.
The closet survey is also close to what I had in mind, with a little less emphasis on my #2. It’d also be interesting to see what happens if that survey was done again, now that we have a better idea of what the shared beliefs are.
This endeavour is intended to reduce the fishiness of seeking truth by conforming to mainstream opinion (along the lines that Robin advocates). The process Eliezer suggests actually filters out a significant amount of the adverse positive feedback effect.
“Determining truth” has connotations with “certainty”, which is at odds with the fact that evidence here is assumed to be weak—something to prime attention, not imprint opinions.
(But I agree that the idea of getting any kind of useful conclusions/info from such a poll doesn’t seem realistic.)
Edit: after reformulating the method, I changed my mind.
Conclusions, no, but it sure might print out a fascinating list of things to investigate.
I expect that all “things to investigate” you’d find would’ve already been on the radar.
I don’t, especially if you let respondents suggest additional items and incorporated them. The CCC is large and includes things like (probably) the Shangri-La Diet.
Then the gain is not in turning attention to things considered wrong, but more to things that weren’t considered at all: a high-quality memetic availability pool that lets you avoid wasting time on false positives. Again, that is too dramatic an effect to get from a poll, and it’s unclear to what area the finds should be tuned. I’m not at all interested in knowing that cold fusion is real if counterfactually it is.
There might be a certain kind of people with high-quality suggestions on what novel things to investigate. (I expect these people have more to offer than lists of beliefs.) We have effective g-factor tests that allow us to reliably find smart people in the general population, and to construct groups of especially smart people. This post suggests that there might be a similarly effective way to estimate people’s rationality, a “cc-factor”, the ability to find and adopt correct beliefs even if they go against conventional wisdom, not necessarily as a skill, but as a predisposition. This may go a long way towards building the strength of the rationality cause.
I don’t think that disbelief in P-zombies belongs on this list. Or, it belongs on the list only in the sense in which Chalmers himself disbelieves in P-zombies. Chalmers doesn’t think that P-zombies might actually exist in the real world. Rather, he thinks that P-zombies could have existed had the universe been governed by different laws. In other words, his “belief” in P-zombies is an artifact of how he assigns truth-values to counterfactuals.
I agree that he makes these truth-value assignments in a wrong way. But he doesn’t really believe in P-zombies. He doesn’t believe that they are actual. He does believe that they are possible-but-not-actual. In general, the mistake that P-zombie believers make is a mistake about how to think about counterfactuals. Not making that mistake might be a “slam dunk”. If it’s not, then P-zombie disbelief isn’t either.
I also agree that Chalmers’s beliefs about P-zombies lead him to make certain actually-incorrect metaphysical assertions. But these assertions are wrong in the same sense that your belief in a subjective thread of consciousness is wrong. [ETA: I’m not saying that you think about counterfactuals wrongly. You take exactly the right approach to them in my book.]
There’s another possible belief, p-zombies-aren’t-possible-but-I’d-sure-like-to-know-why; that is, that while the existence of non-zombies proves the impossibility of any world with zombies, it is still possible to (counter-factually) conceive of an existence where it was the other way around. Though there would have been nobody to wonder about it.
I’ve talked to a number of apparent p-zombie believers who, under careful questioning, turn out to be asking this question instead. I’m pretty sure it’s not the same question.
A better way to anchor would be to use predictions about the future. How about finding 50 contrarian predictions about what will happen in 2010 and using them as a control for 50 other contrarian questions for which we will never have 100% sure answers?
The accuracy of the predictions could easily end up far more correlated with something other than the predictor’s likely predictive accuracy on as-yet-undetermined questions.
Example: I go back to mid 2000 and ask 50 Americans to make predictions about their economy, budget deficit, world standing etc of America 8 years later. The highest scorers will be pessimists, not necessarily rationalists making the best use of data available. The winner: someone who thought Al Gore was about to wreck the country.
That’s basically a problem you handle by choosing questions whose answers don’t correlate with each other. Let’s say I ask whether you think that a company will win any of the X Prizes in 2010. I don’t necessarily think that the answer correlates with optimism or pessimism about the economy in that timeframe.
You can probably even statistically control for optimism/pessimism.
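One minimal way to do that control, assuming you also collected an optimism score per respondent (names and data below are placeholders): regress each person's prediction accuracy on their optimism score and rank people by the residual rather than by raw accuracy.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100
    optimism = rng.normal(size=n)                    # placeholder optimism/pessimism score
    accuracy = 0.5 * optimism + rng.normal(size=n)   # placeholder accuracy on the test questions

    # Ordinary least squares of accuracy on optimism; the residual is the part
    # of accuracy that optimism/pessimism does not explain.
    X = np.column_stack([np.ones(n), optimism])
    coef, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
    residual_accuracy = accuracy - X @ coef

    # Rank predictors by residual_accuracy, so that simply having been a
    # pessimist in 2000 doesn't get counted as predictive skill.
    print("estimated optimism effect on accuracy: %.2f" % coef[1])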
I think you’re right, if you managed to ask questions where accuracy at answering set 1 correlated very strongly with accuracy with set 2...
but didn’t correlate strongly with other factors, such as pessimism or politics...
and you manage to do that despite lots of uncertainty about those answering the questions (you’re still trying to find out about their beliefs, after all)...
then you win.
If you ask people whether they believe in atheism, as Eliezer suggested, that also has the problem of being correlated with political beliefs. It’s nearly entirely a question about what priors you have, because there’s no other information on which you can reason.
We could make a contest of finding questions whose results don’t correlate with the other questions. Thanks to Google Docs, quizzing people online is easy these days.
Eric Falkenstein wrote in 2016 that he converted to Christianity and doesn’t believe in evolution via natural selection. This doesn’t automatically mean that he was a crank in 1994, but to me it’s moderate evidence, plus a warning that we can’t trust people who have good excuses for contrarianism on some subject to have a good opinion on any other subject.
Hunch.com does this sort of data mining on their users, and they have lots of users. It seems it would be pretty easy for them to do this sort of analysis for questions raised in this post, much as they have with How Food Preferences Vary by Political Ideology and Mac vs PC People: Personality Traits & Aesthetic/Media Choices.
Can I ask for your sources or reasoning as to why “WTC explosives: no” is a slam-dunk?
At the moment the comment i’m replying to is at −1 karma.
Now, even if PlaidX is on the wrong side of a “slam-dunk” issue here, i question whether it’s right to downvote this considering that he’s really just asking for an explanation of someone’s reasoning.
I don’t question this at all. Downvotes signal that the community thinks that the posts in question are not worth considering. As Eliezer has said, 9/11 is a “slam dunk”, and I have no problem downvoting posts advocating ridiculous theories like creationism or conspiracy theories. Hopefully, this community agrees that 9/11 conspiracy theories are not worth considering.
However ridiculous creationism or conspiracy theories may be, it’s still useful to have a clear explanation of why they are ridiculous, if only for the sake of anyone who’s not completely up to date on the topic. For the other example you named, creationism, even if creationism is extremely silly it’s still useful to have a summary of why evolution is more reasonable.
For myself, i can say that i’m not a conspiracy theorist, but i haven’t really researched this topic, so i don’t have a justification for “why ‘WTC explosives: no’ is a slam-dunk” off the top of my head. So, the discussion resulting from PlaidX’s initial question has raised at least one good point that i hadn’t thought of before.
I think that the question is still valid as a way to get clarification on what constitutes a “slam dunk”. Eliezer seems to be using it in a way that doesn’t exactly mean “not worth considering”. He’s also trying to delineate certain psychological types.
I have no sources, so I’ll stick with the slam dunk prior until someone finds some.
A friend of mine once asked me my opinion on the 9/11 conspiracy theories, and I said I didn’t think there was much to them, and he said “What about WTC building 7? It collapsed at near free-fall speed into its own footprint, despite not being hit by a plane.” and I said “I’m sure you’re mistaken, but I’ll look into it.”
And so I looked into it, and… well, he wasn’t mistaken. A 47 story skyscraper collapsed at near free-fall speed into its own footprint, despite not being hit by a plane.
The FEMA report contains the following Rube Goldbergian explanation for the collapse:
Power to the Twin Towers was wired from the substation in WTC 7 through two separate systems. The first provided power throughout each building; the second provided power only to the emergency systems. In the event of fire, power would only be provided to the emergency systems. This was to prevent arcing electric lines igniting new fires and to reduce the risk of firefighters being electrocuted. There were also six 1,200 kW emergency power generators located in the sixth basement (B-6) level of the towers, which provided a backup power supply. These also had normal and emergency subsystems.
Previous to the collapse of the South Tower, the power to the towers was switched to the emergency subsystem to provide power for communications equipment, elevators, emergency lighting in corridors and stairwells, and fire pumps and safety for firefighters. At this time power was still provided by the WTC 7 substation.
Con Ed reported that “the feeders supplying power to WTC 7 were de-energized at 9:59 a.m.”. This was due to the South Tower collapse which occurred at the same time.
Unfortunately, even though the main power system for the towers was switched off and WTC 7 had been evacuated, a design flaw allowed generators (designed to supply backup power for the WTC complex) to start up and resume an unnecessary and unwanted power supply.
Unfortunately, debris from the collapse of the north tower (the closest tower) fell across the building known as World Trade Center Six, and then across Vesey Street, and then impacted WTC 7 which is (at closest) 355 feet away from the north tower.
Unfortunately, some of this debris penetrated the outer wall of WTC 7, smashed half way through the building, demolishing a concrete masonry wall (in the north half of the building) and then breached a fuel oil pipe that ran across the building just to the north of the masonry wall.
Unfortunately, though most of the falling debris was cold, it manages to start numerous fires in WTC 7.
Unfortunately, even with the outbreak of numerous fires in the building, no decision was made to turn off the generators now supplying electricity to WTC 7. Fortunately, for the firefighters, someone did make the decision not to fight and contain the fires while they were still small, but to wait until the fires were large and out of control. Otherwise, many firefighters may have been electrocuted while fighting the fires.
Unfortunately, the safety mechanism that should have shut down the fuel oil pumps (which were powered by electricity) upon the breaching of the fuel line, failed to work and fuel oil (diesel) was pumped from the Salomon Smith Barney tanks on the ground floor onto the 5th floor where it ignited. The pumps eventually emptied the tanks, pumping some 12,000 gallons in all.
Unfortunately, the sprinkler system of WTC 7 malfunctioned and did not extinguish the fires.
Unfortunately, the burning diesel heated trusses one and two to the point that they lost their structural integrity.
Unfortunately, this then (somehow) caused the whole building to collapse, even though before September 11, no steel framed skyscraper had ever collapsed due to fire.
The NIST report says that the failure of a single column near ground level led first to a vertical progression of failures, causing the collapse of the East Penthouse, followed by a horizontal progression of failures leading to the near-simultaneous collapse of all of the building’s 27 core columns.
The official 9/11 commission report, in its 568 pages, does not mention building 7 at all.
That explanation seems a lot less Rube Goldbergian than a sinister conspiracy rigging a side building that wasn’t hit by a plane with explosives. What on Earth would have been the point? Which of the conspiracy’s goals will fail to be achieved if building 7 does not fall down? All you’re doing here is learning a valuable lesson about the ability of conspiracy theorists to present evidence that looks around that convincing in favor of anything. Recalibrate your sensors for how much evidence something which looks “around that convincing” is.
Well, building 7 was insured for hundreds of millions of dollars.
In addition, building 7 housed documents relating to numerous SEC investigations. The files for approximately three to four thousand cases were destroyed, according to the Los Angeles Times.
So some shadowy group kills two thousand people, arranges planes to get flown into the WTC towers, the Pentagon, and the middle of Pennsylvania, and does hundreds of billions in damage to the economy to pick up an insurance check… when the building was on some of the most expensive real estate in the world? Or to destroy evidence the SEC had? Is that how you would do it? Really?
Eliezer Yudkowsky has requested that further discussion on this subject be moved to the new 9/11 conspiracy topic he made, over here.
And it was worth hundreds of millions of dollars, too. In a word, pffft.
Those events are a priori unlikely, but given that WTC 7 did in fact fall down, the above seems as likely a sequence as any. Certainly more likely than a conspiracy.
Did someone explain the sequence of events that led to the building falling by positing a design flaw or has the existence of the design flaw been confirmed independently? It doesn’t really matter but it would be interesting to know. I have the same question re: other mechanical failures and design issues.
The remaining sequence of events seems basically plausible given unique circumstances and an uncoordinated response (which was justifiably focused on the towers). And the rest is just noise- in the exact same way the weird facts about glass on clothing, washing machines and mops are noise in the Knox case.
Actually, what we have here is considerably worse than the case against Knox. At least the Knox prosecutors are able to tell a story consistent with the facts in which Knox is guilty. Here we are expected to believe there was a conspiracy without having any idea how such a conspiracy could have happened. There is no plausible motive given the kind of coordination that would have been necessary. No explanation for how so many people were kept quiet. There isn’t even a suspect! Just something seemingly improbable and a lot of hand waving. Knox and Sollecito’s prosecutors were privileging the hypothesis; here we don’t even have a hypothesis.
I’ve addressed the motive in another subthread.
As to the design flaw, yes, it’s hypothetical, as is the debris falling across the street and through the concrete wall in the middle of the building, as is the fuel system even having any fuel in it, etc.
I certainly sympathize with your complaint about noise as it applies to conspiracy theories in general, this is indeed problem #1. Massive, massive amounts of red herrings. I think this summary is fairly clean of it, but if you have specific complaints I’d be happy to hear them.
I agree that the chain seems convoluted, but do we really have a baseline for what is plausible when airplanes start flying into buildings in a dense urban area?
Well, it’s not like WTC 7 was hit by any of those airplanes, but I suppose one might argue for a certain “the world has gone topsy-turvy” latitude in explanation. How this additional uncertainty results in “no explosives” being a “slam dunk”, I’m not sure.
http://www.youtube.com/watch?v=Atbrn4k55lA
It certainly LOOKS like a controlled demolition.
(Apologies if this is the same question that gets asked in every thread of this kind; I freely admit to not having researched this.)
What motive would the conspirators have for demolishing WTC7 with explosives? If they wanted to start a war or increase wiretapping or get Bush re-elected, or whatever the motive was, flying planes into the towers was enough. Blowing up WTC7, and especially blowing up WTC7 without arranging a plausible explanation (like “a plane flew into it”, as they did with the towers) seems careless and unnecessary—out of character for a group of people careful and competent enough to arrange 9/11 and get away with it.
This is a good question; I’ve replied to Yudkowsky’s rather more inflammatory version of it above.
For the record, it’s simply not true that fires never cause steel buildings to collapse.
The only total collapse due to fire in that PDF that I see is a 19 story concrete Russian apartment block. That and the buildings from 9/11.
What irks me about this is that you probably don’t know what an uncontrolled demolition brought on by massive pieces of falling building, thousands of gallons of rushing diesel fuel, and apparently unstable electric conditions ought to look like. I certainly don’t.
I expect it would look like the building FALLING OVER, among other things. Making a building fall straight down into its own footprint is actually quite tricky. Buildings are designed to stay in one piece.
Well then why wouldn’t they plant explosives in such a way as to make the building FALL OVER?
Seriously, spend like 5 seconds figuring out what we’re likely to reply before you post.
Off the top of my head, pulverizing the buildings into small pieces allows for a much more complete destruction of evidence than simply tipping them over would have. After building seven “fell down”, the rubble was quickly shipped off to blast furnaces, ironically under the supervision of a company called “Controlled Demolition Inc.”
Evidence of how the alleged demolition was accomplished is best eliminated by demolishing the building?
Ironically, what you find to be an ironic coincidence sends the signal that you’re inappropriately excited by cute but totally non-causal coincidences.
EDIT: Whoops, forgot we were supposed to be discussing this on the other page.
Reply is now here.
They’re designed to stay in one piece under normal conditions, and predictable disaster conditions. Clearly this wasn’t one of those, but you expect the same thing to happen?
Given that that’s what happens in failed controlled demolitions, yeah, I do.
http://www.youtube.com/watch?v=ZwGE92upfQM
http://www.youtube.com/watch?v=UsePUn5-88c
Wait, what? Neither of those tipped like you said you would expect.
And failed controlled demolitions are not unprecedented disaster conditions, but I suspect this discussion is not worth having.
I meant that they stayed in one piece, as per your objection. No, they did not fall over, but then these have had their lower floors taken out symmetrically. Presumably a natural disaster would not be as forgiving.
Such material should come with a link to an official source. Right now, I lack the motivation to research on my own, but until I see a confirmation, I can’t exclude a hypothesis that the above text was concocted by a conspiracy theorist.
The summaries certainly were written by conspiracy theorists.
Here’s the FEMA report
And this is the NIST report
I’m not sure how to source something not mentioning something, but this is the official 9/11 commission report, if you’d like to not read about building 7 at all.
This link is to a conspiracy theory site, with inline comments by conspiracy theorists! The link to the original document at the end of the page doesn’t work.
This is a news piece about the report, not the report itself. It contains a link to the report. In the report, page 33 begins the executive summary. As opposed to the summary you posted, it isn’t optimized for sounding ridiculous, while acknowledging the fact that similar fires never collapsed similar buildings before, etc.
I would like to reply to you, but Eliezer Yudkowsky has requested that further discussion on this subject be moved to the new 9/11 conspiracy topic he made in order to mock it in more detail, over here.
Please repost your comment there, I’m on thin ice and I don’t want to break the rules by replying to you in an unsuitable zone.
I’m not going to repost my comment, but you are welcome to reply on the other thread.
Eliezer has implied in the past that he trusts “domain experts” implicitly. My guess would be that “WTC explosives: no” is a slam-dunk because there is a consensus on the topic among all the qualified engineers (to my knowledge).
Or else he’s part of the conspiracy.
Please move further conversation on this topic to the actual post for it, and I should mention that I don’t see a good reason for there to be any other posts on the topic on this blog.
EDIT: Not implying that conversation occurring before this comment was blameworthy.
My apologies. In my defense, you wrote the “actual post” ten hours after I wrote that comment, and everyone arguing with me, including you, is doing so here.
http://lesswrong.com/lw/1kj/the_911_metatruther_conspiracy_theory/
Did you just use your future self as a source?
What’s the difference between a contrarian and a crackpot?
I am a bold thinker, you are a contrarian, he/she is a crackpot.
The degree of disrespect the speaker desires to convey to the labelled individual.
Crackpots are wrong contrarians.
(Not really. There’s this whole weird complex of personality traits that goes along with that.)
Someone who’s correctly contrarian on one issue is likely to be a crackpot on another. You hope that, by averaging contrarians together, the crackpot opinions will be averaged out.
“I’ve never heard of any surveys like this actually being done, but it sounds like quite an interesting dataset to have, if it could be obtained.”
This would be a really fun dataset! See how many dimensions it reduces to and what the bases are.
Yes, collect data! You might even be able to make common cause with contrarians you disagree with in the collection of this data.
Short heuristic:
If you disagree with James Randi on many things about which he is outspoken, you’re probably crazy. ;)
Why James Randi is not on my list.
Maybe Joe Nickell is a better representative of the skeptic community, then?
Maybe we could agree that belief in too many of these probably means you’re crazy? ;)
The community currently going under the name “skeptics” usually attacks easy targets that are already unpopular with the intelligentsia, like homeopathy. Let’s see what Joe Nickell thinks about many-worlds first. Shermer and Penn & Teller have failed similar tests.
EDIT: Being a skeptic is just as easy as being a contrarian (in fact, it’s the opposite), and the test of whether a skeptic’s cognition provides bayes-fuel is whether they fail to critique contrarian theories that are correct. This deserves a post which I might or might not have time to do.
I think Richard Dawkins passes the many-worlds test (8:36), at least if you allow for characteristic British understatement and a lack of training in physics.
Good for him!
Actually, this considerably increases my respect for Dawkins as a general rationalist and causes me to considerably bump the probability that someone from SIAI should try contacting him. I’ll forward your comment to Vassar.
Already in progress.
I’d be interested in knowing how you go about contacting and communicating with someone like Richard Dawkins, i.e. a good rationalist whose only knowledge of the Singularity probably comes from listening to one of Kurzweil’s talks. Actually, I’d like to read your e-mail to him, but that may be asking too much. :)
So how did this work out?
A couple years of ‘yes’ without firm commitments. Not holding my breath.
If being a skeptic is the opposite of being a contrarian, your three “slam dunks” won’t distinguish very well—unless you’re assuming we’ve already established the person is a contrarian? Many-worlds seems to be pretty mainstream these days. And as for atheism and P-zombies, doesn’t naturalism/materialism generally go along with skepticism? I think this forces the question of just who you’re talking about being contrary to.
It’s so hard to find good slam-dunks these days.
This is an old thread, so I probably won’t get a response, but I’m just curious: could you clarify what issues you think Shermer and P&T got wrong? Are you just referring to the cryonics thing with the latter? Or something else too?
I see it lists memetics, hypnosis and subliminal perception as pseudoscience. I’d put >50% on each of these being a real phenomenon.
I think for areas like these we should distinguish between believing a popular myth (eg. Hypnotized assassins, James Vicary’s “Eat Popcorn”) versus believing the phenomenon exists at all.
Well, that’s why I added the qualifier “too many”.
I defy the data, and raise you one counter-argument.
I’m not saying I believe in the Mars effect, I’m saying that it looks to me like CSICOP found it more important to refute the enemy position than to behave cleanly throughout. Is that data worth defiance?
Jim Lippard reviewed the whole affair and concluded that CSICOP had transgressed; I found the review convincing.
The term “contrarian” is rather vague about who the disagreements are with.
There are many optical illusions, etc., where what most people think is wrong. The key thing is not simply disagreeing with a majority, but disagreeing with experts in the area—and not just any experts (lest we recognise priests as authorities on theology), but real experts.
And how again does one do this? Why not just be contrarian about the object level arguments rather than engaging in an infinite regress and being contrarian about who are the experts on who are the experts on who are the experts...?
It is complicated. However, I regularly make decisions about who are the experts on a given topic—and the heuristics I use have some value, and don’t involve an infinite regress.
The issue is not about the topic the disagreement is about, but over who the disagreement is with. Believing things contrary to the beliefs of a simple majority is commonplace—and not necessarily a sign of problems. Most people are ignorant, stupid and biased.
It makes sense, in informal contexts, to espouse a contrarian view, particularly if it is the opposite to what you actually hold to be true and the issue at hand is frequently disputed. In doing so, one can strengthen the future presentation of one’s own real position using information or strategies garnered from your interlocutors’ responses (assuming that a contrary discussion elicits more of potential value than one conducted in agreement).
In short: argue against yourself when it doesn’t matter in order better to argue for yourself when it does.
There can be plenty of signalling reasons to publicly support contrarian positions. For example, they can help generate discussion, make it seem that you know something which most people do not, help solicit support from those who support minorities—and so on.
A decade late to the party, I’d like to join those skeptical of EY’s use of many-worlds as a slam-dunk test of contrarian correctness. Without going into the physics (for which I’m unqualified), I have to make the obvious general objection that it is sophomoric for an amateur in an intellectual field—even an extremely intelligent and knowledgeable one—to claim a better understanding than those who have spent years studying it professionally. It is of course possible for an amateur to have an insight professionals have missed, but very rare.
I had a similar feeling on reading EY’s Inadequate Equilibria, where I was far from convinced by his example that an amateur can adjudicate between an economics blogger and central bankers and tell who is right. (EY’s argument that the central bankers may have perverse incentives to give a dishonest answer is not that strong, since they may give an honest answer anyway, and the fact that with 20-20 hindsight it might look like they were wrong just shows that economics is an inexact science.) The economics blogger may make points that seem highly plausible and convincing to an amateur, but then, one-sided arguments often do.
Back to physics, any amateur who says “many-worlds is just obvious if you understand it, so those who say otherwise are obviously wrong” is claiming a better understanding than many professionals in the field; again backed with allegations of perverse incentives. Though the latter carry some weight, I’d put my money on the amateur just being overconfident, and having missed something.
If anything I’d judge people on the sophistication of their reasons rather than the opinion itself. E.g. I’d take more notice of someone who had a sophisticated reason for denying that 1 + 1 = 2 than someone who said ‘it’s just obvious, and anyone who says otherwise is an idiot’.
(I for one have doubts that 1 + 1 = 2; the most I’d be prepared to say is that 1 + 1 often equals 2. And I’m in good company here—e.g. Wittgenstein had grave doubts that simple counting and addition (and indeed any following of rules) are determinate processes with definite results, something which he discussed in seminars with Alan Turing among other students of his.)
The one kind of case in which I’d prefer the factual opinion of a sophisticated amateur to a professional is in fields which don’t involve enough intellectual rigour. For example I’d rather believe an amateur with an advanced understanding of evolutionary psychology than some gender studies professors to give a correct explanation of certain social phenomena; not just because the professors may well have an ideological axe to grind, but also because they may lack the scientific rigour necessary to understand the subtleties of causation and statistics.
You’re leaning heavily on the concept “amateur”, which (a) doesn’t distinguish “What’s your level of knowledge and experience with X?” and “Is X your day job?”, and (b) treats people as being generically “good” or “bad” at extremely broad and vague categories of proposition like “propositions about quantum physics” or “propositions about macroeconomics”.
I think (b) is the main mistake you’re making in the quantum physics case. Eliezer isn’t claiming “I’m better at quantum physics than professionals”. He’s claiming that the specific assertion “reifying quantum amplitudes (in the absence of evidence against collapse/agnosticism/nonrealism) violates Ockham’s Razor because it adds ‘stuff’ to the universe” is false, and that a lot of quantum physicists have misunderstood this because their training is in quantum physics, not in algorithmic information theory or formal epistemology.
I think (a) is the main mistake you’re making in the economics case. Eliezer is basically claiming to understand macroeconomics better than key decisionmakers at the Bank of Japan, but based on the results, I think he was just correct about that. As far as I can tell, Eliezer is just really good at economic reasoning, even though it’s not his day job. Cf. Central banks should have listened to Eliezer Yudkowsky (or 1, 2, 3).
Eliezer’s econ case is based on reading Scott Sumner’s blog, so it’s not very informative that Sumner praises Eliezer (3 out of 4 endorsements you linked, the remaining one is anon).
bfinn was discounting Eliezer for being a non-economist, rather than discounting Sumner for being insufficiently mainstream; and bfinn was skeptical in particular that Eliezer understood NGDP targeting well enough to criticize the Bank of Japan. So Sumner seems unusually relevant here, and I’d expect him to pick up on more errors from someone talking at length about his area of specialization.
First, thanks for your comments on my comments, which I thought no-one would read on such an old article!
Re your quantum physics point, with unusual topics like this that overlap with philosophy (specifically metaphysics), it is true that physicists can be out of their depth on that part of it, and so someone with a strong understanding of metaphysics (even if not a professional philosopher as such) can point out errors in the physicists’ metaphysics. That said, saying X is clearly wrong (due to faulty metaphysics) is a weaker claim than that Y is clearly right, particularly if there are many competing views. (As there are AFAIK even in the philosophy of QM.) Just as a professional physicist can’t be certain about getting the metaphysics bit of QM right, even a professional philosopher couldn’t be certain about the physics bit of it; not certain enough to claim a slam-dunk. So without going into the specifics of the case (which I’m not qualified to do) it still seems like an overreach.
Also, more generally, I assume interdisciplinary topics like this (for which a highly knowledgeable amateur could spot flaws in the reasoning of someone who’s a professional in one discipline but not the other) are the exception rather than the rule.
Re the economics case, well, for all I know, EY may well have been right in this case (and for the right reasons), but if so then it’s just a rare example of an amateur who has a very high professional-level understanding of a particular topic (though presumably not of various other parts of economics). I.e. this is an exception.
That said, and without going into the fine details of the case, the professionals here presumably include the top macroeconomists in Japan. Is it really plausible that EY understands the relevant economics and knows more relevant information than them? (E.g. they may well have considered all kinds of facts & figures that aren’t public or at least known to EY.) Which is presumably where the issue of other biases/influences on them would come in; and while I accept that there could be personal/political biases/reasons for doing the economically wrong thing, this can be too easy a way of dismissing expert opinion.
So I’d still put my money on the professional vs the amateur, however persuasive the latter’s arguments might seem to me. And again, the fact that the Bank of Japan’s decision turned out badly may just show that economics is an inexact science, in which correct bets can turn out badly and incorrect bets turn out well.
One other exception I’d like to add to my original comment: it is certainly true that a highly expert professional in a field can be very inexpert in topics that are close to but not within their own specialism. (I know of this in my own case, and have observed it in others, e.g. lawyers. E.g. a corporate lawyer may only have a sketchy understanding of IP law. Though they are well aware of this.)
You should also take into account that Eliezer seems to have been right, as an “amateur” AI researcher, about AI alignment being a big deal.
The alignment problem is arguably another example, like my above response re quantum physics, of a field spilling over into philosophy, such that even a strong amateur philosopher can point things out that the AI professionals hadn’t thought through. I.e. it shows that AI alignment is an interdisciplinary topic which (I assume) went beyond existing mainstream AI.
Huh? Strong evidence for that would be us all being dead. Or did you just mean that some people in the field agree with him?
I want to insist that “it’s unreasonable to strongly update about technological risks until we’re all dead” is not a great heuristic for evaluating GCRs.
The latter has come to be true, in no small part as a result of his writing. This implies that there was indeed something academics were missing about alignment.
Only a minority agree with him. Any number of (contradictory!) ideas will “seem to be right” if the criterion is only that some people agree with them.
A sizable shift has occurred because of him, which is different than your interpretation of my position. If you’re convincing Stuart Russell, who is convincing Turing award winners like Yoshua Bengio and Judea Pearl, then there was something that wasn’t considered.
I am somewhat surprised that Free Will, which was assigned as the first exercise in reductionism, is not up there instead of MWI or P-Z; even if conclusions in these areas are as clear, they are further up the inferential ladder (unless that in itself is part of the “test”; not sure why it would be).
Can I humbly suggest that a tool along the lines of the one proposed here:
http://lesswrong.com/lw/2rw/proposal_for_a_structured_agreement_tool/
might be useful for the purpose?
So are you saying there isn’t a world where Jesus is Batman?
I happen to know a few guys, religious, who have made the Many-Worlds God Argument: since all possible worlds exist, it follows that in some world God exists; and since God is omnipotent, he rules our world too.
What’s our obsession with finding people who are more likely to be right supporting us? Is it for validation, or so we can leech off their authority? One of the saddest psychiatric phenomena is the association between insight into psychotic symptoms and depression, as well as suicide. The earlier the insight, the higher the tendency towards suicide. Would you really want to surround yourself with people who have insight into the state of reality, into the state of your mind, and who, by consequence, will give you greater insight into yourself, perhaps beyond practical advancement?