What is bunk?
Related: http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/, http://lesswrong.com/lw/1mh/that_magical_click/, http://lesswrong.com/lw/18b/reason_as_memetic_immune_disorder/
Given a claim, and assuming that its truth or falsehood would be important to you, how do you decide if it’s worth investigating? How do you identify “bunk” or “crackpot” ideas?
Here are some examples to give an idea.
“Here’s a perpetual motion machine”: bunk. “I’ve found an elementary proof of Fermat’s Last Theorem”: bunk. “9-11 was an inside job”: bunk.
“Humans did not cause global warming”: possibly bunk, but I’m not sure. “The Singularity will come within 100 years”: possibly bunk, but I’m not sure. “The economic system is close to collapse”: possibly bunk, but I’m not sure.
“There is a genetic difference in IQ between races”: I think it’s probably false, but not quite bunk. “Geoengineering would be effective in mitigating global warming”: I think it’s probably false, but not quite bunk.
(These are my own examples. They’re meant to be illustrative, not definitive. I imagine that some people here will think “But that’s obviously not bunk!” Sure, but you probably can think of some claim that *you* consider bunk.)
A few notes of clarification: I’m only examining factual, not normative, claims. I also am not looking at well established claims (say, special relativity) which are obviously not bunk. Neither am I looking at claims where it’s easy to pull data that obviously refutes them. (For example, “There are 10 people in the US population.”) I’m concerned with claims that look unlikely, but not impossible. Also, “Is this bunk?” is not the same question as “Is this true?” A hypothesis can turn out to be false without being bunk (for example, the claim that geological formations were created by gradual processes. That was a respectable position for 19th century geologists to take, and a claim worth investigating, even if subsequent evidence did show it to be false.) The question “Is this bunk?” arises when someone makes an unlikely-sounding claim, but I don’t actually have the knowledge right now to effectively refute it, and I want to know if the claim is a legitimate subject of inquiry or the work of a conspiracy theory/hoax/cult/crackpot. In other words, is it a scientific or a pseudoscientific hypothesis? Or, in practical terms, is it worth it for me or anybody else to investigate it?
This is an important question, and especially to this community. People involved in artificial intelligence or the Singularity or existential risk are on the edge of the scientific mainstream and it’s particularly crucial to distinguish an interesting hypothesis from a bunk one. Distinguishing an innovator from a crackpot is vital in fields where there are both innovators and crackpots.
I claim bunk exists. That is, there are claims so cracked that they aren’t worth investigating. “I was abducted by aliens” has such a low prior that I’m not even going to go check up on the details—I’m simply going to assume the alleged alien abductee is a fraud or nut. Free speech and scientific freedom do not require us to spend resources investigating every conceivable claim. Some claims are so likely to be nonsense that, given limited resources, we can justifiably dismiss them.
But how do we determine what’s likely to be nonsense? “I know it when I see it” is a pretty bad guide.
First idea: check if the proposer uses the techniques of rationality and science. Does he support claims with evidence? Does he share data and invite others to reproduce his experiments? Are there internal inconsistencies and logical fallacies in his claim? Does he appeal to dogma or authority? If there are features in the hypothesis itself that mark it as pseudoscience, then it’s safely dismissed; no need to look further.
But what if there aren’t such clear warning signs? Our gracious host Eliezer Yudkowsky, for example, does not display those kinds of obvious tip-offs of pseudoscience—he doesn’t ask people to take things on faith, he’s very alert to fallacies in reasoning, and so on. And yet he’s making an extraordinary claim (the likelihood of the Singularity), a claim I do not have the background to evaluate, but a claim that seems implausible. What now? Is this bunk?
A key thing to consider is the role of the “mainstream.” When a claim is out of the mainstream, are you justified in moving it closer to the bunk file? There are three camps I have in mind, who are outside the academic mainstream, but not obviously (to me) dismissed as bunk: global warming skeptics, Austrian economists, and singularitarians. As far as I can tell, the best representatives of these schools don’t commit the kinds of fallacies and bad arguments of the typical pseudoscientist. How much should we be troubled, though, by the fact that most scientists of their disciplines shun them? Perhaps it’s only reasonable to give some weight to that fact.
Or is it? If all the scientists themselves are simply making their judgments based on how mainstream the outsiders are, then “mainstream” status doesn’t confer any information. The reason you listen to academic scientists is that you expect that at least some of them have investigated the claim themselves. We need some fraction of respected scientists—even a small fraction—who are crazy enough to engage even with potentially crackpot theories, if only to debunk them. But when they do that, don’t they risk being considered crackpots themselves? This is some version of “Tolerate tolerance.” If you refuse to trust anybody who even considers seriously a crackpot theory, then you lose the basis on which you reject that crackpot theory.
So the question “What is bunk?”, that is, the question, “What is likely enough to be worth investigating?”, apparently destroys itself. You can only tell if a claim is unlikely by doing a little investigation. It’s probably a reflexive process: when you do a little investigation, if it’s starting to look more and more like the claim is false, you can quit, but if it’s the opposite, then the claim is probably worth even more investigation.
The thing is, we all have different thresholds for what captures our attention and motivates us to investigate further. Some people are willing to do a quick Google search when somebody makes an extraordinary claim; some won’t bother; some will go even further and do extensive research. When we check the consensus to see if a claim is considered bunk, we’re acting on the hope that somebody has a lower threshold for investigation than we do. We hope that some poor dogged sap has spent hours diligently refuting 9-11 truthers so that we don’t have to. From an economic perspective, this is an enormous free-rider problem, though—who wants to be that poor dogged sap? The hope is that somebody, somewhere, in the human population is always inquiring enough to do at least a little preliminary investigation. We should thank the poor dogged saps of the world. We should create more incentives to be a poor dogged sap. Because if we don’t have enough of them, we’re going to be very mistaken when we think “Well, this wasn’t important enough for anyone to investigate, so it must be bunk.”
(N.B. I am aware that many climate scientists are being “poor dogged saps” by communicating with and attempting to refute global warming skeptics. I’m not aware if there are economists who bother trying to refute Austrian economics, or if there are electrical engineers and computer scientists who spend time being Singularity skeptics.)
An important point here is that the intellectual standards of the academic mainstream differ greatly between various fields. Thus, depending on the area we’re talking about, the fact that a view is out of the mainstream may imply that it’s bunk with near-certainty, but it may also tell us nothing if the mainstream standards in the area are especially bad.
From my own observations of research literature in various fields and the way academia operates, I have concluded that healthy areas where the mainstream employs very high intellectual standards of rigor, honesty, and judicious open-mindedness are normally characterized by two conditions:
(1) There is lots of low-hanging fruit available, in the sense of research goals that are both interesting and doable, so that there are clear paths to quality work, which makes it unnecessary to invent bullshit instead.
(2) There are no incentives to invent bullshit for political or ideological reasons.
As soon as either of these conditions doesn’t hold in an academic area, the mainstream will become infested with worthless bullshit work to at least some degree. For example, condition (2) is true for theoretical physics, but in many of its subfields, condition (1) no longer holds. Thus we get things like the Bogdanoff affair and the string theory wars—regardless of who (if anyone) is right in these controversies, it’s obvious that some bullshit work has infiltrated the mainstream. Nevertheless, the scenario where condition (1) doesn’t hold, but (2) does is relatively benign, and such areas are typically still basically sound despite the partial infestation.
The real trouble starts when condition (2) doesn’t hold. Even if (1) still holds, the field will be in a hopeless confusion where it’s hardly possible to separate bullshit from quality work. For example, in the fields that involve human sociobiology and behavioral genetics, particularly those that touch on the IQ controversies, there are tons of interesting study ideas waiting to be done. Yet, because of the ideological pressures and prejudices—both individual and institutional—bullshit work multiplies without end. (Again, regardless of whom you support in these controversies, it’s logically impossible that no side is bullshitting.) Thus, on the whole, condition (2) is even more critical than (1).
When neither (1) nor (2) holds in some academic field, it tends to become almost pure bullshit. Macroeconomics is the prime example.
So, to apply my above criteria to these cases:
Climate science is politicized to an extreme degree and plagued by vast methodological difficulties. (Just think about the difficulty of measuring global annual average temperature with 0.1C accuracy even in the present, let alone reconstructing it far into the past.) Thus, I’d expect a very high level of bullshit infestation in its mainstream, so critics scorned by the mainstream should definitely not be dismissed out of hand.
Ditto for mainstream vs. Austrian macroeconomics; in fact, even more so. If you look at the blogs of prominent macroeconomists, you’ll see lots of ideologically motivated mutual scorn and abuse even within the respectable mainstream. Austrians basically call bullshit on the entire mainstream, saying that the whole idea of trying to study economic aggregates by aping physics is a fundamentally unsound cargo-cult approach, so they’re hated by everyone. While Austrians have their own dubious (and sometimes obviously bunk) ideas, their criticism of the mainstream should definitely be taken into account considering its extreme level of politicization and lack of any clearly sound methodology.
As for singularitarians, they don’t really face opposition from some concrete mainstream academic group. The problem is that their claims run afoul of the human weirdness heuristic, so it’s hard to get people to consider their arguments seriously. (The attempts at sensationalist punditry by some authors associated with the idea don’t help either.) But my impression is that many prominent academics in the relevant fields who have taken the time to listen to the singularity arguments take them respectfully and seriously, certainly with nothing like the scorn heaped on dissenters and outsiders in heavily politicized fields.
If it’s not presumptuous of me, I’d like the Bogdanoff affair removed as an example. I was one of the Wikipedia administrators deeply involved in the BA edit-wars on Wikipedia, and while I originally came to it with an open mind (which is why I was asked to intervene), there quickly came to be not a single doubt in my mind that the brothers were complete con artists who possessed only a talent for self-promotion and media manipulation.
This is unlike string theory, where there are good arguments on both sides and one could genuinely be uncertain.
However, would you agree that the Bogdanoff brothers’ work has been, at least at some points, approved and positively reviewed by credentialed physicists with official and reputable academic affiliations? After all, they successfully published several papers and defended their theses.
Now, it may be that after their work came under intense public scrutiny, it was shown to be unsound so convincingly that it led some of these reviewers to publicly reverse their previous judgments. However, considering that the overwhelming majority of research work never comes under any additional scrutiny beyond the basic peer review and thesis defense procedures, this still seems to me like powerful evidence that the quality of many lower-profile publications in the field could easily be as bad.
As I recall, they didn’t defend their theses, and only eventually got their degrees by a number of questionable devices like replacing a thesis with publications somewhere and forcing a shift to an entirely different field like mathematics.
EDIT: The oddities of their theses are covered in http://en.wikipedia.org/wiki/Bogdanoff_affair#Origin_of_the_affair
Very articulate comment, it helped clarify my thinking on this topic; thanks.
For me the primary evidence of a bunk claim is when the claimant fails to reasonably deal with the mainstream. Let’s take the creation/evolution debate. If someone comes along claiming a creationist position, but is completely unable to even describe what the evolutionary position is, or what might be good about it, then their idea is bunk. If someone is very good at explaining evolution as it really happens, but then goes on to claim something different can happen as well—then it becomes interesting.
Anyone proposing an alternative idea needs to know precisely what it is an alternative to—otherwise they haven’t done their homework, and it isn’t worth my time.
Yes! This is a key point in the Alternative-Science Respectability Checklist, for example:
Replace “creationist” and “evolutionary” in that sentence with “atheist” and “religious” respectively and you have the most common theist criticism of Dawkins.
Therefore, since theism is more-or-less the mainstream position, wouldn’t following your rule force you to conclude that Dawkins’ atheism is bunk?
Sorry to take a while to look at this.
It would. I’m aware of what Dawkins has said about this—that one doesn’t need to be an expert on fairies in order to conclude that they don’t exist, and that this ought to apply to Gods as well. This is fair enough.
It’s a rule of argument. If someone doesn’t want to learn about fairies, that’s their own concern. But if they want to persuade some other people who do believe in the fairies, they ought to take the time to learn enough about what those people say about fairies to plug into their world.
Theories are like languages, I think. If someone has a mental vocabulary which involves fairies, you will more easily persuade them if you can use the language too.
What too often happens is that a critic doesn’t learn the other person’s language. They end up misrepresenting what the other party believes, and then tell them that their first step to knowledge is to throw away a language they find useful in favour of a different one they’ve never used. They then go on to make arguments with no idea how I’m going to respond. As a persuasion strategy, this is a non-starter.
I’m not at all saying all theories/languages are equal, some are far better than others. But if you want to persuade an outsider, learning their language is only courteous, and gives you a huge advantage. You learn where the real problems of the other belief system are. You discover what it does successfully explain. You discover how to partially express your beliefs in their system, which makes it easier for them to accept and test what you’re saying.
My original point is that, as an optimisation, you can immediately reject any arguer who hasn’t realised that they need to talk the language of their hearers.
It does explain why Dawkins’s book has resulted in more heat than light. Reading it, one can summarise the book as saying “Your theism seems completely ridiculous, for all these reasons. I don’t know how you believe it.” The reply has more or less been “Yes, I can see that you don’t know how we believe it. Perhaps if you did know that, you would have written a better book.”
In the case of the theists, this is in fact quite difficult. I am actually a Christian myself, so I can give you the inside track here. The formal ‘arguments’ that Christians put forward for believing in God actually have little to do with the real reasons that they do believe. Christians in particular are persuaded principally by the part of their mind that deals with relationships and morality—they believe God is so much more morally right than anything else they’ve heard about that only his existence explains the improvement. It’s not based on the faculty of reason, though; it persuades through the interpersonal brain. Reason comes later, and actually maps quite poorly onto the essence of their faith for the most part. The outcome is what Dawkins observes—you can give those religious arguments a terrible intellectual thrashing, and it makes very little difference to what they believe afterwards. He’s not really speaking their language.
Another example of this is the opposite phenomenon—Christians, speaking in their own language that doesn’t connect to atheists, say they don’t believe in atheism because it’s immoral. If you can imagine how likely such an argument is to impress Richard Dawkins, you have perhaps a mirror image of why his argument doesn’t impress the theists.
Trying to express yourself in the language of the other party makes a huge difference. Let’s take as an example Max Tegmark’s mathematical universe hypothesis—the ultimate ensemble theory. Suppose he’s right about part of this—that everything that’s mathematically rationally describable ‘exists’ - whatever that is. But suppose he’s wrong about the other part—that this forms an ensemble. Suppose instead it forms a network—that everything that exists is actually a conceptual network of interconnected mathematical concepts—an ultimate network of ideas. Not inherently implausible—it certainly is about the simplest possible description of what might exist that I can think of.
It also is pretty much a description of an omniscient God. But if I’d just talked about that in my language, it wouldn’t have been as interesting.
Atheism isn’t really Dawkins’s main focus either. Dawkins isn’t primarily against God; he’s against faith. He believes people ought to believe things for rational reasons, not because they feel they ought to, or because they’ll be saved eternally if they do. Where Dawkins is most readable is when he’s talking about ‘survival machines’, or the way each individual gene in an organism is out to optimise its own survival in its own way, whether that’s good for the organism or species or not. Rationality and reason are his first love; atheism is more of a consequence of this than a belief in its own right. I very much share his view that people ought to believe things for rational reasons. Dawkins knows what systemic understanding is. I find the creationists’ assertion that there is no such thing—both in their arguments for rejecting science and in their own woeful lack of a systemic alternative view—very disturbing. I’d rather read Dawkins any day.
Why is this a good optimization? Do you have any particular evidence that an arguer who is willing to learn and use your language is more likely to have accurate beliefs?
It’s the other way about—I can’t think of an example where someone who didn’t know the language of any field of learning has successfully convinced that field of anything (other than that they are a fool).
I’m not saying that person is particularly ignorant—they may be quite smart in some ways—but they’re not doing what’s necessary to convince. My optimisation is to ignore them until they put in the effort—it’s much easier for them to learn the language than to do the novel thinking, after all. If that makes them frustrated, so be it.
The point is not that it keeps them frustrated, the point is that it keeps you ignorant.
Quite the reverse—it guides me to pay attention to those people who do take the trouble. It’s not as if I’m in any danger of running out of information these days.
The question still remains why you think your heuristic is particularly good.
Note that when you consider a claim, you shouldn’t set out to prove it false, or to prove it true. You should set out to find a correct conclusion about the claim, the truth about it. Not being skeptical is a particular failure mode that makes experts you suspect of having this flaw an inappropriate source of knowledge about the claim. “Skepticism” is a similarly flawed mode of investigation.
So, the question shouldn’t be, “Who is qualified to refute the Friendly AI idea?”, but “Who is qualified to reveal the truth about the Friendly AI idea?”.
It should be an established standard to link to the previous posts on the same topic. This is necessary to actually build upon existing work, and not just create blogging buzz. In this case, the obvious reference is The Correct Contrarian Cluster, and also probably That Magical Click and Reason as memetic immune disorder.
A related post is my Survey of anti-cryonics writing.
The post also mentioned Tolerate Tolerance.
Thank you!
By the way, I have spent quite a long time trying to “debunk” the set of ideas around Friendly AI and the Singularity, and my conclusion is that there’s simply no reasonable mainstream disagreement with that somewhat radical hypothesis. Why is FAI/Singularity not mainstream? Because the mainstream of science doesn’t have to publicly endorse every idea it cannot refute. There is no “court of crackpot appeal” where a correct contrarian can go to once and for all show that their problem/idea is legit. Academia can basically say “fuck off, we don’t like you or your idea, you won’t get a job at a university unless you work on something we like”.
Now such ability to arbitrarily tell people to get lost is useful because there are so many crackpots around, and they are really annoying. But it is a very simple and crude filter, akin to cutting your internet connection to prevent spam email. Just losing Eliezer and Nick Bostrom’s insight about friendly AI may cost academia more than all the crackpots put together could ever have cost.
Robin Hanson’s way around this was to expend a significant fraction of his life getting tenure, and now they can’t sack him, but that doesn’t mean that mainstream consensus will update to his correct contrarian position on the singularity; they can just press the “ignore” button.
That’s precisely the point I’m trying to make. We do lose a lot by ignoring correct contrarians. I think academia may be losing a lot of knowledge by filtering crudely. If indeed there is no mainstream academic position, pro or con, on Friendly AI, I think academia is missing something potentially important.
On the other hand, institutions need some kind of a filter to avoid being swamped by crackpots. A rational university or journal or other institution, trying to avoid bias, should probably assign more points to “promiscuous investigators,” people with respected mainstream work who currently spend time analyzing contrarian claims, whether to confirm or debunk. (I think Robin Hanson is a “promiscuous investigator.”)
I hereby nominate this for understatement of the millennium.
If true, it will eventually be accepted by academia. Ironically enough, there will be no academia in the present sense anymore.
Does a uFAI killing all of our scientists count as them “accepting” the idea? Rhetorical question.
My social intuitions tell me it is generally a bad idea to say words like ‘kill’ (as opposed to, say, ‘overwrite’, ‘fatally reorganize’, or ‘dismantle for spare part(icle)s’) in describing scenarios like that, as they resemble some people’s misguided intuitions about anthropomorphic skynet dystopias. On Less Wrong it matters less, but if one was trying to convince an e.g. non-singularitarian transhumanist that singularitarian ideas were important, then subtle language cues like that could have big effects on your apparent theoretical leaning and the outcome of the conversation. (This is more of a general heuristic than a critique of your comment, Roko.)
Good point, but one of the possibilities is the UFAI takes long enough to become completely secure in its power that it actually does try to eliminate people as a threat or a slowing factor. Since in this scenario, unlike in the “take apart for raw materials” scenario, people dying is the UFAI’s intended outcome and not just a side effect, “kill” seems an accurate word.
Yes, it is true. I would avoid ‘overwrite’ or ‘fatally reorganize’ because people might not get the idea. Better to go with “rip you apart and re-use your constituent atoms for something else”.
I like to use the word “eat”; it’s short, evocative, and basically accurate. We are edible.
I want a uFAI lolcat that says “I can has ur constituent atomz?” and maybe a “nom nom nom” next to an Earth-sized paper clip.
I’d never thought about that, but it sounds very likely, and deserves to be pointed out in more than just this comment.
I don’t expect the post-Singularity world to be pretty much an extended today, with scientists in postlabs and postuniversities and waitresses in postpubs.
A childish assumption.
Come on, where else could I possibly get my postbeer?
http://michaelnielsen.org/blog/three-myths-about-scientific-peer-review/
is a post that I find relevant.
Peer review is about low-hanging branches: the stuff already supported by enough evidence that writing about it can be done easily by sourcing extensive support from prior work.
As for the damage of ignoring correct contrarians, there was a Nobel Prize in economics awarded for a paper on markets with asymmetric information that a reviewer had rejected with a comment like “If this is correct then all of economics is wrong”.
There is also the story of someone who failed to get a PhD for their work despite presenting it on multiple separate occasions, at the last of which Einstein was in the room and said it was correct (and it was).
You might be thinking of de Broglie. Einstein was called in to review his PhD thesis. Though he did end up getting his PhD (and the Nobel).
Another near-miss case also preceding peer review was Arrhenius’s PhD thesis.
I should clarify: my position on the factual questions surrounding the Singularity/FAI is mostly the same as the consensus of the original SIAI guys: Eliezer, Mike Vassar, Carl Shulman. Perhaps I have a slightly larger probability assigned to the “Something outside of our model will happen” category, and I place a slightly longer time lag on any of this stuff happening. And this is after disagreeing significantly with them and admitting that they were right.
Does “Friendly AI and the Singularity” qualify as being “a hypothesis” in the first place?
“Friendly AI” seems more like an action plan—and “the Singularity” seems to be a muddled mixture of ideas—some of which are more accurate than others.
I think it’s worth emphasizing that ideas aren’t “worth investigating” or “not worth investigating” in themselves; different people will have different opportunities to investigate things at different costs, and will have different info and care about the answers to different degrees.
True. We have people like Mythbusters and Michael Shermer to debunk certain pseudoscientific claims, for instance. The effort to do that research is worth it, for them. For most of us, it’s only worth the effort to watch Mythbusters and read Michael Shermer.
My father is a scientist who works in an area with many crackpots (and many misguided but intelligent non-crackpots.) One of his professional duties is to investigate and usually debunk extraordinary claims in his area. It’s worth the effort for him—sometimes there’s nobody else to do the job. But most scientists free ride on his efforts.
We depend on the efforts of these people—those who are willing to investigate extraordinary or minority claims. We assume they’re out there. We assume there’s some investigator who has independent credibility.
The big problem is—what if there isn’t?
If a claim is simply ignored by everyone with independent credibility, and if it’s too much trouble for most of us to investigate ourselves, then even rational actors can make very serious mistakes.
The policy prescription is to think up ways to ensure that someone, somewhere, is bothering to investigate the kinds of claims that would be important if they were true.
I don’t disagree, but I see it as more of a continuum. All else equal, the more people investigating a claim, the better. And more importantly, one careful investigator is worth more than ten superficial investigators (e.g., Shermer on cryonics).
This is the bunk-detection strategy on TakeOnIt:
1. Collect top experts on either side of an issue, and examine their opinions.
2. If step 1 does not make the answer clear, break the issue down into several sub-issues, and repeat step 1 for each sub-issue.
Examples that you alluded to in your post (I threw in cryonics because that’s a contrarian issue often brought up on LW):
Global Warming
Cryonics
Climate Engineering
9-11 Conspiracy Theory
Singularity
In addition, TakeOnIt will actually predict what you should believe using collaborative filtering. The way it works is that you enter your opinions on several issues that you strongly believe you’ve got right. It will then detect the cluster of experts you typically agree with, and extrapolate what your opinion should be for other issues, based on the assumption (explained here) that you should continue to agree with the experts you’ve previously agreed with.
You can see the predictions it’s made for my opinions here. One of the predictions is that I should believe homeopathy is bunk.
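To make the mechanism concrete, here is a toy sketch of agreement-weighted extrapolation in the spirit of what’s described above. The experts, issues, and stances are all invented for illustration, and TakeOnIt’s actual algorithm may well differ.

```python
# A toy sketch of agreement-weighted prediction. The issues, experts, and
# stances are made up for illustration; TakeOnIt's actual algorithm and data
# are not reproduced here.

# Opinions are encoded as +1 (agree) or -1 (disagree).
expert_opinions = {
    "Expert A": {"homeopathy works": -1, "cryonics is feasible": +1, "9/11 was an inside job": -1},
    "Expert B": {"homeopathy works": +1, "cryonics is feasible": -1, "9/11 was an inside job": -1},
    "Expert C": {"homeopathy works": -1, "cryonics is feasible": +1},
}
my_opinions = {"cryonics is feasible": +1, "9/11 was an inside job": -1}

def agreement(expert, user):
    """Fraction of shared questions on which the expert and the user agree."""
    shared = [q for q in user if q in expert]
    if not shared:
        return 0.0
    return sum(expert[q] == user[q] for q in shared) / len(shared)

def predict(question, experts, user):
    """Agreement-weighted average of expert stances on a question the user hasn't answered."""
    votes = [(agreement(ops, user), ops[question])
             for ops in experts.values() if question in ops]
    total_weight = sum(w for w, _ in votes)
    if total_weight == 0:
        return 0.0  # nobody informative has weighed in
    return sum(w * stance for w, stance in votes) / total_weight

print(predict("homeopathy works", expert_opinions, my_opinions))
# Prints a negative number: the experts this user tends to agree with
# mostly think homeopathy is bunk.
```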
I’m unimpressed by this method. First, the procedure as given does more to reinforce pre-existing beliefs and point one to people who will reinforce those beliefs than anything else. Second, the sourcing used as experts is bad or outright misleading. For example, consider global warming. Wikipedia is listed as an expert source. But Wikipedia has no expertise and is itself an attempt at a neutral summary of experts. Even worse, Conservapedia is used both on the global warming and 9-11 pages. Considering that Conservapedia is Young Earth Creationist and thinks that the idea that Leif Erickson came to the New World is a liberal conspiracy, I don’t think any rational individual will consider them to be a reliable source (and the vast majority of American right-wingers I’ve ever talked to about this cringe when Conservapedia gets mentioned, so this isn’t even my own politics coming into play). On cryonics we have Benjamin Franklin listed as pro. Now, that’s roughly accurate. But it is also clear that he was centuries too early to have anything resembling relevant expertise. Looking at many of the fringe subjects, a large number of the so-called experts who are living today have no intrinsic justification for their expertise (actors are not experts on scientific issues, for example). TakeOnIt seems devoted if anything to blurring the nature of expert knowledge to the point where it becomes almost meaningless. The Bayesian Conspiracy would not approve.
TakeOnIt records the opinions of BOTH experts and influencers—not just experts. Perhaps I confused you by not being clear about this in my original comment. In any case, TakeOnIt groups opinions by the expertise of those who hold the opinions. This accentuates—not blurs—the distinction between those who have relevant expertise and those who don’t (but who are nonetheless influential). It also puts those who have expertise relevant to the question topic at the top of the page. You seem to be saying readers will easily mistake an expert for an influencer. I’m open to suggestions if you think it could be made clearer than it is.
I don’t think they are doing as good a job as you think separating experts from non-experts. For example, they describe Conservapedia as an “encyclopedia” with no other modifier. Similarly they describe Deepak Chopra as an “expert on alternative medicine.” If they want to make a clear distinction I’d suggest having different color schemes (at minimum). Overall, to even include some of these people together is simply to give weight to views which should have effectively close to zero weight.
If Deepak Chopra is blatantly flagged as a “fake expert”, it will alienate people who are initially impressed with his arguments, and they will not participate, and they will not see all the opposing opinions. Color schemes indicating how much the site administrators believe someone to be a real expert would be mind-killing.
Upvoting for making a very valid point. I’m not completely sure though that’s necessarily the perfect solution. Wikipedia, for example, specifically has a set of very careful rules to handle minority viewpoints and what constitutes a reliable source or relevant expert. But it may be that that sort of thing works better in an encyclopedia format (also, even Wikipedia will quote Deepak on alt-med things, even if we spend a lot of time making clear what the science says).
No no no! It’s vital that the opinions of influential people—even if they’re completely wrong—are included on TakeOnIt. John Stuart Mill makes my point perfectly.
P.S. I updated the tag line for Conservapedia from “Encyclopedia” to “Christian Encyclopedia”. Thanks for pointing that out.
I’ve been playing with the site and from my perspective there are two problems. One is that there’s a lot of chaff. The other is that there doesn’t seem to be enough activity yet.
If there were a lot of activity, I wouldn’t necessarily mind that there are “experts” I don’t respect; it would still be extremely useful as a microcosm of the world’s beliefs. I do want to know which people the public considers to be “experts.” That’s a useful service in itself.
Censorship? Not in a political sense, of course. But there are privately owned institutions which have an interest in permitting a diversity of views. Universities, for instance. This is a site whose usefulness depends on it having no governing ideology. Blocking “unreliable” sources isn’t really censorship, but it makes the site less good at what it purports to do.
Thanks for the feedback.
Do you mean chaff as in “stuff that I personally don’t care about” or chaff as in “stuff that anyone would agree is bad”?
Yes, the site is still in the bootstrapping phase. Having said that, the site needs to have a better way of displaying recent activity.
Stuff that I think is bad, and that I would say “reasonable” people agree is bad—celebrities as experts, Deepak Chopra, mentalists, and so on. But I don’t necessarily think that’s a problem for the site. If people really get their information from those sources, then I want to know that.
I’m almost inclined to say that calling Conservapedia a Christian Encyclopedia is more of an insult to Christianity than it deserves (theism is very likely incorrect, but Conservapedia’s attitude towards the universe is much more separated from reality than that of most Christians). Also, I don’t think that what John Stuart Mill is talking about is the same thing. First, note that I’m not saying one should censor Chopra, merely that he’s not worth including for this sort of thing. That’s not “silencing” by any reasonable definition. And there are other experts there whom I disagree with but whom I wouldn’t put in that category. Thus, for example, both the cryonics and Singularity questions include people whom I disagree with and whom I don’t think are at all helpful. Or again, consider Benjamin Franklin, whose opinion on cryonics I’m sympathetic with but who just didn’t have any knowledge that would justify giving his opinion weight.
It should be noted that TakeOnIt is set up to allow the general public to suggest expert quotes, and with a short track record as a non-spammer, people get promoted to moderator status and can directly add a quote. So some members of TakeOnIt are impressed with Chopra, and it would be counterproductive censorship to say that they are not allowed to add his quotes. What we get in exchange for allowing this is that the general public helps build the database of expert opinions, and may even include real experts that we would not have known to look at.
Franklin’s quote is more about cryonics being good if it were feasible than if it is feasible. Ben, do you think it should be moved to this question?
Good call.
I see the argument for it being counterproductive, which I’m tentatively convinced by. But it isn’t censorship by most definitions of the term. Saying “you can’t say X” is censorship; saying “you can’t say X on my website” is not. (Again, I am convinced by the counterproductivity argument, so we seem at this point to be in more or less agreement if one is going to try to run TakeOnIt in a manner close to the intended general purpose.)
Moving Franklin might make sense. Unfortunately, many of the people discussing cryonics are also talking about its general desirability. The questions seem to be frequently discussed together. Incidentally note that there’s a high correlation between having a moral or philosophical objection to cryonics and being likely to think it won’t work. This potentially suggests that there’s some belief overkill going on on one or both sides of this argument.
There is value in recording the opinions of anyone perceived as an expert by a segment of the general population, as it builds a track record for each supposed expert, so that the statistical analysis can reveal that the opinions of some so-called experts are just noise, and give a result influenced mainly by the real experts.
See The Correct Contrarian Cluster.
That might work if we had major track records for people. Unfortunately for a lot of issues that could potentially matter (say the Singularity and Cryonics) we won’t have a good idea who was correct for some time. It seems like a better idea to become an expert on a few issues and then see how much a given expert agrees with you in the area of your expertise. If they agree with you, you should be more likely to give credence to them in their claimed areas of expertise.
Well, I would like to see more short term predictions on TakeOnIt, where after the event in question, comments are closed, and what really happened is recorded. From this data, we would extrapolate who to believe about the long term predictions.
That might work in some limited fields (economics and technological development being obvious ones). Unfortunately, many experts don’t make short term predictions. In order for this to work one would need to get experts to agree to try to make those predictions. And they have a direct incentive not to do so, since it can be used against them later (well, up to a point: psychics like Sylvia Browne make repeated wrong predictions and their followers don’t seem to mind). I give Ray Kurzweil a lot of credit for having the courage to make many relatively short term predictions (many of which so far have turned out to be wrong, but that’s a separate issue).
Yes, in some cases, there is no (after the fact) non-controversial set of issues to use to determine how effective an expert is. Which means that I can’t convince the general public of how much they should trust the expert, but I can still figure out how much I should trust em by looking at their positions that I can evaluate.
There is also the possibility of saying something about such an expert based on correlations with experts whose predictions can be non-controversially evaluated.
Liked the post. One of the two big questions* it’s poking at is ‘how does one judge a hypothesis without researching it?’ To do that, one has to come up with heuristics for judging some hypothesis H that correlate well enough with correctness to work as a substitute for actual research. The post already suggests a few:
Is evidence presented for H?
Do those supporting H share data for repeatability?
Is H internally inconsistent?
Does H depend on logical fallacies?
(Debatable) Is H mainstream?
I’ll add a few more:
If H is a physical or mathematical hypothesis, try and find a quantitative statement of it. If there isn’t one, watch out: crackpots are sometimes too busy trying to overthrow a consensus to make sure the math actually works.
Suppose some event is already expected to occur as an implication of a well-established theory. If H is meant to be a novel explanation for that event, H not only has to explain the event, it also has to explain why the well-established theory doesn’t actually entail the event.
Application to global warming. To establish that something other than anthropogenic CO2 is the main driver of current global warming, it is not enough to simply suggest an alternative cause; it’s also necessary to explain why the expected warming entailed by quantum theory and anthropogenic CO2 emissions would have failed to materialize.
Can H’s fans/haters discuss H without injecting their politics? It doesn’t really matter if they sometimes mention their politics around H, but if they can’t resist the temptation to growl about ‘fascists’ or ‘political correctness’ or ‘Marxists’ or whatever every time they discuss H, watch out. (Unless H is a hypothesis about fascism, political correctness or Marxism or whatever, obviously.)
If arguments about H consistently turn into arguments about who should bear the burden of proof, there’s probably too little evidence to prove H either way.
Hypotheses that implicitly assume current trends will continue or accelerate arbitrarily far into the future should be handled with care. (An exercise I like doing occasionally is taking some time series data that someone’s fitted an exponential for and fitting an S-curve instead; a toy version of this exercise is sketched in code at the end of this comment.)
If H is based on a small selection from many available data points, is there a rationale for that selection?
Application to a Ray Kurzweil slide. Low-hanging fruit, I admit. Anyway, look at this graph of how long it takes for inventions to enter mass use. Kurzweil plots points for only 6 inventions: the telephone, radio, TV, the PC, the cellphone and the Web. I would be interested to see how neat the graph would be if it included the photocopier, the MP3 player, the tape player, the CD player, the internet, the newspaper, the record player, the USB flash drive, the DVD player, the car, the laser, the LED, the VHS player, the camcorder and so on. The endnotes for Kurzweil’s book ‘The Singularity Is Near’ refer to a version of this chart and estimate ‘the current rate of reducing adoption time,’ but don’t seem to say why Kurzweil picked the technologies he did.
Looking at the credentials of people discussing H is a quick and dirty rule of thumb, but it’s better than nothing.
Does whoever’s talking about H get the right answer on questions with clearer answers? Someone who thinks vaccines, fluoride in the drinking water and FEMA are all part of the NWO conspiracy is probably a poor judge of whether 9/11 was an inside job.
How sloppily is the case for (or against) H made? (E.g. do a lot of the citations fail to match references? Are there citations or links to evidence in the first place? Is the author calling a trend on a log-linear graph ‘exponential growth’ when it’s clearly not a straight line? Do they misspell words like ‘exponential?’)
Are possible shortcomings in H and/or the evidence for H acknowledged? If someone thinks the case for/against H is open and shut, but I’m really not sure, something isn’t right.
And Daniel Davies helpfully points out that lying (whether in the form of consistent lies about H itself, or H’s supporters/skeptics simply being known liars) can be an informative warning sign.
* The second question being ‘do we have enough people researching obscure hypotheses and if not, how do we fix that?’ I don’t know how to start answering that one yet.
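As referenced in the trend-extrapolation item above, here is a minimal sketch of the exponential-versus-S-curve exercise. The data are synthetic (generated from a logistic curve), so the only point it makes is how similar the two fits can look over the observed range and how wildly their extrapolations diverge.

```python
# Sketch of the curve-fitting exercise: fit both an exponential and a
# logistic (S-curve) to the same "data so far" and compare extrapolations.
# The data are synthetic, taken from the early portion of a logistic curve.
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, a, r):
    return a * np.exp(r * t)

def logistic(t, k, r, t0):
    return k / (1.0 + np.exp(-r * (t - t0)))

# "Observed" data: the early part of an S-curve, where it is nearly
# indistinguishable from an exponential.
t_obs = np.linspace(0.0, 12.0, 25)
y_obs = logistic(t_obs, k=100.0, r=0.5, t0=15.0)

exp_p, _ = curve_fit(exponential, t_obs, y_obs, p0=[1.0, 0.3], maxfev=10000)
log_p, _ = curve_fit(logistic, t_obs, y_obs, p0=[50.0, 0.3, 10.0], maxfev=10000)

# Extrapolate both fitted curves well past the observed range.
t_future = 30.0
print("exponential forecast at t=30:", exponential(t_future, *exp_p))
print("logistic forecast at t=30:   ", logistic(t_future, *log_p))
# Both fits track the observed range closely, but the forecasts diverge
# wildly once the S-curve bends, which is the point of the exercise.
```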
This isn’t the actual epistemic situation. The usual measure of the magnitude of CO2-induced warming is “climate sensitivity”—increase in temperature per doubling of CO2 - and its consensus value is 3 degrees. But the physically calculable warming induced directly by CO2 is, in terms of this measure, only 1 degree. Another degree comes from the “water vapor feedback”, and the final degree from all the other feedbacks. But the feedback due to clouds, in particular, still has a lot of uncertainty; enough that, at the lower extreme, it would be a negative feedback that could cancel all the other positive feedbacks and leave the net sensitivity at 1 degree.
The best evidence that the net sensitivity is 3 degrees is the ice age record. The relationship between planetary temperature and CO2 levels there is consistent with that value (and that’s after you take into account the natural outgassing of CO2 from a warming ocean). People have tried to extract this value from the modern temperature record too, but it’s rendered difficult by uncertainties regarding the magnitude of cooling due to aerosols and the rate at which the ocean warms (this factor dominates how rapidly atmospheric temperature approaches the adjusted equilibrium implied by a changed CO2 level).
The important point to understand is that the full 3-degree sensitivity cannot presently be derived from physical first principles. It is implied by the ice-age paleo record, and is consistent with the contemporary record, with older and sparser paleo data, and with the independently derived range of possible values for the feedbacks. But the uncertainty regarding cloud feedback is still too great to say that we can retrodict this value, just from a knowledge of atmospheric physics.
Agreed. Nonetheless, as best I can calculate, Really Existing Global Warming (the warming that has occurred from the 19th century up to now, rather than that predicted in the medium-term future) is of similar order to what one would get from the raw, feedback-less effect of modern human CO2 emissions.
The additional radiative forcing due to increasing the atmospheric CO2 concentration from C0 to C1 is about 5.4 * log(C1/C0) W/m^2. The preindustrial baseline atmospheric CO2 concentration was about 280 ppm, and now it’s more like 388 ppm—plugging in C0 = 280 and C1 = 388 gives a radiative forcing gain of around 1.8 W/m^2 due to more CO2.
Without feedback, climate sensitivity is λ = 0.3 K/(W/m^2) - this is the expected temperature increase for an additional W/m^2 of radiative forcing. Multiplying the 1.8W/m^2 by λ makes an expected temperature increase of 0.54K.
Eyeballing the HADCRUT3 global temperature time series, I estimate a rise in the temperature anomaly from about −0.4K to +0.4K, a gain of 0.8K since 1850. The temperature boost of 0.54K from current CO2 levels takes us most of the way towards that 0.8K increase. The remaining gap would narrow if we included methane and other greenhouse gases also. Admittedly, we won’t have the entire 0.54K temperature boost just yet, because of course it takes time for temperatures to approach equilibrium, but I wouldn’t expect that to take very long because the feedbackless boost is relatively small.
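For anyone who wants to redo the arithmetic, here is a back-of-the-envelope sketch using just the numbers quoted above (the 5.4 coefficient, 280 and 388 ppm, and λ = 0.3); nothing beyond that is assumed.

```python
# Back-of-the-envelope version of the no-feedback calculation above,
# using the same numbers quoted in the comment.
import math

C0, C1 = 280.0, 388.0                 # preindustrial and current CO2, ppm
forcing = 5.4 * math.log(C1 / C0)     # additional radiative forcing, W/m^2 (natural log)
lam = 0.3                             # no-feedback sensitivity, K per (W/m^2)
delta_T = lam * forcing               # expected equilibrium warming, K

print(f"forcing ~ {forcing:.2f} W/m^2")   # ~1.76 W/m^2
print(f"warming ~ {delta_T:.2f} K")       # ~0.53 K, most of the observed ~0.8 K rise
```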
This might actually be a nice exercise in choosing between hypotheses. Suppose you had no paleo data or detailed atmospheric physics knowledge, but you just had to choose between 1 degree and 3 degrees as the value of climate sensitivity, i.e. between the hypothesis that all the feedbacks cancel, and the hypothesis that they triple the warming, solely on the basis of (i) that observed 0.8K increase (ii) the elementary model of thermal inertia here. You would have to bear in mind that most anthropogenic emissions occurred in recent decades, so we should still be in the “transient response” phase for the additional perturbation they impose…
Now you’ve handed me a quantitative model I’m going to indulge my curiosity :-)
I think we can account for this by tweaking equation 4.14 on your linked page. Whoever wrote that page solves it for a constant additional forcing, but there’s nothing stopping us rewriting it for a variable forcing:
dT(t)/dt = (Q(t) − T(t)/λ) / C_s
where T(t) is now the change in temperature from the starting temperature, Q(t) the additional forcing, and I’ve written the equation in terms of my λ (climate sensitivity) and not theirs (feedback parameter).
Solving for T(t),
T(t) = e^(−t/(λ C_s)) · ( constant + ∫ e^(t/(λ C_s)) (Q(t)/C_s) dt )
If we disregard pre-1850 CO2 forcing and take the year 1850 as t = 0, we can drop the free constant. Next we need to invent a Q(t) to represent CO2 forcing, based on CO2 concentration records. I spliced together two Antarctic records to get estimates of annual CO2 concentration from 1850 to 2007. A quartic is a good approximation for the concentration.
The zero year is 1850. Dividing the quartic by 280 gives the ratio of CO2 at time t to preindustrial CO2. Take the log of that and multiply by 5.35 to get the forcing due to CO2, giving Q(t).
Plug that into the T(t) formula and we can plot T(t) as a function of years after 1850:
The upper green line is a replication of the calculation I did in my last post—it’s the temperature rise needed to reach equilibrium for the CO2 level at time t, which doesn’t account for the time lag needed to reach equilibrium. For t = 160 (the year 2010), the green line suggests a temperature increase of 0.54K as before. The lower red line is T(t): the temperature rise due to the Q(t) forcing, according to the thermal inertia model. At t = 160, the red line has increased by only 0.46K; in this no-feedback model, holding CO2 emissions constant at today’s level would leave 0.08K of warming in the pipeline.
So in this model the time lag causes T(t) to be only 0.46K, instead of the 0.54K expected at equilibrium. Still, that’s 85% of the full equilibrium warming, and the better part of the 0.8K increase; this seems to be evidence for my guess that we wouldn’t have to wait very long to get close to the new equilibrium temperature.
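Since neither the quartic CO2 fit nor the linked page’s parameters are reproduced in this thread, here is a rough, self-contained stand-in for the same calculation. The CO2 path is a crude interpolation of approximate historical concentrations and the heat capacity C_s is an assumed mixed-layer value, so the output should be read qualitatively (a transient response lagging somewhat behind the equilibrium curve) rather than as a replication of the 0.46 K figure.

```python
# A rough, self-contained stand-in for the thermal-inertia calculation above.
# The CO2 path and the heat capacity C_s below are illustrative assumptions,
# not the original quartic fit or the linked page's parameters.
import numpy as np

SECONDS_PER_YEAR = 3.156e7
lam = 0.3        # no-feedback climate sensitivity, K per (W/m^2)
C_s = 6.3e8      # assumed effective heat capacity, J/(m^2 K) (~150 m of ocean water)

# Approximate historical CO2 concentrations (ppm), linearly interpolated.
anchor_years = np.array([1850.0, 1900.0, 1950.0, 1980.0, 2010.0])
anchor_ppm   = np.array([ 285.0,  296.0,  311.0,  339.0,  389.0])

def forcing(year):
    """CO2 radiative forcing in W/m^2, relative to the 280 ppm baseline."""
    c = np.interp(year, anchor_years, anchor_ppm)
    return 5.35 * np.log(c / 280.0)

# Forward-Euler integration of dT/dt = (Q(t) - T/lam) / C_s, with T(1850) = 0.
dt_years = 0.1
T = 0.0
for year in np.arange(1850.0, 2010.0, dt_years):
    dTdt = (forcing(year) - T / lam) / C_s        # K per second
    T += dTdt * dt_years * SECONDS_PER_YEAR

print(f"transient warming by 2010:   {T:.2f} K")
print(f"equilibrium warming by 2010: {lam * forcing(2010.0):.2f} K")
```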
If I knew that little, I guess I’d put roughly equal priors on each hypothesis, so the likelihoods would be the main driver of my decision. But to run this toy model, should I pretend the only variable forcing I know of is anthropogenic CO2? I’m going to here, because we’re assuming I don’t have ‘detailed atmospheric physics knowledge,’ and also because I haven’t run the numbers for other variable forcings.
To decide which sensitivity is more likely, I’ll calculate which value of λ produces a 0.8K increase from CO2 emissions by 2010 with this model and the above Q(t); then I’ll see if that λ is closer to the ‘3 degrees’ sensitivity (λ between 0.8 and 0.9) or the ‘1 degree’ sensitivity (λ = 0.3). For an 0.8K increase, λ = 0.646, so I’d choose the higher sensitivity, which has a λ closer to 0.646.
“How much should we be troubled, though, by the fact that most scientists of their disciplines shun them?”
This is not what’s actually going on. To quote Eliezer:
“With regard to academia ‘showing little interest’ in my work—you have a rather idealized view of academia if you think that they descend on every new idea in existence to approve or disapprove it. It takes a tremendous amount of work to get academia to notice something at all—you have to publish article after article, write commentaries on other people’s work from within your reference frame so they notice you, go to conferences and promote your idea, et cetera. Saying that academia has ‘shown little interest’ implies that I put in that work, and they weren’t interested. This is not so. I haven’t yet taken my case to academia. And they have not said anything about it, or even noticed I exist, one way or the other. A few academics such as Nick Bostrom and Ben Goertzel have quoted me in their papers and invited me to contribute book chapters—that’s about it.”
(http://en.wikipedia.org/wiki/Talk:Eliezer_Yudkowsky)
There isn’t any universal distinguishing rule, but in general you want to ask: would a world where this were false look just like our own world? A couple of useful specific guidelines:
Is this something people would be disposed to believe even if it were false?
Is this something that would be impossible to disprove even if it were false?
Flying saucers, psychic powers, and the Singularity are good examples here: suppose we lived in a world where they were not real; what would it look like? Answer: people would still believe in them, because we are disposed to do so (I can personally vouch for that, having spent a little bit of time as a teenager looking into flying saucers, quite a bit more time looking into psychic powers, and been a Singularitarian until a few years ago), and there would be no way to disprove them, because each comes with a story about why it is unobservable. So such a world would look just like our own.
For a borderline case, I’ll suggest cold fusion. Clearly it’s something we would like to believe, but it was nicely testable (the required conditions could be created with present-day technology, and low temperature fusion reactions obviously aren’t going to be motivated to hide from us or fail to work in the presence of skeptics), so it was worth investigating—and duly investigated and refuted. (Belief in cold fusion now would of course be bunk.)
I’m not sure that your comparison of the Singularity to these others works. Consider for example practical fusion reactors or space elevators. Both fit well with your rules that people would like to believe they are possible and that the world would look very similar to what it looks like today even if they aren’t. There seems to be a major distinction between ideas like the Singularity or space elevators and claims like alien saucers or psychic powers: the first category has plausible mechanisms that aren’t intrinsically disruptive to major metapatterns about how the world functions. In contrast, psychic powers go against much of our understanding of how the world functions (they do bad things to evolution and to basic laws of physics, and amount to a claim of irreducible mental constructs, to name just three of the serious problems). As a non-Singularitarian, I have to say that I find this sort of comparison deeply unpersuasive.
Oh, the two guidelines I suggested certainly aren’t a complete algorithm—that’s why I called them guidelines not rules :-) Maybe I should list a third (or first) guideline:
Is this claim extraordinary; does it contradict what we think we know about how the world works?
The Singularity definitely falls into this category; the idea that you can handwave that sort of capability into existence is contrary to everything we know about science and engineering: nothing useful happens for free, and every optimization needs real-world feedback. And when you look at the details of the Singularitarian arguments, there are an awful lot of gaps of the “and then a miracle occurs” variety.
Fusion reactors are fundamentally plausible because they match both our knowledge of nuclear physics and our experiences building better engines. Interestingly, I’ve seen it credibly suggested that fusion reactors of the kind we are currently trying to build won’t work out after all, because we are trying to make them too small, so the heat radiates away too quickly, so it will cost more to run the reactor than the value of the energy generated, and we need to either change our plans or make the reactors a lot bigger. But even if true, that’s not something that could possibly have been predicted without in-depth study of the subject matter.
We may need to break down which form of the Singularity we are discussing. See Eliezer’s list. I agree that a pure optimization process with no connection to the real world seems unlikely. But if, for example, general AI came along at about the same time as access to marginally efficient nanotech, that allows a plausible method of optimization. Or, to use another example, suppose we construct a reasonably smart general AI and it turns out to require very little processing power compared to what we have available at the time. Either of these allows for very efficient optimization processes. Nothing in the Singularity notion goes against the fundamental picture of the world we’ve developed in the way that, say, psychic powers would.
If I had to make a continuum I’d put them in order of plausibility something like:
[psychic powers, alien UFOs, Kurzweil-type Singularity, Yudkowskian Singularity, practical fusion power, space elevators] and there’s a major gap between alien UFOs and K-type Singularity. I’m not sure what would plausibly go in between them to narrow the gap. Maybe something like a Penrose version of consciousness?
Right, in truth none of the three versions really hangs together when you look at the arguments, though they are listed in decreasing order of plausibility.
“Our intuitions about change are linear” -- no they aren’t, we attach equal significance to equal percentage changes, so our intuition expects steady exponential change.
“Therefore we can predict with fair precision when new technologies will arrive, and when they will cross key thresholds, like the creation of Artificial Intelligence.”—artificial intelligence, along with flying cars, moon bases and a cure for cancer, refutes this idea by its continued nonexistence.
“To know what a superhuman intelligence would do, you would have to be at least that smart yourself.”—my brother’s cat can predict that when it meows, he will put out food for it. He cannot predict whether the cat will eat the food.
“Thus the future after the creation of smarter-than-human intelligence is absolutely unpredictable.”—the future has always been unpredictable, so by that definition we have always been in the Singularity.
“each intelligence improvement triggering an average of >1.000 further improvements of similar magnitude”—knowing whether a change is actually an improvement takes more than just thinking about it.
“Technological progress drops into the characteristic timescale of transistors (or super-transistors) rather than human neurons.”—technological progress is much slower than the characteristic timescale of neurons.
That doesn’t mean the Singularity can’t exist by some other definition,
“For example, the old Extropian FAQ used to define the “Singularity” as the Inflection Point, “the time when technological development will be at its fastest” and just before it starts slowing down.”
but as Eliezer also points out, this definition does not imply any particular conclusions.
The Penrose version of consciousness is an interesting case. It is clearly something Penrose would be disposed to believe even if it were false (he pretty much says so in The Emperor’s New Mind) and we have no way to disprove it. Is it an extraordinary claim? I would be inclined to say so, but there might be room for reasonable disagreement on that. So while I think it is false, I’m not sure I would be confident dismissing it as bunk.
Thinking about it a bit more, I wonder if my greater confidence in dismissing the Singularity as bunk, compared with Penrose’s theory of consciousness, is influenced by the fact that the former is in my area of expertise and the latter is not. Obviously the more we know about something, the easier it is to be confident, but the original topic was possible methods of making summary judgments without detailed knowledge (given the impossibility of knowing all the details of everything).
Are there any physicists or neuroscientists in the audience who would be more confident in dismissing Penrose’s theory of consciousness?
I spent a year as a guest of Penrose’s biologist collaborator, Stuart Hameroff, at the University of Arizona, and my one peer-reviewed publication dates from that time, so I can tell you more than you want to know about this subject. :-)
First you should understand the order of events. Penrose published his book arguing that there should be a trans-Turing quantum-gravity process happening in the brain. Then Hameroff wrote to him and said, I bet it’s happening in the microtubules. Thus was born the version of the idea that most people hear about.
Penrose’s original argument combines an old interpretation of Gödel’s theorem with his own speculations about quantum gravity. The first part goes like this: For any mechanized form of mathematical reasoning, there are, necessarily, mathematical truths which it cannot prove. But we can know these propositions to be true. Therefore, human cognition must have capabilities which are not Turing-computable.
In the second part, Penrose observes that the whole of nongravitational physics is Turing-computable, but that gravitational physics is at least potentially not, because it may involve quantum sums over arbitrary 4-manifolds, and topological equivalence of 4-manifolds is not Turing-decidable. He also introduces one of his own physical ideas: Hawking evaporation of black holes appears to involve destruction of quantum information, so he proposes that conservation of probability flow is maintained by nondeterministic wavefunction collapse, which creates quantum information. He also has a technical argument against the possibility of superpositions of different geometries. So, if there are mesoscopic quantum superpositions in the brain whose components evolve towards mass distributions (and hence local space-time geometries) sufficiently different from each other that the superposition must break down, then, there is an opportunity for trans-Turing physical dynamics to play a role in human cognition.
The physical argument is very ingenious but probably wrong in two out of three places. But first, how about the prior argument using Gödel? There are two key considerations here.
Firstly, the true propositions which a formal system cannot itself prove can be proven, if you know the interpretation of the formalism, and if you know the axioms to be true and the methods of inference valid under that interpretation. In other words, knowing the semantics of the system is what allows you to construct the undecidable propositions and have an opinion about their truth. The logician Solomon Feferman has shown that if you have an extra logical primitive, “logical reflection”, which amounts to accessing this information about meanings, then there are no undecidable propositions. The combination of a valid formal system and indefinitely iterated logical reflection gets you everything.
Secondly, this makes it plain that there is a connection between the Penrose-Gödel argument, and John Searle’s problem regarding the semantics of computational states. If a thought is actually a brain state, what is it about that brain state that makes it a thought about one thing rather than another? Penrose doesn’t address this issue, yet Feferman’s analysis makes it clear that it’s metacognition or reflective cognition about meanings which produces Gödelian insights.
It is possible to attack Penrose’s ultimate conclusion by saying there’s no empirical evidence that humans can engage in logical reflection of arbitrary order. (The higher iterations of logical reflection correspond to transfinite ordinals, because they involve induction over infinite axiom sets.) If humans can only logically reflect up to order N, then a formal system of order N+1 should be capable of equaling the human ability to reason. But really, the conclusion I draw is that we will see no end to this particular dispute until we understand how neurocomputational semantics works. Until then, we simply can’t offer a neurocognitive account of advanced mathematical reasoning.
As for the physical arguments, I try to judge them from the perspective of string theory. The bit about sums over arbitrary 4-manifolds might be true; string theory is a work in progress, like most particle physics theories it’s known and used only in an approximate form, and this is a level of detail which presently is neither used nor understood. On the other hand, black hole evaporation is a unitary process in string theory, so the ingenious idea of wavefunction collapse balancing quantum information loss loses its motivation. As for the technical argument about geometric superpositions, that only applies if you think superpositions are objective physical states rather than generalized probability distributions. If you take the latter view, the argument loses its potency.
Now, microtubules. My grasp of molecular neuroscience is a whole lot less than my grasp of physics, but it’s definitely true that neuronal microtubules are not thought to play much of a role in cognition or consciousness. Microtubules are dynamic structural organelles. They are scaffolding for the transport of vesicles, they move the chromosomes around during cell division, they are involved in pseudopod extrusion and cell motility. They occur in all your cells, not just neurons. Because a neuron is just another cell, but one which has specifically been shaped to perform an information-processing function, it’s not surprising that microtubules are involved in the execution of that function. But everything known suggests it’s a peripheral involvement.
I ended up in Arizona because I had my own reasons for being interested in quantum brain theories. And I’ll say this much in favor of microtubules: if you are looking for a molecular structure in the brain which might contain long-lived quantum states, the microtubule is a great candidate. It gives you a two-dimensional space (a cylinder) protected from environmental interaction by the tails of the tubulins. A lot of cool quantum things can happen in two dimensions. The problem is, how would it be relevant to anything cognitive?
Penrose and Hameroff wrote some papers applying Penrose’s quantum-gravity collapse model to microtubules. I don’t believe those calculations apply to reality. I’ve also mentioned why, even if you could show that quantum coherence does exist in the microtubule, that doesn’t yet connect it to conscious cognition. But I will still put in a word for Penrose’s original conception of quantum-gravitational dynamics maybe playing a part in the physics of cognition.
If one does wish to suppose—as I do—that the neural correlate of consciousness is actually a quantum state of some brain subsystem, rather than a coarse-grained classical computational state; if one does suppose that the manifest attributes of conscious experience are to be identified with fundamental degrees of freedom in that quantum object; then it is logical to suppose that some of those degrees of freedom are what we would call, from a physical perspective, gravitational, and that they might even be dynamically relevant. The idea that Feferman’s operation of conscious logical reflection is computationally implemented by a gravitational subalgebra of the full set of physical transformations of state… that’s my version of Penrose’s idea. I certainly don’t regard it as a logical necessity; it’s just a stimulating hypothesis. I look forward to the day when we know enough that I can actually rule it in or out.
Excellent explanation, thanks! So if I’m understanding correctly, while there are severe problems with Penrose’s theory, it’s not in the category of things to be casually dismissed as bunk; experts have found it an interesting line of thought to investigate, at least.
You may be putting too much emphasis on what people would be predisposed to believe. While we should correct for our emotional predispositions when evaluating our own probability estimates, that in no way says anything substantive about whether a given claim is correct. Tendencies to distort my map in no way impact what the territory actually looks like.
Sure, at the end of the day there is no reliable way to tell truth from falsehood except by thorough scientific investigation.
But the topic at hand is whether, in the absence of the time or other resources to investigate everything, there are guidelines that will do better than random chance in telling us what’s promising enough to be worth how much investigation.
While the heuristic about predisposition to believe falls far short of certainty, I put it to you that it is significantly better than random chance—that in the absence of any other way to distinguish true claims from false ones, you would do quite a bit better by using that heuristic than by flipping a coin.
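To make that concrete, here is a minimal sketch in Python with entirely made-up base rates (pure assumptions, not data): if claims that people are disposed to believe regardless of truth turn out true less often than other unlikely-sounding claims, then dismissing that bucket outperforms the coin.

```python
import random

# Entirely hypothetical base rates, for illustration only: claims people are
# disposed to believe regardless of truth are true 5% of the time; other
# unlikely-sounding claims are true 30% of the time.
P_TRUE = {"disposed": 0.05, "other": 0.30}

def accuracy(strategy, trials=100_000, seed=0):
    """Fraction of claims judged correctly (accept/dismiss) by a strategy."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        kind = rng.choice(["disposed", "other"])
        is_true = rng.random() < P_TRUE[kind]
        correct += strategy(kind, rng) == is_true
    return correct / trials

def coin_flip(kind, rng):
    return rng.random() < 0.5      # accept half the claims at random

def dismiss_disposed(kind, rng):
    return kind != "disposed"      # dismiss the "we'd believe it anyway" bucket

print(accuracy(coin_flip))         # ~0.50
print(accuracy(dismiss_disposed))  # ~0.62 under these assumed base rates
```

The particular numbers don’t matter; the point is only that any heuristic even weakly correlated with truth beats guessing at random.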
(Original post.)
More generally, one can’t optimize a process for arriving at answers by also feeding it such answers in the particular cases where they already happen to be available. Adding this one rule collapses the whole process, as it begins to reuse arbitrary and trivial data instead of actually doing any work. In particular, this is the reason for the groupthink failure mode. (And Löb’s theorem!)
Thus, it’s more precise to say that the problem results from taking on faith that intolerance by others is justified, rather than from protesting against excessive tolerance shown by others. When you believe others are wrong in showing excessive tolerance, you make that judgment yourself, and you would be wise not to make it unless you know enough. On the other hand, if you observe that others in your group (or in the mainstream) don’t tolerate a certain class of pursuits, concluding just from that that this class of pursuits doesn’t deserve tolerance is a failure mode, since this social dynamic could red-flag anything, no matter its merit. All it takes is the ability to reliably induce that one inferential step: a person newly introduced to a question looks at the existing consensus and leaps to a conclusion just from that, without actually considering the question.
More:
Does he use math or formal logic when a claim demands it? Does he accuse others of suppressing his views?
The Crackpot Index is helpful, though it is physics-centric.
I’ve always liked the Crackpot Index, but I guess it should be balanced with a list of scientists who would probably be considered crackpots because they are a bit ‘weird’, say Newton or Tesla.
Of course there are many more crackpots than there are Newtons or Teslas, but I suppose it’s good not to dismiss things too quickly when they are radical and proposed by somewhat special individuals.
So a claim is bunk if and only if:
Those with the right kind of difficult-to-access information or who trust the relevant “expert” class will assign it an extremely low probability.
Those without that information who either don’t know or don’t trust the relevant expert class may assign it a more reasonable probability or even believe it.
The claim is false.
(?) The claim is non-trivial: if true, it would have wide-reaching implications.
So claims to have a perpetual motion machine are bunk because to understand how unlikely they are you either have to understand some physics or trust physicists. Many people do not have that information and do not trust physicists (or aren’t aware that physicists even have a position on this, or aren’t aware there are such people as physicists). And perpetual motion machines are impossible.
One issue I can see arising a lot is that not every claim will have an obvious class of experts. Once upon a time the expert class for the question of whether or not God exists was theologians. But perhaps the right expert class today is analytic philosophers, among whom theists are a shrinking minority (under 15%). Or maybe cognitive scientists or anthropologists (whose beliefs I don’t know).
I think we ought to distinguish somehow between crackpots (believers in bunk) and incorrect contrarians. The former are obviously part of the latter but are they the same? It seems to me that even if Eliezer Yudkowsky is really wrong about a lot that he believes (and this seems possible to me) he is nonetheless not a crackpot. But is there more to this than ‘crackpots are incorrect contrarians who I don’t like or have never agreed with’? Is there an objective distinction? Perhaps because he is ignored rather than rejected?
Jack:
You ignore the possibility of crackpots who are not contrarians, but instead well established or even dominant in the mainstream. You have a very rosy view of academia if you believe that this phenomenon is entirely nonexistent nowadays!
That said, I’d say the main defining criterion of crackpots—as opposed to ordinary mistaken folks—is that their emotions have got the better of them, rendering them incapable of further rational argument. A true crackpot views the prospect of changing his mind as treachery to his cause, similar to a soldier scorning the possibility of surrender after suffering years of pain, hardship, and danger in a war. Trouble is, protracted intellectual battles in which contrarians are exposed to hostility and ridicule often push them beyond the edge of crackpottery at some point. It’s a pity because smart contrarians, even when mistaken about their main point, can often reveal serious weaknesses in the mainstream view. But then this is often why they are met with such hostility in the first place, especially in fields with political/ideological implications.
Er. I think there are plenty of people in academia who have very wrong beliefs with poor justifications. But I took our working definition of crackpot and bunk to exclude such people. We’re asking about a particular kind of being wrong: being wrong and unpopular. The question is, is there something beyond that to being a crackpot? Must you also, say, engage in pseudoscience, be non-falsifiable, or use unsavory tactics, etc.? Obviously we don’t want to debate definitions, but I think the claim you picked out is true given the way we’ve been using the words in this thread.
Your point about emotions is a good one.
Fair enough, if we define “crackpot” as necessarily unpopular. However, what primarily comes to my mind when I hear this word is the warlike emotional state that renders one incapable of changing one’s mind, which I described in the above comment. If people like that manage to grab positions of power in academia and don the cloak of respectability, I still think they share more relevant similarity with various scorned crackpot contrarians than with people whose mainstream respectability is well earned.
I think a good test for a crackpot vs. an ordinary mistaken contrarian would be how the individual would behave if the power relations were suddenly reversed, and the mainstream and contrarian views changed places. A crackpot would not hesitate to use his power to extirpate the views he dislikes by all means available, whereas a non-crackpot contrarian would show at least some respect for his (now contrarian) opponents.
“It seems to me that even if Eliezer Yudkowsky is really wrong about a lot that he believes (and this seems possible to me) he is nonetheless not a crackpot. But is there more to this than ‘crackpots are incorrect contrarians who I don’t like or have never agreed with’? Is there an objective distinction? Perhaps because he is ignored rather than rejected?”
Also a question I don’t know the answer to. I wrote this post partly in response to my worries about Eliezer (and certain other autodidacts) whom I perceive not to be crackpots. Does that perception weigh in their favor, or only confirm me to be a fellow crackpot? I’m still trying to figure out what a crackpot is.
If you find yourself worrying whether a certain label applies to you, rather than wondering whether a specific set of claims are more or less likely to be true, be careful; social fears can easily derail the rational evaluation of evidence.
The question “What is bunk?” seems nigh unanswerable, a search for a dictionary definition to fill in a hanging node. Thinking in terms of “what class of claims can I dismiss as too unlikely on the face of it, and what claims have a high enough chance of truth that they’re worth investigating?” is more realistic, IMO.
The Crackpot Index is a good place to start.
There isn’t as much of a free-rider problem as you make it out to be. Different people can divide their time among different subjects to investigate. Thus, we all benefit from the collective effort.
Investigating unlikely claims is also healthy in general because it helps us hone our reasoning capabilities so people investigating them may get some direct benefit.
I’m not sure I like the category of “bunk”; it seems overly broad and not clearly defined. Your definition “there are claims so cracked that they aren’t worth investigating” is not a great one since different claims have different degrees of impact on how we’d need to realign our worldview. Also, some claims may be “bunk” in one form but not in others. To use your example of Austrian economics, there might be some truth in the claims about self-organization of market forces but the deliberate attempt to avoid empirical or statistical investigation (with some members of the Austrian school more or less explicitly saying that their system is not falsifiable) renders much of Austrian economics not even wrong.
It may be more helpful to ask: When should we take minority views seriously? What should we do when the area of study in which the matter falls is not one of our expertise?
You’re right, it is mostly a question of minority views, but I’ll defend my use of “bunk” a little bit.
Not every bunk view is a minority view; the majority of Americans believe in ghosts, for example. What makes me initially estimate it unlikely that ghosts exist is not that it’s a minority opinion (it’s not) but that it contradicts the entire framework I have for understanding the physical world. I start off, therefore, with a really low prior for ghosts. So low, in fact, that it’s potentially not worth the effort of further investigation.
In the case of ghosts it doesn’t take very much effort to investigate enough to toss out the claim; ghosts are an easy case. Other topics, though, take a lot of effort to investigate, and my initial low prior isn’t based on much evidence. Misclassifying them as bunk can be costly. But classifying nothing as bunk would break the bank, in attention and effort terms. Bunk is anything which, for whatever reason (being a minority view, requiring large realignment of our worldview, etc) is too unlikely to be worth checking.
And the problem of bunk is this: if it isn’t even worth it to do a preliminary check, how do you know how unlikely it is?
What I worry about is that, given that investigation takes effort, and given that we decide whether or not to investigate based on prior estimation of how likely a claim is, there are potentially claims that we’re disbelieving for no good reason. Perhaps individuals with limited time and energy are doomed to disbelieve some claims for no good reason.
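One crude way to make “too unlikely to be worth checking” concrete (a sketch only, with hypothetical units, not a real decision procedure): investigate a claim when its prior probability times the value of its being true exceeds the cost of checking it.

```python
def worth_investigating(prior, value_if_true, cost_of_checking):
    """Crude expected-value test: check a claim only if the expected payoff
    of its being true outweighs the effort of checking it."""
    return prior * value_if_true > cost_of_checking

# Hypothetical numbers in arbitrary effort/benefit units:
print(worth_investigating(prior=1e-6, value_if_true=10, cost_of_checking=1))    # False -- classic bunk
print(worth_investigating(prior=0.01, value_if_true=1000, cost_of_checking=1))  # True  -- unlikely but worth a look
```

The catch, of course, is the one just raised: the prior itself has to be set without doing the check.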
Electrical engineering is not the appropriate discipline, and neither is most of computer science. AI/cognitive science and philosophy are the closest.
Appropriate experts to “debunk” the singularity would be analytic philosophers such as David Chalmers, or AI/cognitive science people like Josh Tenenbaum, Stuart Russell, Peter Norvig, etc.
David Chalmers, by the way, has come out pretty strongly in support of us. See The Singularity: A Philosophical Analysis (http://consc.net/papers/singularity.pdf).
Peter Norvig’s 2p was in: “Peter Norvig—Singularity Institute Interview Series” http://video.google.com/videoplay?docid=-6754621605046052935#
Also, thanks, I didn’t know about that. My mistake.
Bryan Caplan spends time refuting Austrians—he thinks Austrian Economics is a mistake that wastes the time of a lot of quality free market economists.
Paul Krugman has also made a couple of short blog posts on the subject.
Surprisingly, I don’t think we’ve ever gotten deep into demarcation issues here. Anyone want to attempt demarcation criteria? Is that even a worthwhile task?
One word: attachment.
Claims like “The Singularity will occur within this century” do not have attached implications, i.e. there aren’t any particular facts we would expect to be able to observe right now if they were true. Things we dismiss as bunk either are directly contradicted by evidence (e.g. “The Earth is 6000 years old”) or lack evidence that we would expect to observe with extremely high probability were they true (e.g. alien abductions—it’s rather bizarre that aliens would do such specific things and somehow invariably avoid large demographics of society; and plenty more… fleshing out this example isn’t really my goal).
Bunk claims are thus those either directly and powerfully contradicted by evidence or that lack highly expected supporting evidence. Or those that are wholly unsupported by evidence and require some kind of magic to even be possible. I suspect that’s about all there is to it.
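A toy Bayes calculation, with numbers that are pure assumptions, to make “lacking highly expected supporting evidence” quantitative: if the evidence E would show up with probability 0.99 were the claim true and 0.05 otherwise, then failing to see E crushes even a fairly generous prior.

```python
# All numbers here are assumptions for illustration, not data.
prior        = 0.10          # generous prior that the claim is true
p_E_if_true  = 0.99          # evidence E is almost certain if the claim is true
p_E_if_false = 0.05          # E occasionally shows up anyway

# We observe *no* E. Bayes' rule:
p_noE_if_true  = 1 - p_E_if_true
p_noE_if_false = 1 - p_E_if_false
posterior = (prior * p_noE_if_true) / (
    prior * p_noE_if_true + (1 - prior) * p_noE_if_false
)
print(round(posterior, 4))   # 0.0012 -- absence of expected evidence is strong evidence of absence
```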
When I abduct humans, I abduct specifically those who are known to be liars, insane, or seeking attention.
Works wonders for the problem of witnesses.
Before anyone asks: rectal probing has extensive applications in paperclip manufacturing.
You just can’t do that. At least not without some a posteriori empirical data about the innovation in question. The more of an innovation something is, the less you can know about it in advance. And the less of a novelty something is, the better you can judge it.
You can certainly do it to some extent. For example, just because there are ongoing innovations in physics doesn’t mean I can’t safely dismiss perpetual motion claims. And while there’s constant research in my own field (number theory), I can dismiss a lot of crackpot claims of proofs of major theorems even though that research is ongoing. Moreover, people in some fields are able to evaluate claims as having very low probability even though they are technically possible given what we have today. For example, I have a friend who is a physicist who considers it extremely unlikely that we will ever have room-temperature superconductors. If some random individual came up to him claiming to have a way of constructing them, he’d be completely justified in assigning that claim a low probability. I don’t know whether you’d label that as a posteriori evidence or not, given that he has zero data about the individual claim, just about the type of claim in general.
Now, since you mentioned your field: I have a crackpot idea to evolve a divisor of a big number.
How many points on the crackpot scale from 0 to 99 have I earned with this? Zero means no quackery at all, while 80 is something like “I have a UFO in the basement, and a private zoo with the captured aliens”. I can’t imagine 99.
I’m not sure. I’d say it would depend on whether you’ve got an actual procedure for doing it. If yes, pretty close to 0; if not, maybe around 40 or so. Although the term “evolve” isn’t used, there are some procedures that try to do similar things.
Consider, for example, primitive roots. A primitive root modulo a prime p is an integer g such that the powers g^k run through every possible non-zero remainder when divided by p. Thus, for example, 2 is a primitive root modulo 5, since 2^1=2 (mod 5), 2^2=4 (mod 5), 2^3=8=3 (mod 5) and 2^4=16=1 (mod 5), so 1, 2, 3, and 4 are all accounted for. 2 is not a primitive root mod 7, since its powers only give the remainders 1, 2, and 4. (Most people here probably already know about primitive roots, but it seemed like a good idea to go over the basics for readers who might not. Then again, my assumption that most people will know may be some form of projection, and I may be assuming a much higher degree of knowledge about my field than can reasonably be expected.) Now, it turns out that number theorists care a lot about primitive roots. Aside from their intrinsic mathematical interest, they turn out to be useful in a number of practical algorithms, such as Diffie-Hellman key exchange, a simple-to-implement procedure useful in cryptography.
It turns out that every prime has a primitive root (a non-obvious fact first proved by Gauss), but for a given prime, finding one is tough in general. However, some of the procedures used to find primitive roots work by picking a set of random numbers, checking whether any is a primitive root, and if not, combining them in a certain way to get a number whose powers run through more remainders. One can iterate this process to eventually get a primitive root. In some sense this is evolving an answer to the problem, although that terminology would never be used. And there are factoring procedures that rely on not-so-far-off ideas (although calling them evolution would be more of a stretch). So the rough idea isn’t intrinsically crackpottish; it would depend a lot on the details.
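For concreteness, here’s a minimal Python sketch of the brute-force version (a real implementation would test orders using the factorization of p-1 rather than walking through all the powers), checking the 2 mod 5 and 2 mod 7 examples above plus a naive random search:

```python
import random

def is_primitive_root(g, p):
    """Brute-force check: do the powers of g hit every nonzero residue mod p?
    (Assumes p is prime and 2 <= g < p.)"""
    seen = set()
    x = 1
    for _ in range(p - 1):
        x = (x * g) % p
        seen.add(x)
    return len(seen) == p - 1

print(is_primitive_root(2, 5))   # True:  2, 4, 3, 1 -- all nonzero residues mod 5
print(is_primitive_root(2, 7))   # False: powers of 2 mod 7 only give 2, 4, 1

def find_primitive_root(p, tries=10_000, seed=0):
    """Naive randomized search: try random candidates until one works."""
    rng = random.Random(seed)
    for _ in range(tries):
        g = rng.randrange(2, p)
        if is_primitive_root(g, p):
            return g
    return None

print(find_primitive_root(101))  # prints some primitive root mod 101
```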
Umh.. twenty?
You’d be applying a weak optimization process to the problem instead of using your built-in much stronger one, and hoping that its different set of biases will let it hit on a useful algorithm that you yourself wouldn’t.
Intuitively, math-space is too big and twisted for evolution to work, and it’d suffer horribly from getting stuck on local maxima. I don’t know this for certain, however, and even if you fail you’ll still have learned something.
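For what it’s worth, here is a toy sketch in Python of the “evolve a divisor” idea as a simple mutation-and-selection search (all parameters hypothetical). As the comments above suggest, it only has a chance when the number has small factors, and ordinary trial division beats it easily; it’s here purely to make the idea concrete.

```python
import random

def evolve_divisor(n, pop_size=50, generations=200, seed=0):
    """Toy mutation-and-selection search for a nontrivial divisor of n.
    Fitness rewards candidates d for which n % d is small; mutation nudges
    survivors by a few units. Purely illustrative -- trial division or
    Pollard's rho is far better in practice."""
    rng = random.Random(seed)
    upper = int(n ** 0.5) + 1
    pop = [rng.randrange(2, upper + 1) for _ in range(pop_size)]

    for _ in range(generations):
        pop.sort(key=lambda d: n % d)          # smaller remainder = fitter
        best = pop[0]
        if n % best == 0:
            return best                        # found an exact divisor
        survivors = pop[: pop_size // 2]
        children = [min(max(d + rng.randrange(-3, 4), 2), upper) for d in survivors]
        pop = survivors + children

    return None                                # gave up; maybe n is prime

print(evolve_divisor(91))    # 7 (91 = 7 * 13), assuming the search stumbles on it
print(evolve_divisor(101))   # None -- 101 is prime
```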
At least not always. At least.
Yes, but a perpetual motion machine would be an innovation par excellence, wouldn’t it? Especially for you and me and everybody else, who are almost certain it’s not possible.
Yes, again. But whatever is quite familiar to you, whatever you can easily grasp, is not a big innovation for you. Maybe important, but not that innovative. You have thought similar thoughts already.
I tend to agree with him. Anyway, room-temperature superconductivity would be a very important but not a very innovative thing, unless it were based on some completely unexpected principles. Then it would be innovative too.
Could you expand on what you mean by innovative then? How do you define something as innovative?
Done in a new way. Unprecedented and mainly unexpected. That doesn’t mean it is very important, only that it is a surprise for almost everyone.
http://wordnetweb.princeton.edu/perl/webwn?s=innovativeness
Check!
I’m still not clear on how this definition applies to what the top-level post discussed. The ideas in the top-level post aren’t unprecedented; many of them have been around for a very long time. So talking only about ideas that are unprecedented and mainly unexpected seems unhelpful. Also, I’m not sure what constitutes “unprecedented” in this context.