I am a relative newbie commenter here, and my interest in this site has so far been limited to using it as a fun forum where it’s possible to discuss all kinds of sundry topics with exceptionally smart people. However, I have read a large part of the background sequences, and I’m familiar with the main issues of concern here, so even though it might sound impertinent coming from someone without any status in this community, I can’t resist commenting on this article.
To put it bluntly, I think the main point of the article is, if anything, an understatement. Let me speak from personal experience. From the perspective of this community, I am the sort of person who should be exceptionally easy to interest and win over to its cause, considering both my intellectual background and my extreme openness to contrarian viewpoints and skepticism towards official academic respectability as a criterion of truth and intellectual soundness. Yet, to be honest, even though I find a lot of the writing and discussion here extremely interesting, and the writings of Yudkowsky (in addition to others such as Bostrom, Hanson, etc.) have convinced me that technology-related existential risks should be taken much more seriously than they presently are, I still keep encountering things in this community that set off various red flags, which are undoubtedly taken by many people as a sign of weirdness and crackpottery, and thus alienate a huge part of the potential quality audience.
Probably the worst such example I’ve seen was the recent disturbance in which Roko was subjected to abuse that made him leave. When I read the subsequent discussions, it surprised me that virtually nobody here appears to be aware of what an extreme PR disaster it was. Honestly, for someone unfamiliar with this website who has read about that episode, it would be irrational not to conclude that there’s some loony cult thing going on here, unless he’s also presented with enormous amounts of evidence to the contrary in the form of a selection of the best stuff that this site has to offer. After these events, I myself wondered whether I wanted to be associated with an outlet where such things happen, even just as an occasional commenter. (And that’s not even to mention that Roko’s departure is an enormous PR loss in its own right, in that he was one of the few people here who knew how to write in a way that’s interesting and appealing to people who aren’t hard-core insiders.)
Even besides this major PR fail, I see many statements and arguments here that may be true, or at least not outright unreasonable, but should definitely be worded more cautiously and diplomatically if they’re stated openly for the whole world to see. I’m not going to get into details of concrete examples—in particular, I do not concur unconditionally with any of the specific complaints from the above article—but I really can’t help but conclude that lots of people here, including some of the most prominent individuals, seem oblivious to how broader audiences, even all kinds of very smart, knowledgeable, and open-minded people, will perceive what they write and say. If you want to have a closed inner circle where specific background knowledge and attitudes can be presumed, that’s fine—but if you set up a large website attracting lots of visitors and participants to propagate your ideas, you have to follow sound PR principles, or else its effect may well end up being counterproductive.
I agree completely. I still read LessWrong because I am a relatively long-time reader, and thus I know that most of the people here are sane. Otherwise, I would conclude that there is some cranky process going on here. Still, the Roko affair caused me to significantly lower the probability I assign to SIAI’s success and forced me to seriously consider the hypothesis that Eliezer Yudkowsky has gone crazy.
By the way, I have a slightly disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality, as the blog’s header proudly states, while instead the posts often discuss a relatively narrow list of topics which are only tangentially related to rationality: e.g. cryonics, AI stuff, evolutionary psychology, Newcomb-like scenarios.
By the way, I have a slightly disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality
Part of that mission is to help people overcome the absurdity heuristic, and to help them think carefully about topics that normally trigger a knee-jerk reflex of dismissal on spurious grounds; it is in this sense that cryonics and the like are more than tangentially related to rationality.
I do agree with you that too much of the newer material keeps returning to those few habitual topics that are “superstimuli” for the heuristic. This perhaps prevents us from reaching out to newer people as effectively as we could. (Then again, as LW regulars we are biased in that we mostly look at what gets posted, when what may matter more for attracting and keeping new readers is what gets promoted.)
A site like YouAreNotSoSmart may be more effective in introducing these ideas to newcomers, to the extent that it mostly deals with run-of-the-mill topics. What makes LW valuable, and what YANSS lacks, is constructive advice for becoming less wrong.
As for overcoming the absurdity heuristic, it would be more helpful to illustrate its inappropriateness on thoughts which seem absurd while having a lot of data proving them right, rather than on predictions like the Singularity, which are mostly based on … just different heuristics.
One good sign here is that LW, unlike most other non-mainstream organizations, doesn’t really function like a cult. Once one person starts being critical, critics start coming out of the woodwork. I have my doubts about this place sometimes too, but it has a high density of knowledgeable and open-minded people, and I think it has a better chance than anyone of actually acknowledging and benefiting from criticism.
I’ve tended to overlook the weirder stuff around here, like the Roko feud—it got filed under “That’s confusing and doesn’t make sense” rather than “That’s an outrage.” But maybe it would be more constructive to change that attitude.
Singularitarianism, transhumanism, cryonics, etc. probably qualify as cults under at least some of the meanings of the term (http://en.wikipedia.org/wiki/Cult). Cults do not necessarily lack critics.
The Wikipedia page on Cult Checklists includes seven independent sets of criteria for cult classification, provided by anti-cult activists who have strong incentives to cast as wide a net as possible. Singularitarianism, transhumanism, and cryonics fit none of those lists. In most cases, it isn’t even close.
I disagree with your assessment. Let’s just look at LW for starters.
Eileen Barker:
It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI visiting fellows, and having meetups) is hardly cutting oneself off from others; however, there is certainly some tendency to cut ourselves off socially—note for example the many instances of folks worrying they will not be able to find a sufficiently “rationalist” significant other.
Huge portions of the views of reality of many people here have been shaped by this community, and Eliezer’s posts in particular; many of those people cannot understand the math or argumentation involved but trust Eliezer’s conclusions nonetheless.
Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.
Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.
Nope. Though some would credit Eliezer with trying to become or create God.
Obviously. Less Wrong is quite focused on rationality (though that should not be odd) and Eliezer is rather… driven in his own overarching goal.
Based on that, I think Eileen Barker’s list would have us believe LW is a likely cult.
Shirley Harrison:
I’m not sure if ‘from above’ qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.
While ‘revealed’ is not necessarily accurate in some senses, the “Sequences” are quite long and anyone who tries to argue is told to “read the Sequences”. Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.
Nope
Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.
This one is questionable. But surely Eliezer is trying the advanced technique of sharing part of his power so that we will begin to see the world the way he does.
There is volunteer effort at LW, and posts on LW are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.
No sign of this
“Exclusivity - ‘we are right and everyone else is wrong’”. Very yes.
Based on that, I think Shirley Harrison’s list would have us believe LW is a likely cult.
Similar analysis using the other lists is left as an exercise for the reader.
Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.
I believe that most LW posters are not signed up for cryonics (myself included), and there is substantial disagreement about whether it’s a good idea. And that disagreement has been well received by the “cult”, judging by the karma scores involved.
Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.
Theism has been discussed. It is wrong. But Robert Aumann’s work is still considered very important; theists are hardly dismissed as “satanic,” to use Barker’s word.
Of Barker’s criteria, 2-4 of 6 apply to the LessWrong community, and only one (“Leaders and movements who are unequivocally focused on achieving a certain goal”) applies strongly.
On Shirley Harrison:
I’m not sure if ‘from above’ qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.
I can’t speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.
While ‘revealed’ is not necessarily accurate in some senses, the “Sequences” are quite long and anyone who tries to argue is told to “read the Sequences”. Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.
No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.
Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.
What you describe is a preposterous exaggeration, not “[t]otalitarianism and alienation of members from their families and/or friends.”
There is volunteer effort at LW, and posts on LW are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.
Any person who promotes a charity at which they work is pushing a cult, by this interpretation. Eliezer isn’t “lining his own pockets”; if someone digs up the numbers, I’ll donate $50 to a charity of your choice if it turns out that SIAI pays him a salary disproportionately greater (2 sigmas?) than the average for researchers at comparable non-profits.
So that’s 2-6 of Harrison’s checklist items for LessWrong, none of them particularly strong.
My filters would drop LessWrong in the “probably not a cult” category, based on those two standards.
Eliezer was compensated $88,610 in 2008 according to the Form 990 filed with the IRS and which I downloaded from GuideStar.
Wikipedia tells me that the median 2009 income in Redwood, where Eliezer lives, is $69,000.
(If you are curious, Tyler Emerson in Sunnyvale (median income 88.2k) makes 60k; Susan Fonseca-Klein, also in Redwood, was paid 37k. Total employee expenses are 200k, but the three salaries total 185k; I don’t know what accounts for the difference. The form doesn’t seem to say.)
I can’t speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.
What exactly are Eliezer’s qualifications supposed to be?
I’m definitely not trying to attack anyone (and you’re right, my comment could be read that way). But I’m also not just curious. I figured this was the answer. Lots of time spent thinking, writing, and producing influential publications on FAI is about all the qualifications one can reasonably expect (producing a provable mathematical formalization of friendliness is the kind of thing no one is qualified to do before they do it, and the AI field in general is relatively new and small). And Eliezer is obviously a really smart guy. He’s probably even the most likely person to solve it. But the effort to address the friendliness issue seems way too focused on him and the people around him. You shouldn’t expect any one person to solve a Hard problem. Insight isn’t that predictable, especially when no one in the field has solved comparable problems before. Maybe Einstein was the best bet to formulate a unified field theory, but a) he never did, and b) he had actually had comparable insights in the past. Part of the focus on Eliezer is just an institutional and financial thing, but he and a lot of people here seem to encourage this state of affairs.
No one looks at open problems in other fields this way.
No one looks at open problems in other fields this way.
Yes, the situation isn’t normal or good. But this isn’t a balanced comparison, since we don’t currently have a field; too few people understand the problem and have seriously thought about it. This is gradually changing, and I expect it will be visibly less of a problem in another 10 years.
I may have an incorrect impression, but SIAI or at least Eliezer’s department seems to have a self-image comparable to the Manhattan project rather than to the early pioneers of a scientific field.
I may have an incorrect impression, but SIAI or at least Eliezer’s department seems to have a self-image comparable to the Manhattan project
Eliezer has said: “I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me.” Your call as to whether you believe that. (The rest of that post, and some of his other posts in that discussion, address some points similar to those that you raised.)
That said, “self-image comparable to the Manhattan project” is an unusually generous ascription of humility to SIAI and Eliezer. :P
...producing a provable mathematical formalization of friendliness [...] And Eliezer is obviously a really smart guy. He’s probably even the most likely person to solve it.
I haven’t seen any proof of his math skills that would justify this statement. By what evidence have you arrived at the conclusion that he can do it at all, or even approach it? The sequences and the SIAI publications certainly show that he was able to compile a bunch of existing ideas into a coherent framework of rationality, yet there is not much novelty to be found anywhere.
Which statement are you talking about? Saying someone is the most likely person to do something is not the same as saying they are likely to do it. You haven’t said anything in this comment that I disagree with, so I don’t understand what we’re disputing.
To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.
How influential are his publications if they could not convince Ben Goertzel (SIAI/AGI researcher), someone who has read Yudkowsky’s publications and all of the LW sequences? You could argue that he and other people don’t have the smarts to grasp Yudkowsky’s arguments, but who does? Either Yudkowsky is so smart that some academics are unable to appreciate his work or there is another problem. How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?
The problem here is that telling someone that Yudkowsky spent a lot of time thinking and writing about something is not a qualification. Further it does not guarantee that he would acknowledge and welcome the contributions of others who disagree.
The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn’t have to be free of people who disagree with it to be influential, and it doesn’t even have to be correct.
How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?
Level up first. I can’t evaluate physics research, so I just accept that I can’t tell which of it is correct; I don’t try to figure it out from the politics of physicists arguing with each other, because that doesn’t work.
Level up first. I can’t evaluate physics research, so I just accept that I can’t tell which of it is correct; I don’t try to figure it out from the politics of physicists arguing with each other, because that doesn’t work.
But what does this mean regarding my support of the SIAI? Imagine I were a politician who had no time to level up first but who had to decide whether some particle accelerator or AGI project should be financed at all, or should go ahead with full support and without further security measures.
Would you tell a politician to go and read the sequences, and if, after reading the publications, they don’t see why AGI research is as dangerous as portrayed by the SIAI, they should just forget about it and stop trying to figure out what to do? Or do you simply tell them to trust a fringe group which does predict that a given particle accelerator might destroy the world when all experts claim there is no risk?
Writing is influential when many people are influenced by it.
You talked about Yudkowsky’s influential publications. I thought you meant some academic papers, not the LW sequences. They indeed influenced some people, yet I don’t think they influenced the right people.
Your interpretation seems uncharitable. I find it unlikely that you have enough information to make a confident judgment that XiXiDu’s comment is born of motivated cognition to a greater extent than your own comments.
Moreover, I believe that even when such statements are true, one should avoid making them when possible, as they’re easily construed as personal attacks which tend to spawn an emotional reaction in one’s conversation partners, pushing them into an arguments as soldiers mode which is detrimental to rational discourse.
Moreover, I believe that even when such statements are true, one should avoid making them when possible
Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you’re going wrong, you won’t improve.
as they’re easily construed as personal attacks which tend to spawn an emotional reaction in one’s conversation partners
On this blog, any conversational partners should definitely not be construing anything as personal attacks.
pushing them into an arguments as soldiers mode which is detrimental to rational discourse.
On this blog, any person should definitely be resisting this push.
Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you’re going wrong, you won’t improve.
I did not say that one should avoid telling people when and where they’re going wrong. I was objecting to the practice of questioning people’s motivations. For the most part I don’t think that questioning somebody’s motivations is helpful to him or her.
On this blog, any conversational partners should definitely not be construing anything as personal attacks.
I disagree. Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise. Just because the blog is labeled as being devoted to the art of refining rationality doesn’t mean that the commentators are always above this sort of thing.
I agree with you insofar as I think that one should work to interpret comments charitably.
On this blog, any person should definitely be resisting this push.
I agree, but this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.
I was objecting to the practice of questioning people’s motivations.
Not questioning their motivations; you objected to the practice of pointing out motivated cognition:
I find it unlikely that you have enough information to make a confident judgment that XiXiDu’s comment is born of motivated cognition … Moreover, I believe that even when such statements are true, one should avoid making them when possible
Pointing out that someone hasn’t thought through the issue because they are motivated not to—this is not an attack on their motivations; it is an attack on their not having thought through the issue. Allowing people to keep their motivated cognitions out of respect for their motivations is wrong, because it doesn’t let them know that they have something wrong, and they miss a chance to improve it.
Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise.
To paraphrase steven, if you’re interested in winning disputes you should dismiss personal attacks, but if you’re interested in the truth you should dig through their personal attacks for any possible actual arguments. Whether or not it’s a personal attack, you ought to construe it as if it is not, in order to maximise your chances of finding truth.
this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.
Agreed. I think the first two parts of our comments address whether one should exert such a push. I think you’re right, and this whole third part of our discussion is irrelevant.
It’s quite possible to be inaccurate about other people’s motivations, and if you are, then they will have another reason to dismiss your argument.
How do you identify motivated cognition in other people?
Not thinking something through could be habitual sloppiness, repeating what one has heard many times, or not thinking that a question is worthy of much mental energy rather than a strong desire for a particular conclusion. (Not intended as a complete list.)
Making a highly specific deduction from an absence rather than a presence strikes me as especially likely to go wrong.
How do you identify motivated cognition in other people?
Some of the same ways I see it in myself. Specifically, when dealing with others:
Opposed to easy (especially quick or instant) tests: strong evidence of motivated stopping.
All for difficult (especially currently-impossible) tests: moderate evidence of motivated continuing.
Waiting on results of specific test to reconsider or take a position: moderate evidence of motivated continuing.
Seemingly-obvious third alternative: very strong evidence of motivated stopping. Caveat! this one is problematic. It is very possible to miss third alternatives.
Opposed to plausible third alternatives: weak evidence of motivated stopping—strong evidence with a caveat and split, as “arguments as soldiers” can also produce this effect. Mild caveat on plausibility being somewhat subjective.
In the case of XiXiDu’s comment, focusing on Ben Goertzel’s rejection is an example of waiting on results from a specific test. That is enough evidence to locate the motivated continuing hypothesis¹, i.e., that XiXiDu does not want to accept the current best-or-accepted-by-the-community answer.
The questions XiXiDu posed afterwards seem to have obvious alternative answers, which suggests motivated stopping. He seems to be stopping on “Something’s fishy about Eliezer’s setup”.
¹: As well as “Goertzel is significantly ahead of the AI development curve” and “AGI research and development is a field with rigid formal rules on what does and doesn’t convince people”—the first is easily tested by looking at Ben’s other views, the second is refuted by many researchers in that field.
Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer’s stance.
For what it’s worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He’s also responsible for coining “Seed AI”.
Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer’s stance.
Indeed, I was just trying to figure out what someone with money or power, who wants to do the right thing but does not have the smarts, should do. Someone like a politician or billionaire who would like to either support some AGI research or the SIAI. How are they going to decide what to do if all AGI experts tell them that there is no risk from AGI research and that the SIAI is a cult, when at the same time the SIAI tells them the AGI experts are intellectually impotent and the SIAI is the only hope for humanity to survive the AI revolution? What should someone who does not have the expertise or smarts to evaluate those claims, but who nevertheless has to decide how to use his power, do? I believe this is not an unrealistic scenario, as many rich or powerful people want to do the right thing, yet do not have the smarts to see why they should trust Yudkowsky instead of hundreds of experts.
For what it’s worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He’s also responsible for coining “Seed AI”.
Interesting. When did he come up with the concept of “Seed AI”? Because it is mentioned in Karl Schroeder’s Ventus (Tor Books, 2000), ISBN 978-0312871970.
Didn’t find the phrase “Seed AI” there. One plot element is a “resurrection seed”, which is created by an existing, mature evil AI to grow itself back together in case its main manifestation is destroyed. A Seed AI is a different concept: it’s something the pre-AI engineers put together that grows into a superhuman AI by rewriting itself to be more and more powerful. A Seed AI is specifically a method to get to AGI from not having one, not just an AI that grows from a seed-like thing. I don’t remember recursive self-improvement being mentioned with the seed in Ventus.
A precursor concept, where the initial AI bootstraps itself by merely learning things, not necessarily by rewriting its own architecture, goes all the way back to Alan Turing’s 1950 paper on machine intelligence.
Look at it this way. Once long ago two kinds of work converged. We’d figured out how to make machines that could make more machines. And we’d figured out how to get machines to… not exactly think, but do something very much like it. So one day some people built a machine which knew how to build a machine smarter than itself. That built another, and that another, and soon they were building stuff the men who made the first machine didn’t even recognize.
[...]
And, some of the mechal things kept developing, with tremendous speed, and became more subtle than life. Smarter than humans. Conscious of more. And, sometimes, more ambitious. We had little choice but to label them gods after we saw what they could do—namely, anything.
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
Yes, but I believe it is a bit weird for a Wikipedia article to state that someone is the originator of the Seed AI theory when he just coined the term. I wasn’t disputing anything, just trying to figure out if it is actually the case that Yudkowsky came up with the concept in the first place.
They did not command the wealth of nations, these researchers. Although their grants amounted to millions of Euros, they could never have funded a deep-space mission on their own, nor could they have built the giant machineries they conceived of. In order to achieve their dream, they built their prototypes only in computer simulation, and paid to have a commercial power satellite boost the Wind seeds to a fraction of light speed. [...] no one expected the Winds to bloom and grow the way they ultimately did.
It is further explained that the Winds were designed to evolve on their own so they are not mere puppets of human intentions but possess their own intrinsic architecture.
In other places in the book it is explained how humans did not create their AI Gods but that they evolved themselves from seeds designed by humans.
Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer’s stance.
I don’t think the failure of someone to be convinced of some position is ever strong evidence against that position. But this argument here is genuinely terrible. I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?
If someone is unable to examine the available evidence and come to a sane conclusion on a particular topic, this makes it less likely that they are able to examine the available evidence and come to sane conclusions on other topics.
I don’t take Goertzel seriously for the same reason I don’t take young earth creationists seriously. It’s not that I disagree with him, it’s that his beliefs have almost no connection to reality.
(If it makes you feel better, I have read some of Goertzel’s writing on AGI, and it’s stuffed full of magical thinking.)
And computer scientists haven’t understood the self – because it isn’t about computer science. It’s about the emergent dynamics that happen when you put a whole bunch of general and specialized pattern recognition agents together – a bunch of agents created in a way that they can really cooperate – and when you include in the mix agents oriented toward recognizing patterns in the society as a whole.
and
The goal systems of humans are pretty unpredictable, but a software mind like Novamente is different – the goal system is better-defined. So one reasonable approach is to make the first Novamente a kind of Oracle. Give it a goal system with one top-level goal: To answer peoples’ questions, in a way that’s designed to give them maximum understanding.
It’s possible that with sufficient real-world intelligence tends to come a sense of connectedness with the universe that militates against squashing other sentiences.
I mostly disagree with Ben, but I don’t think judging him based on that paper is fair. It’s pretty bad, but it was also written in 1996. Fourteen years is a lot of time to improve as a thinker.
I had that thought too, and I was thinking of retracting or amending my comment to that effect, but looking at some of his later publications in the same journal(?) suggests that he hasn’t leveled up much since then.
“The Futility of Emergence” really annoys me. “Emergent” is a perfectly useful word. It’s a statement about the map rather than about the territory, but it’s a useful one. Whereas magic means “unknowable unknowns”, emergent means “known unknowns”—the stuff that we know follows, we just don’t know how.
e.g. Chemistry is an emergent property of the Schrodinger equation, but calculating anything useful from that is barely in our grasp. So we just go with the abstraction we know, and they’re separate sciences. But we do know we have that work to do.
Just linking to that essay every time someone you’re disagreeing with says “emergent” is difficult to distinguish from applause lights.
Saying the word “emergent” adds nothing. You’re right that it’s not as bad as calling something magic and declaring that it’s inherently unknowable, but it also offers zero explanatory power. To reword your example:
Chemistry is a property of the Schrodinger equation, but calculating anything useful from that is barely in our grasp. So we just go with the abstraction we know, and they’re separate sciences. But we do know we have that work to do.
There is absolutely no difference in meaning when you take the word “emergent” out. That’s why it isn’t useful, which Eliezer was pointing out.
Nitpick: I don’t think that is exactly what EY was pointing out. Take a look at the comments and the general response of “Huh? Who makes that mistake?” It seems EY was complaining about the tendency of AGI researchers to use “emergence” as if it were an explanation, not the ordinary use of the word that doesn’t pretend it is one but just, say, points out that the behavior is surprising given what it’s composed of, or that your current methods aren’t powerful enough to predict the consequences. He didn’t seem to have realized that particular mistake was mostly localized to AGI people.
It seems more likely that when the cited people said “intelligence is an emergent phenomenon”, they were misunderstood as proposing that as a satisfactory explanation of the phenomenon.
There is absolutely no difference in meaning when you take the word “emergent” out. That’s why it isn’t useful, which Eliezer was pointing out.
I’m not entirely sure this is correct. I wouldn’t call the trajectories of planets and galaxies “properties” of Relativity, but I would call it emergent behavior due to Relativity. It’s a stylistic and grammatical choice, like when to use “which” and when to use “that.” They may seem the same to the uninitiated, but there’s a difference and the initiated can tell when you’re doing it wrong.
So, I agree with David Gerard that trying to eradicate the use of the word is misplaced. It’d be like saying “the word ‘which’ is obsolete, we’re only going to use ‘that’ and look down on anyone still using ‘which’.” You lose far more by such a policy than you gain.
I don’t take Goertzel seriously for the same reason I don’t take young earth creationists seriously. It’s not that I disagree with him, it’s that his beliefs have almost no connection to reality.
From what I’ve seen, the people who comment here who have read Broderick’s book have come away, if not convinced psi describes some real physical phenomena, convinced that the case isn’t at all open and shut the way young earth creationism is. When an issue is such that smart, sane people can disagree, then you have to actually resolve the object-level disagreement before you can use someone’s beliefs on the issue in a general argument about their rationality. You can’t just assume it as you do here.
Goertzel’s paper on the subject is about extending the de Broglie-Bohm pilot wave theory in a way that accounts for psi while being totally consistent with all known physics. Maybe it is nonsense; I haven’t read it. But you can’t assume it is.
Maybe it is nonsense; I haven’t read it. But you can’t assume it is.
I disagree. I do not need to (and should not) discard my priors when evaluating claims.
It would be an error in reasoning on my part if I did not account for the low prior (to reading it) probability of a psionics theory being sane when evaluating the proponent’s other claims. For emphasis: not lowering my confidence in Goertzel’s other beliefs because he is a proponent of psi, without me having read his paper, would be an outright mistake.
I also note that you defending Goertzel on the psi point is evidence against Goertzel’s beliefs regarding AI. Extremely weak evidence.
Could you unpack your reasoning? Do you mean that Jack defending Goertzel on psi discredits defense of Goertzel on AI because it shows such defense to be less correlated to the validity of the opinion than previously thought? Or did you drop a negation or something and mean the opposite of what you wrote, because Jack defending Goertzel on psi is very slight evidence of Goertzel’s opinion on psi not being as crazy as you previously thought?
I don’t think the failure of someone to be convinced of some position is ever strong evidence against that position.
Ever is a strong word. If a competent expert in a field who has a known tendency to err slightly on the side of too much openness to the cutting edge fails to be convinced by a new finding within his field, that says an awful lot.
I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?
That is simply not the form of the argument you quote. “Ben Goertzel believes in psychic phenomena” cannot be represented as “I disagree with person x”.
That is simply not the form of the argument you quote. “Ben Goertzel believes in psychic phenomena” cannot be represented as “I disagree with person x”.
I’m being generous and giving the original comment credit for an implicit premise. As stated, the argument is “Person x believes y, therefore person x is wrong about z.” This is so obviously wrong it makes my head hurt. WrongBot’s point is that someone has to have a poor reasoning capacity to believe in psi. But since he didn’t provide any evidence to that effect, it reduces to ‘I disagree with Goertzel about psi’.
I generally don’t try to provide evidence for every single thing I say, and I am especially lax about things that I consider to be incredibly obvious.
But I’m annoyed enough to lay out a very brief summary of why belief in PSI is ludicrous:
It isn’t permitted by known physics.
There are no suggested mechanisms (so far as I’m aware) for PSI which do not contradict proven physical laws.
The most credible studies which claim to demonstrate PSI have tiny effect sizes, and those haven’t been replicated with larger sample sizes.
Publication bias.
PSI researchers often seem to possess motivated cognition.
We’ve analyzed the functioning of individual neurons pretty closely. If there are quantum microtubules or other pseudoscientific nonsense in them, they don’t seem to affect how those individual neurons behave.
No one has to give evidence for everything they say, but when things that you thought were obviously wrong begin to get defended by physics-literate reductionist materialists, that seems like a good time to lower your confidence.
There are no suggested mechanisms (so far as I’m aware) for PSI which do not contradict proven physical laws.
Well to begin with, Goertzel’s paper claims to be such a mechanism. Have you read it? I don’t know if it works or not. Seems unwise to assume it doesn’t though.
Publication bias, motivated cognition, and effect size are all concerns, and were my previous explanation. I found that this meta-analysis upset that view for me.
If PSI exploits weird physics in a complicated manner and produces such tiny effects, where the hell did the mechanism come from? PSI would obviously be a very useful adaptation, so why don’t we see it in other species? Why aren’t the effects stronger, since there’s such a strong evolutionary pressure in favor of them?
Goertzel’s paper also includes psychokinesis as a PSI phenomenon supported by strong evidence. I would love to see the study he’s talking about for that one. Or a video.
If PSI exploits weird physics in a complicated manner and produces such tiny effects, where the hell did the mechanism come from? PSI would obviously be a very useful adaptation, so why don’t we see it in other species? Why aren’t the effects stronger, since there’s such a strong evolutionary pressure in favor of them?
All of this is also discussed in Outside the Gates. I can try to dig up what he said this weekend.
Goertzel’s paper also includes psychokinesis as a PSI phenomenon supported by strong evidence. I would love to see the study he’s talking about for that one. Or a video.
The experiments aren’t macroscopic. The results involve statistical deviations from expected normal distributions of, say, white noise generators when participants try to will the results in different directions. I don’t think these results are nearly as compelling as other things; see Jahn and Dunne 2005, for example. They had some methodological issues, and the one attempt that was made at replication, while positive, wasn’t significant at anywhere near the level of the original.
If you’re actually interested you should consider checking out the book. It is a quick, inexpensive read. Put it this way: I’m not some troll who showed up here to argue about parapsychology. Six months ago I was arguing your position here with someone else and they convinced me to check out the book. I then updated significantly in the direction favoring psi (not enough to say it exists more likely than not, though). Everything you’ve said is exactly what I was saying before. It turns out that there are sound responses to a lot of the obvious objections, making the issue not nearly as clear cut as I thought.
As stated, the argument is “Person x believes y, therefore person x is wrong about z.” This is so obviously wrong it makes my head hurt.
It would be wrong if it were a logical deduction instead of an inference. That is, if WrongBot actually wrote ‘therefore’ or otherwise signaled absolute deductive certainty then he would be mistaken. As is he presents it as evidence, which it in fact is.
WrongBot’s point is that someone has to have a poor reasoning capacity to believe in psi. But since he didn’t provide any evidence to that effect, it reduces to ‘I disagree with Goertzel about psi’.
There is a clear implied premise ‘psychic phenomena are well known to be bullshit’. Not all baseline premises must be supported in an argument. Instead, the argument should be considered stronger or weaker depending on how reliable the premises are. I don’t think WrongBot loses too much credibility in this case by dismissing psychic phenomena.
It would be wrong if it were a logical deduction instead of an inference. That is, if WrongBot actually wrote ‘therefore’ or otherwise signaled absolute deductive certainty then he would be mistaken. As is he presents it as evidence, which it in fact is.
It isn’t even evidence until you include a premise about the likelihood of y, which we agree is the implied premise.
There is a clear implied premise ‘psychic phenomena are well known to be bullshit’. Not all baseline premises must be supported in an argument. Instead, the argument should be considered stronger or weaker depending on how reliable the premises are. I don’t think WrongBot loses too much credibility in this case by dismissing psychic phenomena.
I think I’m just restating the exchange I had with komponisto on this point. Goertzel’s position isn’t that of someone who doesn’t know any physics or Enlightenment-style rationality. It is clearly a contrarian position, which should be treated rather differently, since we can assume he is familiar with the reasons why psychic phenomena are ‘well known to be bullshit’. It is a fully generalizable tactic which can be used against any and all contrarian thinkers. Try “Robin Hanson thinks we should cut health care spending 50%, therefore he is less likely to be right about fertility rate.”
It isn’t even evidence until you include a premise about the likelihood of y, which we agree is the implied premise.
This is obviously going to be the case when trying to convince an individual of something. The beliefs (crackpot or otherwise) of the target audience are always going to be relevant to persuasiveness. As a comment directed in part at the wider LessWrong audience, the assumed premises will be different.
Try “Robin Hanson thinks we should cut health care spending 50%, therefore he is less likely to be right about fertility rate.”
If I were a reader who thought Robin’s position on health care was as implausible as belief in magic, and thought that making claims about fertility was similar to AI strategy, then I would take this seriously. As it stands, the analogy is completely irrelevant.
Any argument of this nature needs to include some explanation of why someone’s ability to think about y is linked to their ability to think about z. But even with that (which wasn’t included in the comment) you can only conclude that y and z imply each other. You can’t just conclude z.
In other words, you have to show Goertzel is wrong about psychic phenomena before you can show that his belief in them is indicative of reasoning flaws elsewhere.
I got the impression from Damien Broderick’s book that a lot of PSI researchers do understand physics and aren’t postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use. There’s a story that Einstein was interested in PSI research, but declared it nonsense when the claimed results showed PSI effects that weren’t subject to the inverse square law, so this isn’t a new idea.
Damien Broderick’s attitude in his book is basically that there’s a bunch of anomalous observations and neither a satisfactory explanation nor, in his opinion, a refutation for them exists. Goertzel’s attitude is to come up with a highly speculative physical theory that could explain that kind of phenomena, and which would take a bit more than “would need extra particles” to show as nonsense.
“Not understanding basic physics” doesn’t really seem to cut it in either case. “It’s been looked into by lots of people, a few of them very smart, for 80 years, and nothing conclusive has come out of it, so most likely there isn’t anything in it, and if you still want to have a go, you better start with something the smart people in 1970s didn’t have” is basically the one I’ve got.
I’m not holding my breath over the recent Bem results, since he seems to be doing pretty much the same stuff that was done in the 70s and always ended up failing one way or the other, but I’m still waiting for someone more physics-literate to have a go at Goertzel’s pilot wave paper.
I got the impression from Damien Broderick’s book that a lot of PSI researchers do understand physics and aren’t postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use...
“Not understanding basic physics” doesn’t really seem to cut it in either case
“Not understanding basic physics” sounds like a harsh quasi-social criticism, like “failing at high-school material”. But that’s not exactly what’s meant here. Rather, what’s meant is more like “not being aware of how strong the evidence against psi from 20th-century physics research is”.
The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H [EDIT: technically, this is not necessarily true, but it usually is in practice, and becomes more likely as P(H|M) approaches 0]. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.
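One way to spell out that Bayesian point (a sketch only, in the same plain P(·|·) notation used later in this thread; it adds nothing beyond the claim itself): decompose the probability of H over M and its complement,

P(H|e) = P(H|M,e) P(M|e) + P(H|~M,e) P(~M|e).

If P(H|M) is near zero and is not much changed by the observation e, then any e that raises P(M|e) shifts weight from the ~M term (where H may still be viable) onto the M term (where H is nearly impossible), and so lowers P(H|e). The bracketed caveat is exactly where this can fail: if e simultaneously raises P(H|M,e) or P(H|~M,e) enough to compensate, the net effect can go the other way, which the comment notes becomes less likely as P(H|M) approaches 0.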
The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.
Evidence distinguishes between models; it is not for individual models. There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.
Evidence distinguishes between models; it is not for individual models.
By the Bayesian definition of evidence, “evidence for” a hypothesis (including a “model”, which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.
There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.
Carroll claims that current data implies the probability of such models being correct is near zero. So I’d like to invoke Aumann here and ask what your explanation for the disagreement is. Where is Carroll’s (and others’) mistake?
including a “model”, which is just a name for a complex conjunction of hypotheses
If models are just complex conjunctions of hypotheses then the evidence that confirms models will often confirm some parts of the model more than others. Thus the evidence does little to distinguish the model from a different model which incorporates slightly different hypotheses.
By the Bayesian definition of evidence, “evidence for” a hypothesis (including a “model”, which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.
Yes, but this depends on what other hypotheses are considered in the “false” case.
One typically works with some limited ensemble of possible hypotheses
Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional “something not on this list” hypothesis that covers everything else
You appear to be thinking in terms of ad-hoc statistical techniques (“computable”, “one typically works...”), rather than fundamental laws governing belief. But the latter is what we’re interested in in this context: we want to know what’s true and how to think, not what we can publish and how to write it up.
Let me put it this way: excluding a hypothesis from the model space is merely the special case of setting its prior to zero. Whether a given piece of evidence counts for or against a hypothesis is in fact dependent on the priors of all other hypotheses, even if no hypothesis goes from possible to not or vice-versa.
As this is prior dependent, there is no objective measure of whether a hypothesis is supported or rejected by evidence.
(This is obviously true when we look at P(H_i|e). It’s a bit less so when we look at P(e|H) vs P(e|~H). This seems objective. It is objective in the case that H and ~H are atomic hypotheses with a well-defined rule for getting P(e|~H). But if ~H is an or of “all the other theories”, then P(e|~H) is dependent on the prior probabilities for each of the H_i that are the subcomponents of ~H. It’s also utterly useless by itself for judging H. We want to know P(H|e) for that. P(e|H) is of course why we want P(H), so we can make useful predictions.)
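To make the prior-dependence concrete, here is a small sketch (the numbers and the two-way split of ~H are invented purely for illustration): because P(e|~H) is a prior-weighted average of the P(e|H_i), the very same observation e can count either for or against H depending on how the prior mass inside ~H is distributed.

```python
# Toy illustration (invented numbers): whether e "supports" H depends on how
# the prior mass inside ~H is split among its sub-hypotheses H_i.

def p_e_given_not_h(p_e_given_sub, priors_sub):
    """P(e|~H) = sum_i P(e|H_i) P(H_i) / P(~H), where P(~H) = sum_i P(H_i)."""
    p_not_h = sum(priors_sub)
    return sum(pe * p for pe, p in zip(p_e_given_sub, priors_sub)) / p_not_h

p_e_given_h = 0.5            # assumed likelihood of observation e under H
p_e_given_sub = [0.9, 0.1]   # assumed likelihoods under the two sub-hypotheses of ~H

# Same total prior mass (0.8) on ~H, split two different ways:
for priors_sub in ([0.7, 0.1], [0.1, 0.7]):
    p_e_not_h = p_e_given_not_h(p_e_given_sub, priors_sub)
    ratio = p_e_given_h / p_e_not_h
    verdict = "evidence for H" if ratio > 1 else "evidence against H"
    print(f"priors over ~H = {priors_sub}: P(e|~H) = {p_e_not_h:.2f}, "
          f"P(e|H)/P(e|~H) = {ratio:.2f} -> {verdict}")
```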
It is true that in the long run much evidence will eventually dominate any prior. But summarizing this as “log odds”, for instance, is only useful for talking about comparing two specific hypotheses, not “this hypothesis” and “everything else”.
But I still have objections to most of what you say.
You’ve given an essentially operational definition of “evidence for” in terms of operations that can’t be done.
Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional “something not on this list” hypothesis that covers everything else.
Yes. The standard way to express that is that you can’t actually work with P(Hypothesis), only P(Hypothesis | Model Space).
You can then, of course, expand your model space if you find your model space is inadequate.
You appear to be thinking in terms of ad-hoc statistical techniques (“computable”,
“Computable” is hardly ad-hoc. It’s a fundamental restriction on how it is possible to reason.
we want to know what’s true and how to think,
If you want to know how to think, you had better pick a method that’s actually possible.
This really is just another facet of “all Bayesian probabilities are conditional.”
Whether a given piece of evidence counts for or against a hypothesis is in fact dependent on the priors of all other hypotheses
Yes, of course. The point is that if you’re using probability theory to actually reason, and not merely to set up a toy statistical model such as might appear in a scientific paper, you will in fact already be “considering” all possible hypotheses, not merely a small important-looking subset. Now it’s true that what you won’t be doing is enumerating every possible hypothesis on the most fine-grained level of description, and then computing the information-theoretic complexity of each one to determine its prior—since, as you point out, that’s computationally intractable. Instead, you’ll take your important-looking subset just as you would in the science paper, let’s say H1, H2, and H3, but then add to that another hypothesis H4, which represents the whole rest of hypothesis-space, or in other words “something I didn’t think of”/”my paradigm is wrong”/etc. And you have to assign a nonzero probability to H4.
Yes. The standard way to express that is that you can’t actually work with P(Hypothesis), only P(Hypothesis | Model Space).
No, see above. In science papers, “paradigm shifts” happen, and you “change your model space”. Not in abstract Bayesianism. In abstract Bayesianism, low-probability events happen, and you update accordingly. The result will look similar to “changing your model space”, because what happens is that when H4 turns out to be true (i.e. its probability is raised to something high), you then start to carve up the H4 region of hypothesis-space more finely and incorporate these “new” sub-hypotheses into your “important-looking subset”.
To return to the issue at hand in this thread, here’s what’s going on as I see it: physicists, acting as Bayesians, have assigned very low probability to psi being true given QFT, and they have assigned a very high probability to QFT. In so doing, they’ve already considered the possibility that psi may be consistent with QFT, and judged this possibility to be of near-negligible probability. That was done in the first step, where they said “P(psi|QFT) is small”. It doesn’t do to reply “well, their paradigm may be wrong”; yes, it may, but if you think the probability of that is higher than they do, then you have to confront their analysis. Sean Carroll’s post is a defense of the proposition that “P(psi|QFT) is small”; Jack’s comment is an assertion that “psi&QFT may be true”, which sounds like an assertion that “P(psi|QFT) is higher than Sean Carroll thinks it is”—in which case Jack would need to account somehow for Carroll being mistaken in his analysis.
“P(psi|QFT) is higher than Sean Carroll thinks it is”—in which case Jack would need to account somehow for Carroll being mistaken in his analysis.
This is basically my position. ETA: I may assign a high probability to “not all of the hypotheses that make up QFT are true”, a position I believe I can hold while not disputing the experimental evidence supporting QFT (though such evidence does decrease the probability of any part of QFT being wrong).
I don’t think Carroll’s analysis comes close to showing that P(psi|QFT) is 1 in a billion. He took one case, a psychokinesis claim that no one in parapsychology endorses, and showed how it was impossible given one interpretation of what the claim might mean. We can’t look at his analysis and take it as convincing evidence that the claims of parapsychologists aren’t consistent with QFT, since Carroll doesn’t once mention any of the claims made by parapsychologists!
Now there are some studies purporting to show psychokinesis (though they are less convincing than the precognition studies and actually might just be a kind of precognition). Even in these cases, no one in parapsychology thinks the perturbations are the result of EM or gravitational fields; Carroll pointing out that they can’t be shouldn’t result in us updating on anything.
I actually think a physicist might be able to write a convincing case for why the claims of parapsychologists can’t be right. I think there is a good chance I don’t grasp just how inconsistent these claims are with known physics—and that is one of the reasons why fraud/methodology problems/publication bias still dominate my probability space regarding parapsychology. But Carroll hasn’t come close to writing such a case. I think the reason you think he has is that you’re not familiar with a) the actual claims of parapsychologists or b) the various but inconclusive attempts to explain parapsychology results without contradicting the experimental evidence confirming QFT.
The worked example he provides is what physics would require to exist (a new force that is somehow of at least comparable strength to electromagnetism but that has somehow never been detected by experiments so sensitive that they would detect any new force more than a billionth the strength of gravity) for telekinesis to exist at all. And there are indeed parapsychologists who claim telekinesis is worth investigating.
It is not unreasonable for Carroll, having given a worked example of applying extremely well-understood physics to the question, to then expect parapsychologists to apply extremely well-understood physics to their other questions. His point (as he states in the article) is that they keep starting from an assumption that science knows nothing relevant to the questions parapsychologists are asking, rather than starting from an assumption that known science could be used to make testable, falsifiable predictions.
He doesn’t have to do the worked example for every phenomenon that parapsychology claims is worth serious investigation to make his point valid. Ignoring the existence of relevant known science is one reason parapsychology is a pseudoscience (a partial imitation) rather than science.
I could be wrong, but I think you added to this comment since I replied. Since all of my comments on the topic are getting downvoted without explanation I’ll be short.
And there are indeed parapsychologists who claim telekinesis is worth investigating.
But not spoon bending so much. In any case, being concerned about force fields is only worthwhile if you assume what is going on is a cause and effect, which many, maybe most, of the attempted explanations don’t.
This is really getting away from what Komponisto and I were talking about. I’m not really disputing the claim that parapsychology is a pseudo-science. I’m disputing the claim that Carroll’s analysis shows that the claims of parapsychology are fundamentally ruled out by current physics. I haven’t really thought about delineation issues regarding parapsychology.
His point is that they keep starting from an assumption that science knows nothing relevant to the questions parapsychologists are asking, rather than starting from an assumption that known science could be used to make testable, falsifiable predictions.
But he gives no evidence that parapsychologists start from this assumption. Plenty of parapsychologists know that no force fields produced by the brain could be responsible for the effects they think they’ve found. That’s sort of their point, actually.
There are lots of silly people in the field who think the results imply dualism, of course—but that’s precisely why it would be nice to have materialists tackle the questions.
There are no significant results from parapsychologists who are aware of physics. Instead, we have results from parapsychologists that claim statistical significance that have obviously defective experimental design and/or (usually and) turn out to be unreplicable.
That is, you describe sophisticated parapsychologists but the prominent results are from unsophisticated ones.
This isn’t someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a similar position to the uneducated is to rule out contrarian thought before any debate occurs. Humans are still confused enough about the world that there is room for change in our current understanding of physics. There are some pretty compelling results in parapsychology, much or all of which may be due to publication bias, methodological issues or fraud. But that isn’t obviously the case, and waving our hands and throwing out those words isn’t an explanation of the results. I’m going to try to make a post on this subject a priority now.
This isn’t someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a similar position to the uneducated is to rule out contrarian thought before any debate occurs.
Did you read the linked post by Sean Carroll? Parapsychologists aren’t condemned for holding a similar position to the uneducated; they’re condemned for holding a position blatantly inconsistent with quantum field theory on the strength of evidence much, much weaker than the evidence for quantum field theory. Citing a century’s worth of experimentally confirmed physical knowledge is far from hand-waving.
Humans are still confused enough about the world that there is room for change in our current understanding of physics
Again, this is explicitly addressed by Carroll. Physicists are not confused in the relevant regimes here. Strong evidence that certain highly precise models are correct has been obtained, and this constrains where we can reasonably expect future changes in our current understanding of physics.
Now, I’m not a physicist, so if I’m actually wrong about any of this, I’m willing to be corrected. But, as the saying goes, there is a time to confess ignorance, and a time to relinquish ignorance.
Physicists are not confused in the relevant regimes here.
We don’t know what the relevant regimes are here. Obviously human brains aren’t producing force fields that are bending spoons.
We have some experimental results. No one has any idea what they mean except it looks like something weird is happening. People are reacting to images they haven’t seen yet and we don’t have any good explanation for these results. Maybe it is fraud (with what motivation?), maybe there are methodological problems (but often no one can find any), maybe there is just publication bias (but it would have to be really high to explain the results in the precognition meta-analysis).
On the other hand, maybe our physics isn’t complete enough to explain what is going on. Maybe a complete understanding of consciousness would explain it. Maybe we’re in a simulation and our creators have added ad hoc rules that violate the laws of physics. Physics certainly rules out some explanations but Carroll certainly hasn’t shown that all but error/fraud/bias have been ruled out.
Btw, using spoon bending as the example and invoking Uri Geller is either ignorant or disingenuous of him (and I almost always love Sean Carroll). Parapsychologists more or less all recognize Geller as a fraud and an embarrassment and only the kookiest would claim that humans can bend spoons with their minds. Real parapsychological experiments are nothing like that.
I suspect it will be difficult to communicate why fraud, method error and publication bias are difficult explanations for me to accept if you aren’t familiar with the results of the field. I recommend Outside the Gates of Science if you haven’t read it yet.
It will actually be easy to communicate exactly what explanation there is for the events. Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years. He has had them do it perfectly methodologically soundly. Only now has he had a group that—through pure, random chance—happened to flip 53% heads and 47% tails. The number of students, the number of coins, the number of flips, all are large enough that this is an unlikely event—but he’s spent eight years trying to make it happen, and so happen it eventually has. Good for him!
The only problem with all of this is that the journals that we take to be sources of knowledge have this rule: anything more unlikely than x must have some explanation other than pure chance. This is true at first blush, but when somebody spends years trying to make pure chance spit out the result he wants, this rule fails badly. That is all that’s going on here.
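A rough simulation of the failure mode being described, with made-up batch sizes and thresholds (and assuming scipy is available). The point is only that checking for significance after every batch and stopping at the first p < 0.05 inflates the nominal 5% false-positive rate considerably:

```python
# Simulate a no-effect "precognition" experiment run in batches, checking significance
# after each batch and stopping as soon as p < 0.05. All parameters are invented.
import random
from scipy.stats import binomtest

def run_until_significant(batch_size=100, max_batches=80, p_hit=0.5):
    hits = trials = 0
    for _ in range(max_batches):
        hits += sum(random.random() < p_hit for _ in range(batch_size))
        trials += batch_size
        if binomtest(hits, trials, 0.5, alternative="greater").pvalue < 0.05:
            return True  # stop here and "publish"
    return False

false_positives = sum(run_until_significant() for _ in range(200))
print(false_positives / 200)  # substantially above the nominal 0.05
```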
Right, like I said, publication bias is a possibility. But in Honorton’s precognition meta-analysis the results were strong enough that, for them not to be significant, the ratio of unpublished studies averaging null results to published studies would have to be 46:1. That seems too high for me to be comfortable attributing everything to publication bias. It is this history of results, rather than Bem’s lone study, that troubles me.
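For readers unfamiliar with where ratios like 46:1 come from: the usual tool is Rosenthal’s fail-safe N, the number of unpublished null studies needed to drag a combined result below significance. The sketch below uses invented z-scores purely to show the shape of the calculation; it is not Honorton’s data.

```python
# Rosenthal's fail-safe N: how many unpublished null studies would be needed to
# make the combined result non-significant. The z-scores here are hypothetical.
from scipy.stats import norm

published_z = [2.1, 1.8, 0.4, 2.5, 1.2, 3.0, 0.9, 2.2]  # hypothetical study z-scores
k = len(published_z)
z_crit = norm.ppf(0.95)  # one-tailed alpha = 0.05

fail_safe_n = (sum(published_z) ** 2) / z_crit ** 2 - k
# Print the number of hidden null studies needed, and its ratio to published studies.
print(fail_safe_n, fail_safe_n / k)
```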
Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years.
The paper … is the culmination of eight years’ work by Daryl Bem of Cornell University in Ithaca, New York.
Volunteers were told that an erotic image was going to appear on a computer screen in one of two positions, and asked to guess in advance which position that would be. The image’s eventual position was selected at random, but volunteers guessed correctly 53.1 per cent of the time.
Why do we think this means early test groups weren’t included in the study? It just sounds like it took eight years to get the large sample size he wanted.
I think that it means that early test groups weren’t included because that is the easiest way to produce the results we’re seeing.
It just sounds like it took eight years to get the large sample size he wanted.
Why eight years? Did he decide that eight years ago, before beginning to collect data? Or did he run tests until he got the data he wanted, then check how long it had taken? I am reasonably certain that if he got p-value significant results 4 years into this study, he would have stopped the tests and published a paper, saying “I took 4 years to make sure the sample size was large enough.”
Looking at the actual study, it seems to include the results of quite a few different experiments. If he either excluded early tests or continued testing until he got the results he wanted, that would obviously make the study useless, but we can’t just assume that is what happened. Yes, it is likely relative to the likelihood of psi, but since finding out what happened isn’t that hard, it seems silly just to assume.
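As a side note on the sample-size question: here is a rough check (with hypothetical trial counts, since the study’s exact numbers aren’t quoted above) of how many trials a 53.1% hit rate needs before it starts to look statistically significant, which is consistent with a study like this taking years of sessions to accumulate.

```python
# How the significance of a 53.1% hit rate depends on the number of trials
# (trial counts below are hypothetical, not taken from Bem's paper).
from scipy.stats import binomtest

for n in (100, 500, 1000, 2000, 5000):
    hits = round(0.531 * n)
    p = binomtest(hits, n, 0.5, alternative="greater").pvalue
    print(n, hits, round(p, 4))
```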
For reasons like those you listed, and also out of some unverbalized frustration, in the last week I’ve been thinking pretty seriously whether I should leave LW and start hanging out somewhere else online. I’m not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.
What other places on the Net are there for someone like me? Hacker News and Reddit look like dumbed-down versions of LW, so let’s not talk about those. I solved a good bit of Project Euler once; the place is tremendously enjoyable but quite narrowly focused. The n-Category Cafe is, sadly, coming to a halt. Math Overflow looks wonderful and this question by Scott Aaronson nearly convinced me to drop everything and move there permanently. The Polymath blog is another fascinating place that is so high above LW that I feel completely underqualified to join. Unfortunately, none of these are really conducive to posting new results, and moving into academia IRL is not something I’d like to do (I’ve been there, thanks).
Any other links? Any advice? And please, please, nobody take this comment as a denigration of LW or a foot-stomping threat. I love you all.
My new blog “Azimuth” may not be mathy enough for you, but if you like the n-Category Cafe, it’s possible you may like this one too. It’s more focused on technology, environmental issues, and the future. Someday soon you’ll see an interview with Eliezer! And at some point we’ll probably get into decision theory as applied to real-world problems. We haven’t yet.
(I don’t think the n-Category Cafe is “coming to a halt”, just slowing down—my change in interests means I’m posting a lot less there, and Urs Schreiber is spending most of his time developing the nLab.)
The markup syntax here is a bit unusual and annoying—click the “Help” button at the bottom right of the edit window to get guidance on how to include hyperlinks. Unlike every other hyperlinking system, the text goes first and the URL second!
Make a top level post about the kind of thing you want to talk about. It doesn’t have to be an essay, it could just be a question (“Ask Less Wrong”) or a suggested topic of conversation.
I love your posts, so having seen this comment I’m going to try to write up my nascent sequence on memetic colds, aka sucker shoots, just for you. (And everyone.)
I’m not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.
Same for me. My interests are more similar to your interests than to classic LW themes. There are probably many others here in the same situation. But I hope that the list of classic LW themes is not set in stone. I think people like us should try to broaden the spectrum of LW. If this attempt fails, please send me the address of the new place where you hang out online. :) But I am optimistic.
“Leaving” LW is rather strong. Would that mean not posting? Not reading the posts, or the comments? Or just reading at a low enough frequency that you decouple your sense of identity from LW?
I’ve been trying to decide how best to pump new life into The Octagon section of the webcomic collective forum Koala Wallop. The Octagon started off when Dresden Codak was there, and became the place for intellectual discussion and debate. The density of math and computer-theoretic enthusiasts is an order of magnitude lower than here or the other places you mentioned, and those who know such stuff well are LW lurkers or posters too. There was an overkill of politics on The Octagon, the levels of expertise on subjects are all over the spectrum, and it’s been slowing down for a while, but I think a good push will revive it. The main thing is that it lives inside of a larger forum, which is a silly, fun sort of community. The subforum simply has a life of its own.
Not that I claim any ownership over it, but:
I’m going to try to more clearly brand it as “A friendly place to analytically discuss fantastic, strange or bizarre ideas.”
At least as far as math is concerned, people not in academia can publish papers. As for the Polymath blog, I’d actually estimate that you are at about the level of most Polymath contributors, although most of the impressive work there seems to be done by a small fraction of the people there.
I have no fetish for publishing papers or having an impressive CV or whatever. The important things, for me, are these: I want to have meaningful discussions about my areas of interest, and I want my results to be useful to somebody. I have received more than a fair share of “thank yous” here on LW for clearing up mathy stuff, but it feels like I could be more useful… somewhere.
I found this amusing because by those standards, cults are everywhere. For example, I run a professional Magic: The Gathering team and am pretty sure I’m not a cult leader. Although that does sound kind of neat. Observe:
Eileen Barker:
When events are close we spend a lot of time socially separate from others so as to develop and protect our research. On occasion ‘Magic colonies’ form for a few weeks. It’s not substantially less isolating than what SIAI does. Check.
I have imparted huge amounts of belief about a large subset of our world, albeit a smaller one than Eliezer is working on. Partial Check.
I make reasonably important decisions for my teammates (on the level of the Cryonics decision, if Cryonics isn’t worthwhile) and do what I need to do to make sure they follow them far more than they would without me. Check.
We identify other teams as ‘them’ reasonably often, and certain other groups are certainly viewed as the enemy. Check.
Nope, even fainter argument than Eliezer.
Again, yes, obviously.
Shirley Harrison:
I claim a special mission that I am uniquely qualified to fulfill. Not as important of one, but still. Check.
My writings count at least as much as the sequences. Check.
Not intentionally, but often new recruits have little idea what to expect. Check plus.
Totalitarian rules structure, and those who game too much often alienate friends and family. I’ve seen it many times, and far less of a cheat than saying that you’ll be alienated from them when they are all dead and you’re not because you got frozen. Check.
I make people believe what I want with the exact same techniques we use here. If anything, I’m willing to use slightly darker arts. Check.
We make the lower level people do the grunt work, sure. Check.
Based on some of the deals I’ve made, one looking to demonize could make a weak claim. Check plus.
Exclusivity. In spades. Check.
I’d also note that the exercise left to the reader is much harder, because the other checklists are far harder to fudge.
the “Sequences” are quite long and anyone who tries to argue is told to “read the Sequences”. Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.
I have to disagree that this “smugness” even remotely reaches the level that is characteristic of a cult.
As someone who has frequently expressed disagreement with the “doctrine” here,
I have occasionally encountered both reactions that you mention. But those sporadic reactions are not much of a barrier to criticism—any critic who persists here will eventually be engaged intelligently and respectfully, assuming that the critic tries to achieve a modicum of respect and intelligence on his own part. Furthermore, if the critic really engages with what his interlocutors here are saying, he will receive enough upvotes to more than repair the initial damage to his karma.
Yes. LessWrong is not in fact hidebound by groupthink. I have lots of disagreement with the standard LessWrong belief cluster, but I get upvotes if I bother to write well, explain my objections clearly and show with my reference links that I have some understanding of what I’m objecting to. So the moderation system—“vote up things you want more of”—works really well, and I like the comments here.
This has also helped me control my unfortunate case of asshole personality disorder elsewhere when I see someone being wrong on the Internet. It’s amazing what you can get away with if you show your references.
This would be easier to parse if you quoted the individual criteria you are evaluating right before the evaluation, eg:
1. A movement that separates itself from society, either geographically or socially;
It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI visiting fellows, and having meetups) is hardly cutting oneself off from others; however, there is certainly some tendency to cut ourselves off socially—note for example the many instances of folks worrying they will not be able to find a sufficiently “rationalist” significant other.
I think it would be more accurate to say that anyone who after reading the sequences still disagrees, but is unable to explain where they believe the sequences have gone wrong, is not worth arguing with.
With this qualification, it no longer seems like evidence of being cult.
What are the scenarios where someone unfamiliar with this website would hear about Roko’s deleted post?
I suppose it could be written about dramatically (because it was dramatic!) but I don’t think anyone is going to publish such an account. It was bad from the perspective of most LWers—a heuristic against censorship is a good heuristic.
This whole thing is ultimately a meta discussion about moderation policy. Why should this discussion about banned topics be that much more interesting than a post on Hacker News that is marked as dead? Hacker News generally doesn’t allow discussion of why stories were marked dead. The moderators are anonymous and have unquestioned authority.
If Less Wrong had a mark as dead function (on HN unregistered users don’t see dead stories, but registered users can opt-in to see them), I suspect Eliezer would have killed Roko’s post instead of deleting it to avoid the concerns of censorship, but no one has written that LW feature yet.
As a solid example of how much of a non-disaster this was for PR, I doubt that anyone at the Singularity Summit who isn’t a regular Less Wrong reader (the majority of attendees) has heard that Eliezer deleted a post. It’s just not the kind of thing that actually makes a PR disaster… honestly, if this were a PR issue it might be a net positive, because it would lead some people to hear of Less Wrong who otherwise never would have. Please don’t take that as a reason to make this a PR issue.
Eliezer succeeded in the sense that it is very unlikely that people in the future on Less Wrong are going to make stupid emotionally abhorrent posts about weird decision theory torture scenarios. He failed in that he could have handled the situation better.
If anyone would like to continue talking about Less Wrong moderation policy, the place to talk about it is the Meta Thread (though you’d probably want to make a new one (good for +[20,50] karma!) instead of discussing it in an out of season thread)
As someone who had over 20 points of karma obliterated for reasons I don’t fully understand, for having posted something which apparently strayed too close to a Roko post which I never read in its full version, I can attest that further and broader discussion of the moderation policy would be beneficial. I still don’t really know what happened. Of course I have vague theories, and I’ve received a terse and unhelpful response from EY (a link to a horror story about a “riddle” which kills—a good story which I simply don’t accept as a useful parable of reality), but nothing clear. I do not think that I have anything of outstanding value to offer this community, but I suspect that Roko, little I, and the half-dozen others like us who probably exist are a net loss to the community if driven away, especially if not being seen as cultlike is valuable.
As someone who had over 20 points of karma obliterated for reasons I don’t fully understand, for having posted something which apparently strayed too close to a Roko post which I never read in its full version, I can attest that further and broader discussion of the moderation policy would be beneficial.
I believe you lost 20 karma because you had 2 net downvotes on your post at the time it was deleted (and those votes still affect your total karma, although the post cannot be further upvoted or downvoted). The loss of karma did not result directly from the deletion of the post, except for the fact that the deletion froze the post’s karma at the level it was at when it was deleted.
I only looked briefly at your post, don’t remember very much about it, and am only one reader here, but from what I recall, your post did not seem so obviously good that it would have recovered from those two downvotes. Indeed, my impression is that it’s more probable that if the post had been left up longer, it would have been even more severely downvoted than it was at the time of deletion, as is the case with many people’s first posts. I’m not very confident about that, but there certainly would have been that risk.
All that being said, I can understand if you would rather have taken the risk of an even greater hit to karma if it would have meant that people were able to read and comment on your post. I can also sympathize with your desire for a clearer moderation policy, although unless EY chose to participate in the discussion, I don’t think clearer standards would emerge, because it’s ultimately EY’s call whether to delete a post or comment. (I think there are a couple others with moderation powers, but it’s my understanding that they would not independently delete a non-troll/spam post).
I think it was 30 karma points (3 net downvotes), though I’m not sure. And I believe that it is entirely possible that some of those downvotes (more than 3, because I had at least 3 upvotes) were for alleged danger, not for lack of quality. Most importantly, if the post hadn’t been deleted, I could have read the comments which presumably would have given me some indication of the reason for those downvotes.
By the way, I have a slightly disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality, as the blog’s header proudly states; instead the posts often discuss a relatively narrow list of topics which are only tangentially related to rationality, e.g. cryonics, AI stuff, evolutionary psychology, Newcomb-like scenarios.
Part of that mission is to help people overcome the absurdity heuristic, and to help them think carefully about topics that normally trigger a knee-jerk reflex of dismissal on spurious grounds; it is in this sense that cryonics and the like are more than tangentially related to rationality.
I do agree with you that too much of the newer material keeps returning to those few habitual topics that are “superstimuli” for the heuristic. This perhaps prevents us from reaching out to newer people as effectively as we could. (Then again, as LW regulars we are biased in that we mostly look at what gets posted, when what may matter more for attracting and keeping new readers is what gets promoted.)
A site like YouAreNotSoSmart may be more effective in introducing these ideas to newcomers, to the extent that it mostly deals with run-of-the-mill topics. What makes LW valuable which YANSS lacks is constructive advice for becoming less wrong.
Thanks for the link; I hadn’t known about YANSS.
As for overcoming the absurdity heuristic, it would be more helpful to illustrate its inappropriateness (is that a real word?) using thoughts which seem absurd while having a lot of data proving them right, rather than predictions like the Singularity, which are mostly based on … just different heuristics.
Agreed.
One good sign here is that LW, unlike most other non-mainstream organizations, doesn’t really function like a cult. Once one person starts being critical, critics start coming out of the woodwork. I have my doubts about this place sometimes too, but it has a high density of knowledgeable and open-minded people, and I think it has a better chance than anyone of actually acknowledging and benefiting from criticism.
I’ve tended to overlook the weirder stuff around here, like the Roko feud—it got filed under “That’s confusing and doesn’t make sense” rather than “That’s an outrage.” But maybe it would be more constructive to change that attitude.
Singularitarianism, transhumanism, cryonics, etc. probably qualify as cults under at least some of the meanings of the term: http://en.wikipedia.org/wiki/Cult Cults do not necessarily lack critics.
The wikipedia page on Cult Checklists includes seven independent sets of criteria for cult classification, provided by anti-cult activists who have strong incentives to cast as wide a net as possible. Singularitarianism, transhumanism, and cryonics fit none of those lists. In most cases, it isn’t even close.
I disagree with your assessment. Let’s just look at Lw for starters.
Eileen Barker:
It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI visiting fellows, and having meetups) is hardly cutting oneself off from others; however, there is certainly some tendency to cut ourselves off socially—note for example the many instances of folks worrying they will not be able to find a sufficiently “rationalist” significant other.
Huge portions of the views of reality of many people here have been shaped by this community, and Eliezer’s posts in particular; many of those people cannot understand the math or argumentation involved but trust Eliezer’s conclusions nonetheless.
Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.
Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.
Nope. Though some would credit Eliezer with trying to become or create God.
Obviously. Less Wrong is quite focused on rationality (though that should not be odd) and Eliezer is rather… driven in his own overarching goal.
Based on that, I think Eileen Barker’s list would have us believe Lw is a likely cult.
Shirley Harrison:
I’m not sure if ‘from above’ qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.
While ‘revealed’ is not necessarily accurate in some senses, the “Sequences” are quite long and anyone who tries to argue is told to “read the Sequences”. Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.
Nope
Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.
This one is questionable. But surely Eliezer is trying the advanced technique of sharing part of his power so that we will begin to see the world the way he does.
There is volunteer effort at Lw, and posts on Lw are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.
No sign of this
“Exclusivity - ‘we are right and everyone else is wrong’”. Very yes.
Based on that, I think Shirley Harrison’s list would have us believe Lw is a likely cult.
Similar analysis using the other lists is left as an exercise for the reader.
On Eileen Barker:
I believe that most LW posters are not signed up for cryonics (myself included), and there is substantial disagreement about whether it’s a good idea. And that disagreement has been well received by the “cult”, judging by the karma scores involved.
Theism has been discussed. It is wrong. But Robert Aumann’s work is still considered very important; theists are hardly dismissed as “satanic,” to use Barker’s word.
Of Barker’s criteria, 2-4 of 6 apply to the LessWrong community, and only one (“Leaders and movements who are unequivocally focused on achieving a certain goal”) applies strongly.
On Shirley Harrison:
I can’t speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.
No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.
What you describe is a preposterous exaggeration, not “[t]otalitarianism and alienation of members from their families and/or friends.”
Any person who promotes a charity at which they work is pushing a cult, by this interpretation. Eliezer isn’t “lining his own pockets”; if someone digs up the numbers, I’ll donate $50 to a charity of your choice if it turns out that SIAI pays him a salary disproportionally greater (2 sigmas?) than the average for researchers at comparable non-profits.
So that’s 2-6 of Harrison’s checklist items for LessWrong, none of them particularly strong.
My filters would drop LessWrong in the “probably not a cult” category, based off of those two standards.
Eliezer was compensated $88,610 in 2008 according to the Form 990 filed with the IRS and which I downloaded from GuideStar.
Wikipedia tells me that the median 2009 income in Redwood, where Eliezer lives, is $69,000.
(If you are curious, Tyler Emerson in Sunnyvale (median income 88.2k) makes 60k; Susan Fonseca-Klein also in Redwood was paid 37k. Total employee expenses is 200k, but the three salaries are 185k; I don’t know what accounts for the difference. The form doesn’t seem to say.)
What exactly are Eliezer’s qualifications supposed to be?
You mean, “What are Eliezer’s qualifications?” Phrasing it that way makes it sound like a rhetorical attack rather than a question.
To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.
I’m definitely not trying to attack anyone (and you’re right my comment could be read that way). But I’m also not just curious. I figured this was the answer. Lots of time spent thinking, writing and producing influential publications on FAI is about all the qualifications one can reasonably expect (producing a provable mathematical formalization of friendliness is the kind of thing no one is qualified to do before they do it and the AI field in general is relatively new and small). And Eliezer is obviously a really smart guy. He’s probably even the most likely person to solve it. But the effort to address the friendliness issue seems way too focused on him and the people around him. You shouldn’t expect any one person to solve a Hard problem. Insight isn’t that predictable especially when no one in the field has solved comparable problems before. Maybe Einstein was the best bet to formulate a unified field theory but a) he never did and b) he had actually had comparable insights in the past. Part of the focus on Eliezer is just an institutional and financial thing, but he and a lot of people here seem to encourage this state of affairs.
No one looks at open problems in other fields this way.
Yes, the situation isn’t normal or good. But this isn’t a balanced comparison, since we don’t currently have a field, too few people understand the problem and had seriously thought about it. This gradually changes, and I expect will be visibly less of a problem in another 10 years.
I may have an incorrect impression, but SIAI or at least Eliezer’s department seems to have a self-image comparable to the Manhattan project rather than early pioneers of a scientific field.
Eliezer’s past remarks seem to have pointed to a self-image comparable to the Manhattan project. However, according to the new SIAI Overview:
They want to become comparable to the Manhattan project, in part by recruiting additional FAI researchers. They do not claim to be at that stage now.
Eliezer has said: “I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me.” Your call as to whether you believe that. (The rest of that post, and some of his other posts in that discussion, address some points similar to those that you raised.)
That said, “self-image comparable to the Manhattan project” is an unusually generous ascription of humility to SIAI and Eliezer. :P
I haven’t seen any proof of his math skills that would justify this statement. By what evidence have you arrived at the conclusion that he can do it at all, or even approach it? The sequences and the SIAI publications certainly show that he was able to compile a bunch of existing ideas into a coherent framework of rationality, yet there is not much novelty to be found anywhere.
Which statement are you talking about? Saying someone is the most likely person to do something is not the same as saying they are likely to do it. You haven’t said anything in this comment than I disagree with so I don’t understand what we’re disputing.
Great comment.
How influential are his publications if they could not convince Ben Goertzel (SIAI/AGI researcher), someone who has read Yudkowsky’s publications and all of the LW sequences? You could argue that he and other people don’t have the smarts to grasp Yudkowsky’s arguments, but who does? Either Yudkowsky is so smart that some academics are unable to appreciate his work or there is another problem. How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?
The problem here is that telling someone that Yudkowsky spent a lot of time thinking and writing about something is not a qualification. Further it does not guarantee that he would acknowledge and welcome the contributions of others who disagree.
The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn’t have to be free of people who disagree with it to be influential, and it doesn’t even have to be correct.
Level up first. I can’t evaluate physics research, so I just accept that I can’t tell which of it is correct; I don’t try to figure it out from the politics of physicists arguing with each other, because that doesn’t work.
But what does this mean regarding my support of the SIAI? Imagine I were a politician who had no time to level up first but who had to decide whether some particle accelerator or AGI project should be financed at all, or should go ahead with full support and without further security measures.
Would you tell a politician to go and read the sequences and, if after reading the publications they don’t see why AGI research is as dangerous as portrayed by the SIAI, they should just forget about it and stop trying to figure out what to do? Or do you simply tell them to trust a fringe group which does predict that a given particle accelerator might destroy the world when all experts claim there is no risk?
You talked about Yudkowsky’s influential publications. I thought you meant some academic papers, not the LW sequences. They indeed influenced some people, yet I don’t think they influenced the right people.
Downvoted for this:
Your interpretation seems uncharitable. I find it unlikely that you have enough information to make a confident judgment that XiXiDu’s comment is born of motivated cognition to a greater extent than your own comments.
Moreover, I believe that even when such statements are true, one should avoid making them when possible as they’re easily construed as personal attacks which tend to spawn an emotional reaction in one’s conversation partners pushing them into an Arguments as soldiers mode which is detrimental to rational discourse.
Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you’re going wrong, you won’t improve.
On this blog, any conversational partners should definitely not be construing anything as personal attacks.
On this blog, any person should definitely be resisting this push.
I did not say that one should avoid telling people when and where they’re going wrong. I was objecting to the practice of questioning people’s motivations. For the most part I don’t think that questioning somebody’s motivations is helpful to him or her.
I disagree. Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise. Just because the blog is labeled as being devoted to the art of refining rationality doesn’t mean that the commentators are always above this sort of thing.
I agree with you insofar as I think that one should work to interpret comments charitably.
I agree, but this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.
Not questioning their motivations; you objected to the practice of pointing out motivated cognition:
Pointing out that someone hasn’t thought through the issue because they are motivated not to—this is not an attack on their motivations; it is an attack on their not having thought through the issue. Allowing people to keep their motivated cognitions out of respect for their motivations is wrong, because it doesn’t let them know that they have something wrong, and they miss a chance to improve it.
To paraphrase steven, if you’re interested in winning disputes you should dismiss personal attacks, but if you’re interested in the truth you should dig through their personal attacks for any possible actual arguments. Whether or not it’s a personal attack, you ought to construe it as if it is not, in order to maximise your chances of finding truth.
Agreed. I think the first two parts of our comments address whether one should exert such a push. I think you’re right, and this whole third part of our discussion is irrelevant.
It’s quite possible to be inaccurate about other people’s motivations, and if you are, then they will have another reason to dismiss your argument.
How do you identify motivated cognition in other people?
Not thinking something through could be habitual sloppiness, repeating what one has heard many times, or not thinking that a question is worthy of much mental energy rather than a strong desire for a particular conclusion. (Not intended as a complete list.)
Making a highly specific deduction from an absence rather than a presence strikes me as especially likely to go wrong.
Some of the same ways I see it in myself. Specifically, when dealing with others:
Opposed to easy (especially quick or instant) tests: strong evidence of motivated stopping.
All for difficult (especially currently-impossible) tests: moderate evidence of motivated continuing.
Waiting on results of specific test to reconsider or take a position: moderate evidence of motivated continuing.
Seemingly-obvious third alternative: very strong evidence of motivated stopping. Caveat! this one is problematic. It is very possible to miss third alternatives.
Opposed to plausible third alternatives: weak evidence of motivated stopping—strong evidence with a caveat and split, as “arguments as soldiers” can also produce this effect. Mild caveat on plausibility being somewhat subjective.
In the case of XiXiDu’s comment, focusing on Ben Goertzel’s rejection is an example of waiting on results from a specific test. That is enough evidence to locate the motivated continuing hypothesis¹ - ie, that XiXiDu does not want to accept the current best-or-accepted-by-the-community answer.
The questions XiXiDu posed afterwards seem to have obvious alternative answers, which suggests motivated stopping. He seems to be stopping on “Something’s fishy about Eliezer’s setup”.
¹: As well as “Goertzel is significantly ahead of the AI development curve” and “AGI research and development is a field with rigid formal rules on what does and doesn’t convince people”—the first is easily tested by looking at Ben’s other views, and the second is refuted by many researchers in that field.
I recommend explaining that sort of thing when you say someone is engaging in motivated cognition.
I think it seems more like a discussable matter then and less like an insult.
Thanks for engaging with me; I now better understand where jimrandomh might have been coming from. I fully agree with Nancy Lebovitz here.
Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer’s stance.
For what it’s worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He’s also responsible for coining “Seed AI”.
Indeed, I was just trying to figure out what someone with money or power who wants to do the right thing, but who does not have the smarts, should do. Someone like a politician or billionaire who would like to support either some AGI research or the SIAI. How are they going to decide what to do if all AGI experts tell them that there is no risk from AGI research and that the SIAI is a cult, while at the same time the SIAI tells them the AGI experts are intellectually impotent and the SIAI is the only hope for humanity to survive the AI revolution? What should someone who does not have the expertise or smarts to evaluate those claims, but who nevertheless has to decide how to use his power, do? I believe this is not an unrealistic scenario, as many rich or powerful people want to do the right thing, yet do not have the smarts to see why they should trust Yudkowsky instead of hundreds of experts.
Interesting: when did he come up with the concept of “Seed AI”? Because it is mentioned in Karl Schroeder’s Ventus (Tor Books, 2000), ISBN 978-0312871970.
Didn’t find the phrase “Seed AI” there. One plot element is a “resurrection seed”, which is created by an existing, mature evil AI to grow itself back together in case its main manifestation is destroyed. A Seed AI is a different concept: it’s something the pre-AI engineers put together that grows into a superhuman AI by rewriting itself to be more and more powerful. A Seed AI is specifically a method to get to AGI from not having one, not just an AI that grows from a seed-like thing. I don’t remember recursive self-improvement being mentioned with the seed in Ventus.
A precursor concept, where the initial AI bootstraps itself by merely learning things, not necessarily by rewriting its own architecture, goes all the way back to Alan Turing’s 1950 paper on machine intelligence.
Here is a quote from Ventus:
[...]
...and here’s a quote from I.J. Good, from 1965:
He didn’t coin the term “Seed AI” either.
Yes, but I believe it is a bit weird for a Wikipedia article to state that someone is the originator of the Seed AI theory when he just coined the term. I wasn’t disputing anything, just trying to figure out if it is actually the case that Yudkowsky came up with the concept in the first place.
Not the concept—the term.
“Seed AI theory” probably refers to something or another in here—which did indeed originate with Yu’El.
Presumably http://en.wikipedia.org/wiki/Seed_AI should be considered to be largely SIAI marketing material.
It is further explained that the Winds were designed to evolve on their own so they are not mere puppets of human intentions but possess their own intrinsic architecture.
In other places in the book it is explained how humans did not create their AI Gods but that they evolved themselves from seeds designed by humans.
The Winds are seed AI, in the sense provided by Yudkowsky.
ETA
Well, of course I just tried to figure out whether Yudkowsky invented cheesecake and not just some special recipe of cheesecake.
I don’t think the failure of someone to be convinced of some position is ever strong evidence against that position. But this argument here is genuinely terrible. I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?
If someone is unable to examine the available evidence and come to a sane conclusion on a particular topic, this makes it less likely that they are able to examine the available evidence and come to sane conclusions on other topics.
I don’t take Goertzel seriously for the same reason I don’t take young earth creationists seriously. It’s not that I disagree with him, it’s that his beliefs have almost no connection to reality.
(If it makes you feel better, I have read some of Goertzel’s writing on AGI, and it’s stuffed full of magical thinking.)
I’d be interested to hear more about that.
From Ten Years to a Positive Singularity:
and
From The Singularity Institute’s Scary Idea (And Why I Don’t Buy It):
From Chance and Consciousness:
And pretty much all of On the Algebraic Structure of Consciousness and Evolutionary Quantum Computation.
This is all just from fifteen minutes of looking around his website. I’m amazed anyone takes him seriously.
Oh...
wow.
I think that paper alone proves your point quite nicely.
I mostly disagree with Ben, but I don’t think judging him based on that paper is fair. It’s pretty bad, but it was also written in 1996. Fourteen years is a lot of time to improve as a thinker.
I had that thought too, and I was thinking of retracting or amending my comment to that effect, but looking at some of his later publications in the same journal(?) suggests that he hasn’t leveled up much since then.
“The Futility Of Emergence” really annoys me. It’s a perfectly useful word. It’s a statement about the map rather than about the territory, but it’s a useful one. Whereas magic means “unknowable unknowns”, emergent means “known unknowns”—the stuff that we know follows, we just don’t know how.
e.g. Chemistry is an emergent property of the Schrodinger equation, but calculating anything useful from that is barely in our grasp. So we just go with the abstraction we know, and they’re separate sciences. But we do know we have that work to do.
Just linking to that essay every time someone you’re disagreeing with says “emergent” is difficult to distinguish from applause lights.
Saying the word “emergent” adds nothing. You’re right that it’s not as bad as calling something magic and declaring that it’s inherently unknowable, but it also offers zero explanatory power. To reword your example:
There is absolutely no difference in meaning when you take the word “emergent” out. That’s why it isn’t useful, which Eliezer was pointing out.
Nitpick: I don’t think that is exactly what EY was pointing out. Take a look at the comments and the general response of “Huh? Who makes that mistake?” It seems EY was complaining about the tendency of AGI researchers to use “emergence” as if it were an explanation, not ordinary use of the word that doesn’t pretend it is one but just, say, points out that the behavior is surprising given what it’s composed of, or that your current methods aren’t powerful enough to predict the consequences. He didn’t seem to have realized that particular mistake was mostly localized to AGI people.
It seems more likely that when the cited people said “intelligence is an emergent phenomenon”, they were misunderstood as proposing that as a satisfactory explanation of the phenomenon.
Nitpick accepted.
I’m not entirely sure this is correct. I wouldn’t call the trajectories of planets and galaxies “properties” of Relativity, but I would call them emergent behavior due to Relativity. It’s a stylistic and grammatical choice, like when to use “which” and when to use “that.” They may seem the same to the uninitiated, but there’s a difference and the initiated can tell when you’re doing it wrong.
So, I agree with David Gerard that trying to eradicate the use of the word is misplaced. It’d be like saying “the word ‘which’ is obsolete, we’re only going to use ‘that’ and look down on anyone still using ‘which’.” You lose far more by such a policy than you gain.
IIRC, that post was adequately dismantled in its comments.
From what I’ve seen, the people who comment here who have read Broderick’s book have come away, if not convinced psi describes some real physical phenomena, convinced that the case isn’t at all open and shut the way young earth creationism is. When an issue is such that smart, sane people can disagree, then you have to actually resolve the object-level disagreement before you can use someone’s beliefs on the issue in a general argument about their rationality. You can’t just assume it as you do here.
Yes, here WrongBot is safe to assume basic physics.
Edit for the sake of technical completeness: And biology.
Goertzel’s paper on the subject is about extending the de Broglie Bohm pilot wave theory in a way that accounts for psi while being totally consistent with all known physics. Maybe it is nonsense, I haven’t read it. But you can’t assume it is.
I disagree. I do not need to (and should not) discard my priors when evaluating claims.
It would be an error in reasoning on my part if I did not account for the low prior (to reading it) probability of a psionics theory being sane when evaluating the proponent’s other claims. For emphasis: not lowering my confidence in Goertzel’s other beliefs because he is a proponent of psi, without me having read his paper, would be an outright mistake.
I also note that you defending Goertzel on the psi point is evidence against Goertzel’s beliefs regarding AI. Extremely weak evidence.
Huh?
I mean what is written in the straightforward English sense. I mention it to emphasize that all evidence counts.
Could you unpack your reasoning? Do you mean that Jack defending Goertzel on psi discredits defense of Goertzel on AI because it shows such defense to be less correlated to the validity of the opinion than previously thought? Or did you drop a negation or something and mean the opposite of what you wrote, because Jack defending Goertzel on psi is very slight evidence of Goertzel’s opinion on psi not being as crazy as you previously thought?
Ever is a strong word. If a competent expert in a field who has a known tendency to err slightly on the side of too much openness to the cutting edge fails to be convinced by a new finding within his field that says an awful lot.
That is simply not the form of the argument you quote. “Ben Goertzel believes in psychic phenomena” cannot be represented as “I disagree with person x”.
I’m being generous and giving the original comment credit for an implicit premise. As stated, the argument is “Person x believes y, therefore person x is wrong about z.” This is so obviously wrong it makes my head hurt. WrongBot’s point is that someone has to have a poor reasoning capacity to believe in psi. But since he didn’t provide any evidence to that effect, it reduces to ‘I disagree with Goertzel about psi’.
Fair point re: “ever”.
I generally don’t try to provide evidence for every single thing I say, and I am especially lax about things that I consider to be incredibly obvious.
But I’m annoyed enough to lay out a very brief summary of why belief in PSI is ludicrous:
It isn’t permitted by known physics.
There are no suggested mechanisms (so far as I’m aware) for PSI which do not contradict proven physical laws.
The most credible studies which claim to demonstrate PSI have tiny effect sizes, and those haven’t been replicated with larger sample sizes.
Publication bias.
PSI researchers often seem to possess motivated cognition.
We’ve analyzed the functioning of individual neurons pretty closely. If there are quantum microtubules or other pseudoscientific nonsense in them, they don’t seem to affect how those individual neurons behave.
Etc.
No one has to give evidence for everything they say, but when things that you thought were obviously wrong begin to get defended by physics-literate reductionist materialists, that seems like a good time to lower your confidence.
Well, to begin with, Goertzel’s paper claims to be such a mechanism. Have you read it? I don’t know if it works or not. Seems unwise to assume it doesn’t, though.
Publication bias, motivated cognition, and effect size are all concerns, and they were my previous explanation. I found that this meta-analysis upset that view for me.
Oh man! I left out the most important objection!
If PSI exploits weird physics in a complicated manner and produces such tiny effects, where the hell did the mechanism come from? PSI would obviously be a very useful adaptation, so why don’t we see it in other species? Why aren’t the effects stronger, since there’s such a strong evolutionary pressure in favor of them?
Goertzel’s paper also includes psychokinesis as a PSI phenomenon supported by strong evidence. I would love to see the study he’s talking about for that one. Or a video.
All of this is also discussed in Outside the Gates. I can try to dig up what he said this weekend.
The experiments aren’t macroscopic. The results involve statistical deviations from the expected normal distributions of, say, white-noise generators when participants try to will the results in different directions. I don’t think these results are nearly as compelling as other things; see Jahn and Dunne 2005, for example. They had some methodological issues, and the one attempt that was made at replication, while positive, wasn’t significant at anywhere near the level of the original.
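For readers unfamiliar with that kind of setup, here is a minimal sketch (Python, with invented numbers, not taken from any of the cited studies) of the basic calculation behind such RNG experiments: count the 1s in a stream of supposedly fair bits and standardize the deviation from 50/50.

```python
import math
import random

# One hypothetical session: N random bits generated while a participant "wills" more 1s.
# The generator here is genuinely fair, so any deviation from 50/50 is pure chance.
random.seed(0)
N = 10_000
ones = sum(random.getrandbits(1) for _ in range(N))

# Standardized deviation from the expected N/2 count; under the normal approximation,
# |z| > ~2 is the usual "significant at p < 0.05" threshold.
z = (ones - N / 2) / math.sqrt(N / 4)
print(ones, round(z, 2))
```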
If you’re actually interested you should consider checking out the book. It is a quick, inexpensive read. Put it this way: I’m not some troll who showed up here to argue about parapsychology. Six months ago I was arguing your position here with someone else and they convinced me to check out the book. I then updated significantly in the direction favoring psi (not enough to say it exists more likely than not, though). Everything you’ve said is exactly what I was saying before. It turns out that there are sound responses to a lot of the obvious objections, making the issue not nearly as clear cut as I thought.
It would be wrong if it were a logical deduction instead of an inference. That is, if WrongBot had actually written ‘therefore’ or otherwise signaled absolute deductive certainty, then he would be mistaken. As is, he presents it as evidence, which it in fact is.
There is a clear implied premise: ‘psychic phenomena are well known to be bullshit’. Not all baseline premises must be supported in an argument. Instead, the argument should be considered stronger or weaker depending on how reliable the premises are. I don’t think WrongBot loses too much credibility in this case by dismissing psychic phenomena.
It isn’t even evidence until you include a premise about the likelihood of y, which we agree is the implied premise.
I think I’m just restating the exchange I had with komponisto on this point. Goertzel’s position isn’t that of someone who doesn’t know any physics or Enlightenment-style rationality. It is clearly a contrarian position, which should be treated rather differently, since we can assume he is familiar with the reasons why psychic phenomena are ‘well known to be bullshit’. It is a fully generalizable tactic which can be used against any and all contrarian thinkers. Try “Robin Hanson thinks we should cut health care spending 50%, therefore he is less likely to be right about fertility rates.”
This is obviously going to be the case when trying to convince an individual of something. The beliefs (crackpot or otherwise) of the target audience are always going to be relevant to persuasiveness. As a comment directed in part at the wider LessWrong audience, the assumed premises will be different.
If I were a reader who thought Robin’s position on health care was as implausible as belief in magic, and thought that making claims about fertility rates was similar to making claims about AI strategy, then I would take this seriously. As it stands, the analogy is completely irrelevant.
The extent to which it is fallacious depends rather strongly on what y and z (and even x) are, it seems to me.
Any argument of this nature needs to include some explanation of why someone’s ability to think about y is linked to their ability to think about z. But even with that (which wasn’t included in the comment) you can only conclude that y and z imply each other. You can’t just conclude z.
In other words, you have to show Goertzel is wrong about psychic phenomena before you can show that his belief in them is indicative of reasoning flaws elsewhere.
I don’t disagree in principle, but psychic phenomena are pretty much fundamentally ruled out by current physics. So a person’s belief in them raises serious doubts about that person’s understanding of science at the very least, if not their general rationality level.
I got the impression from Damien Broderick’s book that a lot of PSI researchers do understand physics and aren’t postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use. There’s a story that Einstein was interested in PSI research, but declared it nonsense when the claimed results showed PSI effects that weren’t subject to the inverse square law, so this isn’t a new idea.
Damien Broderick’s attitude in his book is basically that there’s a bunch of anomalous observations and that neither a satisfactory explanation nor, in his opinion, a refutation for them exists. Goertzel’s attitude is to come up with a highly speculative physical theory that could explain that kind of phenomena, and which would take a bit more than “would need extra particles” to show to be nonsense.
“Not understanding basic physics” doesn’t really seem to cut it in either case. “It’s been looked into by lots of people, a few of them very smart, for 80 years, and nothing conclusive has come out of it, so most likely there isn’t anything in it, and if you still want to have a go, you better start with something the smart people in 1970s didn’t have” is basically the one I’ve got.
I’m not holding my breath over the recent Bem results, since he seems to be doing pretty much the same stuff that was done in the 70s and always ended up failing one way or the other, but I’m still waiting for someone more physics-literate to have a go at Goertzel’s pilot wave paper.
“Not understanding basic physics” sounds like a harsh quasi-social criticism, like “failing at high-school material”. But that’s not exactly what’s meant here. Rather, what’s meant is more like “not being aware of how strong the evidence against psi from 20th-century physics research is”.
The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H [EDIT: technically, this is not necessarily true, but it usually is in practice, and becomes more likely as P(H|M) approaches 0]. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.
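To spell out the quoted Bayesian point a bit, here is one way to make it precise. This is my sketch, under an assumption the comment leaves implicit: that e bears on H only through M, i.e. P(H|M,e) = P(H|M) and P(H|¬M,e) = P(H|¬M).

```latex
\begin{align*}
P(H \mid e) - P(H)
  &= P(H \mid M)\,\bigl[P(M \mid e) - P(M)\bigr]
   + P(H \mid \neg M)\,\bigl[P(\neg M \mid e) - P(\neg M)\bigr] \\
  &= \bigl[P(M \mid e) - P(M)\bigr]\,\bigl[P(H \mid M) - P(H \mid \neg M)\bigr].
\end{align*}
```

So if e is evidence for M (the first factor is positive) and P(H|M) is lower than P(H|¬M), then e lowers P(H); the bracketed EDIT in the comment covers the cases where that screening-off assumption fails.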
Evidence distinguishes between models rather than counting for individual models in isolation. There may be models that are consistent with the experiments that confirm quantum field theory but that also give rise to explanations for anomalous cognition.
By the Bayesian definition of evidence, “evidence for” a hypothesis (including a “model”, which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.
Carroll claims that current data implies the probability of such models being correct is near zero. So I’d like to invoke Aumann here and ask what your explanation for the disagreement is. Where is Carroll’s (and others’) mistake?
If models are just complex conjunctions of hypotheses then the evidence that confirms models will often confirm some parts of the model more than others. Thus the evidence does little to distinguish the model from a different model which incorporates slightly different hypotheses.
That is all I meant.
Yes, but this depends on what other hypotheses are considered in the “false” case.
The “false” case is the disjunction of all other possible hypotheses besides the one you’re considering.
That’s not computable. (EDIT: or even well defined). One typically works with some limited ensemble of possible hypotheses.
Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional “something not on this list” hypothesis that covers everything else.
You appear to be thinking in terms of ad-hoc statistical techniques (“computable”, “one typically works...”), rather than fundamental laws governing belief. But the latter is what we’re interested in in this context: we want to know what’s true and how to think, not what we can publish and how to write it up.
Let me put it this way: excluding a hypothesis from the model space is merely the special case of setting its prior to zero. Whether a given piece of evidence counts for or against a hypothesis is in fact dependent on the priors of all other hypotheses, even if no hypothesis goes from possible to not or vice-versa.
As this is prior dependent, there is no objective measure of whether a hypothesis is supported or rejected by evidence.
(This is obviously true when we look at P(H_i|e). It’s a bit less so when we look at P(e|H) vs. P(e|~H), which seems objective. It is objective in the case that H and ~H are atomic hypotheses with a well-defined rule for getting P(e|~H). But if ~H is an “or” of all the other theories, then P(e|~H) depends on the prior probabilities of each of the H_i that are the subcomponents of ~H. It’s also utterly useless by itself for judging H; we want to know P(H|e) for that. P(e|H) is of course why we want P(H), so we can make useful predictions.)
It is true that in the long run, much evidence will eventually dominate any prior. But summarizing this as “log odds”, for instance, is only useful for comparing two specific hypotheses, not “this hypothesis” and “everything else”.
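A tiny numeric illustration of this prior-dependence (a Python sketch with made-up numbers): P(e|~H) is a prior-weighted average over the sub-hypotheses of ~H, so the same observation can count toward or away from H depending on how that prior mass is split.

```python
def evidence_direction(p_e_given_h: float, sub_hyps: dict[str, tuple[float, float]]) -> str:
    """sub_hyps maps each sub-hypothesis of ~H to (prior weight within ~H, P(e | sub-hypothesis))."""
    p_e_given_not_h = sum(w * p for w, p in sub_hyps.values())
    return "evidence FOR H" if p_e_given_h > p_e_given_not_h else "evidence AGAINST H"

p_e_given_h = 0.5
# Same two alternatives inside ~H, different prior weights:
print(evidence_direction(p_e_given_h, {"H1": (0.8, 0.9), "H2": (0.2, 0.1)}))  # AGAINST H
print(evidence_direction(p_e_given_h, {"H1": (0.2, 0.9), "H2": (0.8, 0.1)}))  # FOR H
```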
But I still have objections to most of what you say.
You’ve given an essentially operational definition of “evidence for” in terms of operations that can’t be done.
Yes. The standard way to express that is that you can’t actually work with P(Hypothesis), only P(Hypothesis | Model Space).
You can then, of course, expand your model space if you find it is inadequate.
“Computable” is hardly ad-hoc. It’s a fundamental restriction on how it is possible to reason.
If you want to know how to think, you had better pick a method that’s actually possible.
This really is just another facet of “all Bayesian probabilities are conditional.”
And you shouldn’t do that.
Yes, of course. The point is that if you’re using probability theory to actually reason, and not merely to set up a toy statistical model such as might appear in a scientific paper, you will in fact already be “considering” all possible hypotheses, not merely a small important-looking subset. Now it’s true that what you won’t be doing is enumerating every possible hypothesis on the most fine-grained level of description, and then computing the information-theoretic complexity of each one to determine its prior—since, as you point out, that’s computationally intractable. Instead, you’ll take your important-looking subset just as you would in the science paper, let’s say H1, H2, and H3, but then add to that another hypothesis H4, which represents the whole rest of hypothesis-space, or in other words “something I didn’t think of”/”my paradigm is wrong”/etc. And you have to assign a nonzero probability to H4.
No, see above. In science papers, “paradigm shifts” happen, and you “change your model space”. Not in abstract Bayesianism. In abstract Bayesianism, low-probability events happen, and you update accordingly. The result will look similar to “changing your model space”, because what happens is that when H4 turns out to be true (i.e., its probability is raised to something high), you then start to carve up the H4 region of hypothesis-space more finely and incorporate these “new” sub-hypotheses into your “important-looking subset”.
To return to the issue at hand in this thread, here’s what’s going on as I see it: physicists, acting as Bayesians, have assigned very low probability to psi being true given QFT, and they have assigned a very high probability to QFT. In so doing, they’ve already considered the possibility that psi may be consistent with QFT, and judged this possibility to be of near-negligible probability. That was done in the first step, where they said “P(psi|QFT) is small”. It doesn’t do to reply “well, their paradigm may be wrong”; yes, it may, but if you think the probability of that is higher than they do, then you have to confront their analysis. Sean Carroll’s post is a defense of the proposition that “P(psi|QFT) is small”; Jack’s comment is an assertion that “psi&QFT may be true”, which sounds like an assertion that “P(psi|QFT) is higher than Sean Carroll thinks it is”—in which case Jack would need to account somehow for Carroll being mistaken in his analysis.
This is basically my position. ETA: I may assign a high probability to “not all of the hypotheses that make up QFT are true”, a position I believe I can hold while not disputing the experimental evidence supporting QFT (though such evidence does decrease the probability of any part of QFT being wrong).
I don’t think Carroll’s analysis comes close to showing that P(psi|QFT) is 1 in a billion. He took one case, a psychokinesis claim that no one in parapsychology endorses and showed how it was impossible given one interpretation of what the claim might mean. We can’t look at his analysis and take it as convincing evidence that the claims of parapsychologists aren’t consistent with QFT since Carroll doesn’t once mention any of the claims made by parapsychologists!
Now there are some studies purporting to show psychokinesis (though they are less convincing than the precognition studies and actually might just be a kind of precognition). Even in these cases no one in parapsychology thinks the perturbations are the result of EM or gravitational fields; Carroll pointing out that they can’t shouldn’t result in us updating on anything.
I actually think a physicist might be able to write a convincing case for why the claims of parapsychologists can’t be right. I think there is a good chance I don’t grasp just how inconsistent these claims are with known physics—and that is one of the reasons why fraud/methodology problems/publication bias still dominate my probability space regarding parapsychology. But Carroll hasn’t come close to writing such a case. I think the reason you think he has is that you’re not familiar with a) the actual claims of parapsychologists or b) the various but inconclusive attempts to explain parapsychology results without contradicting the experimental evidence confirming QFT.
The worked example he provides is what physics would require to exist (a new force that is somehow of at least comparable strength to electromagnetism but that has somehow never been detected by experiments so sensitive that they would detect any new force more than a billionth the strength of gravity) for telekinesis to exist at all. And there are indeed parapsychologists who claim telekinesis is worth investigating.
It is not unreasonable for Carroll, having given a worked example of applying extremely well-understood physics to the question, to then expect parapsychologists to apply extremely well-understood physics to their other questions. His point (as he states in the article) is that they keep starting from an assumption that science knows nothing relevant to the questions parapsychologists are asking, rather than starting from an assumption that known science could be used to make testable, falsifiable predictions.
He doesn’t have to do the worked example for every phenomenon that parapsychology claims is worth serious investigation to make his point valid. Ignoring the existence of relevant known science is one reason parapsychology is a pseudoscience (a partial imitation) rather than science.
I could be wrong, but I think you added to this comment since I replied. Since all of my comments on the topic are getting downvoted without explanation I’ll be short.
But not spoon bending so much. In any case, being concerned about force fields is only worthwhile if you assume what is going on is cause and effect, which many, maybe most, of the attempts at explanation don’t.
This is really getting away from what Komponisto and I were talking about. I’m not really disputing the claim that parapsychology is a pseudo-science. I’m disputing the claim that Carroll’s analysis shows that the claims of parapsychology are fundamentally ruled out by current physics. I haven’t really thought about delineation issues regarding parapsychology.
But he gives no evidence that parapsychologists start from this assumption. Plenty of parapsychologists know that no force fields produced by the brain could be responsible for the effects they think they’ve found. That’s sort of their point, actually.
There are lots of silly people in the field who think the results imply dualism, of course—but that’s precisely why it would be nice to have materialists tackle the questions.
There are no significant results from parapsychologists who are aware of physics. Instead, we have results from parapsychologists that claim statistical significance that have obviously defective experimental design and/or (usually and) turn out to be unreplicable.
That is, you describe sophisticated parapsychologists but the prominent results are from unsophisticated ones.
Cite?
ETA: Bem, for example, whose study initiated this discussion, has a BA and did graduate work in physics.
This isn’t someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a similar position to the uneducated is to rule out contrarian thought before any debate occurs. Humans are still confused enough about the world that there is room for change in our current understanding of physics. There are some pretty compelling results in parapsychology, much or all of which may be due to publication bias, methodological issues, or fraud. But that isn’t obviously the case, and waving our hands and throwing out these words isn’t an explanation of the results. I’m going to try to make a post on this subject a priority now.
Did you read the linked post by Sean Carroll? Parapsychologists aren’t condemned for holding a similar position to the uneducated; they’re condemned for holding a position blatantly inconsistent with quantum field theory on the strength of evidence much, much weaker than the evidence for quantum field theory. Citing a century’s worth of experimentally confirmed physical knowledge is far from hand-waving.
Again, this is explicitly addressed by Carroll. Physicists are not confused in the relevant regimes here. Strong evidence that certain highly precise models are correct has been obtained, and this constrains where we can reasonably expect future changes in our current understanding of physics.
Now, I’m not a physicist, so if I’m actually wrong about any of this, I’m willing to be corrected. But, as the saying goes, there is a time to confess ignorance, and a time to relinquish ignorance.
We don’t know what the relevant regimes are here. Obviously human brains aren’t producing force fields that are bending spoons.
We have some experimental results. No one has any idea what they mean except it looks like something weird is happening. People are reacting to images they haven’t seen yet and we don’t have any good explanation for these results. Maybe it is fraud (with what motivation?), maybe there are methodological problems (but often no one can find any), maybe there is just publication bias (but it would have to be really high to explain the results in the precognition meta-analysis).
On the other hand, maybe our physics isn’t complete enough to explain what is going on. Maybe a complete understanding of consciousness would explain it. Maybe we’re in a simulation and our creators have added ad hoc rules that violate the laws of physics. Physics certainly rules out some explanations but Carroll certainly hasn’t shown that all but error/fraud/bias have been ruled out.
Btw, using spoon bending as the example and invoking Uri Geller is either ignorant or disingenuous of him (and I almost always love Sean Carroll). Parapsychologists more or less all recognize Geller as a fraud and an embarrassment and only the kookiest would claim that humans can bend spoons with their minds. Real parapsychological experiments are nothing like that.
I suspect it will be difficult to communicate why fraud, method error and publication bias are difficult explanations for me to accept if you aren’t familiar with the results of the field. I recommend Outside the Gates of Science if you haven’t read it yet.
It will actually be easy to communicate exactly what explanation there is for the events. Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years. He has had them do it perfectly methodologically soundly. Only now has he had a group that—through pure, random chance—happened to flip 53% heads and 47% tails. The number of students, the number of coins, the number of flips, all are large enough that this is an unlikely event—but he’s spent eight years trying to make it happen, and so happen it eventually has. Good for him!
The only problem with all of this is that the journals we take to be sources of knowledge have this rule: anything more unlikely than x must have some explanation other than pure chance. This is true at first blush, but when somebody spends years trying to make pure chance spit out the result he wants, this rule fails badly. That is all that’s going on here.
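To make the failure mode concrete, here is a rough simulation sketch (Python, with invented batch sizes and look counts, not Bem’s actual protocol) of the “keep collecting data and peeking until it comes out significant” strategy: even with a genuinely fair coin, stopping at the first p < 0.05 yields a “significant” result far more often than 5% of the time.

```python
import math
import random

def p_value(heads: int, n: int) -> float:
    """Two-sided normal-approximation p-value under the fair-coin null hypothesis."""
    z = (heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(abs(z) / math.sqrt(2))

def optional_stopping_trial(batch: int = 100, max_batches: int = 50) -> bool:
    """Flip a fair coin in batches, peeking after each batch; stop at the first p < 0.05."""
    heads = flips = 0
    for _ in range(max_batches):
        heads += sum(random.random() < 0.5 for _ in range(batch))
        flips += batch
        if p_value(heads, flips) < 0.05:
            return True  # "significant" result reached by chance alone
    return False

random.seed(0)
trials = 2000
hits = sum(optional_stopping_trial() for _ in range(trials))
print(f"{hits / trials:.0%} of null experiments reach p < 0.05")  # well above the nominal 5%
```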
Right, like I said, publication bias is a possibility. But in Honorton’s precognition meta-analysis the results were strong enough that, for them not to be significant, the ratio of unpublished studies averaging null results to published studies would have to be 46:1. That seems too high for me to be comfortable attributing everything to publication bias. It is this history of results, rather than Bem’s lone study, that troubles me.
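For anyone wondering where a number like 46:1 comes from, here is a hedged sketch of the standard fail-safe-N (file-drawer) arithmetic in Python: combine the published studies’ z-scores Stouffer-style and ask how many unpublished studies averaging a null result would be needed to drag the combined result below significance. The study count and z-scores below are invented purely to illustrate the arithmetic; they are not Honorton’s actual figures.

```python
import math

def fail_safe_n(z_scores: list[float], z_crit: float = 1.645) -> float:
    """Rosenthal-style fail-safe N: how many zero-effect studies would be needed
    to pull the Stouffer combined z below the one-tailed 0.05 threshold."""
    k = len(z_scores)
    total_z = sum(z_scores)
    # Solve total_z / sqrt(k + x) = z_crit for x.
    return (total_z / z_crit) ** 2 - k

# Invented example: 30 published studies, each with a modest positive z-score.
published = [2.1] * 30
x = fail_safe_n(published)
print(f"fail-safe N ≈ {x:.0f} unpublished null studies "
      f"({x / len(published):.0f} per published study)")
```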
What evidence is there for this?
From here,
Why do we think this means early test groups weren’t included in the study? It just sounds like it took eight years to get the large sample size he wanted.
I think that it means that early test groups weren’t included because that is the easiest way to produce the results we’re seeing.
Why eight years? Did he decide that eight years ago, before beginning to collect data? Or did he run tests until he got the data he wanted, then check how long it had taken? I am reasonably certain that if he got p-value significant results 4 years into this study, he would have stopped the tests and published a paper, saying “I took 4 years to make sure the sample size was large enough.”
Looking at the actual study, it seems to include the results of quite a few different experiments. If he either excluded early tests or continued testing until he got the results he wanted, that would obviously make the study useless, but we can’t just assume that is what happened. Yes, it is likely relative to the likelihood of psi, but since finding out what happened isn’t that hard, it seems silly just to assume.
In particular, there seems to be a lot of disagreement about the metaethics sequence, and to a lesser extent about timeless physics.
That was… surprisingly surprising. Thank you.
For reasons like those you listed, and also out of some unverbalized frustration, in the last week I’ve been thinking pretty seriously whether I should leave LW and start hanging out somewhere else online. I’m not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.
What other places on the Net are there for someone like me? Hacker News and Reddit look like dumbed-down versions of LW, so let’s not talk about those. I solved a good bit of Project Euler once; the place is tremendously enjoyable but quite narrowly focused. The n-Category Cafe is, sadly, coming to a halt. Math Overflow looks wonderful and this question by Scott Aaronson nearly convinced me to drop everything and move there permanently. The Polymath blog is another fascinating place that is so high above LW that I feel completely underqualified to join. Unfortunately, none of these are really conducive to posting new results, and moving into academia IRL is not something I’d like to do (I’ve been there, thanks).
Any other links? Any advice? And please, please, nobody take this comment as a denigration of LW or a foot-stomping threat. I love you all.
My new blog “Azimuth” may not be mathy enough for you, but if you like the n-Category Cafe, it’s possible you may like this one too. It’s more focused on technology, environmental issues, and the future. Someday soon you’ll see an interview with Eliezer! And at some point we’ll probably get into decision theory as applied to real-world problems. We haven’t yet.
(I don’t think the n-Category Cafe is “coming to a halt”, just slowing down—my change in interests means I’m posting a lot less there, and Urs Schreiber is spending most of his time developing the nLab.)
Wow.
Hello.
I didn’t expect that. It feels like summoning Gauss, or something.
Thank you a lot for twf!
Link to John Baez’s blog
It’s new? I’ve already been following it for some time. Can’t remember how I came across it in the first place though... very cool but over my head. Thanks.
The markup syntax here is a bit unusual and annoying—click the “Help” button at the bottom right of the edit window to get guidance on how to include hyperlinks. Unlike every other hyperlinking system, the text goes first and the URL second!
Make a top level post about the kind of thing you want to talk about. It doesn’t have to be an essay, it could just be a question (“Ask Less Wrong”) or a suggested topic of conversation.
I love your posts, so having seen this comment I’m going to try to write up my nascent sequence on memetic colds, aka sucker shoots, just for you. (And everyone.)
Thanks!
Same for me. My interests are more similar to your interests than to classic LW themes. There are probably many others here in the same situation. But I hope that the list of classic LW themes is not set in stone. I think people like us should try to broaden the spectrum of LW. If this attempt fails, please send me the address of the new place where you hang out online. :) But I am optimistic.
“Leaving” LW is rather strong. Would that mean not posting? Not reading the posts, or the comments? Or just reading at a low enough frequency that you decouple your sense of identity from LW?
I’ve been trying to decide how best to pump new life into The Octagon section of the webcomic collective forum Koala Wallop. The Octagon started off when Dresden Codak was there, and became the place for intellectual discussion and debate. The density of math and computer-theoretic enthusiasts is an order of magnitude lower than here or the other places you mentioned, and those who know such stuff well are LW lurkers or posters too. There was an overkill of politics on The Octagon, the levels of expertise on subjects are all over the spectrum, and it’s been slowing down for a while, but I think a good push will revive it. The main thing is that it lives inside of a larger forum, which is a silly, fun sort of community. The subforum simply has a life of its own.
Not that I claim any ownership over it, but:
I’m going to try to more clearly brand it as “A friendly place to analytically discuss fantastic, strange or bizarre ideas.”
Of course, MathOverflow isn’t really a place for discussion...
At least as far as math is concerned, people not in academia can publish papers. As for the Polymath blog, I’d actually estimate that you are at about the level of most Polymath contributors, although most of the impressive work there seems to be done by a small fraction of the people there.
About Polymath: thanks! (blushes)
I have no fetish for publishing papers or having an impressive CV or whatever. The important things, for me, are these: I want to have meaningful discussions about my areas of interest, and I want my results to be useful to somebody. I have received more than a fair share of “thank yous” here on LW for clearing up mathy stuff, but it feels like I could be more useful… somewhere.
I found this amusing because by those standards, cults are everywhere. For example, I run a professional Magic: The Gathering team and am pretty sure I’m not a cult leader. Although that does sound kind of neat. Observe:
Eileen Barker:
When events are close, we spend a lot of time socially separate from others so as to develop and protect our research. On occasion ‘Magic colonies’ form for a few weeks. It’s not substantially less isolating than what SIAI does. Check.
I have imparted huge amounts of belief about a large subset of our world, albeit a smaller one than Eliezer is working on. Partial Check.
I make reasonably important decisions for my teammates (on the level of the cryonics decision, if cryonics isn’t worthwhile) and do what I need to do to make sure they follow them far more than they would without me. Check.
We identify other teams as ‘them’ reasonably often, and certain other groups are certainly viewed as the enemy. Check.
Nope, even fainter argument than Eliezer.
Again, yes, obviously.
Shirley Harrison:
I claim a special mission that I am uniquely qualified to fulfill. Not as important a one, but still. Check.
My writings count at least as much as the sequences. Check.
Not intentionally, but often new recruits have little idea what to expect. Check plus.
Totalitarian rules structure, and those who game too much often alienate friends and family. I’ve seen it many times, and far less of a cheat than saying that you’ll be alienated from them when they are all dead and you’re not because you got frozen. Check.
I make people believe what I want with the exact same techniques we use here. If anything, I’m willing to use slightly darker arts. Check.
We make the lower level people do the grunt work, sure. Check.
Based on some of the deals I’ve made, one looking to demonize could make a weak claim. Check plus.
Exclusivity. In spades. Check.
I’d also note that the exercise left to the reader is much harder, because the other checklists are far harder to fudge.
I have to disagree that this “smugness” even remotely reaches the level that is characteristic of a cult.
As someone who has frequently expressed disagreement with the “doctrine” here, I have occasionally encountered both reactions that you mention. But those sporadic reactions are not much of a barrier to criticism—any critic who persists here will eventually be engaged intelligently and respectfully, assuming that the critic tries to achieve a modicum of respect and intelligence on his own part. Furthermore, if the critic really engages with what his interlocutors here are saying, he will receive enough upvotes to more than repair the initial damage to his karma.
Yes. LessWrong is not in fact hidebound by groupthink. I have lots of disagreement with the standard LessWrong belief cluster, but I get upvotes if I bother to write well, explain my objections clearly and show with my reference links that I have some understanding of what I’m objecting to. So the moderation system—“vote up things you want more of”—works really well, and I like the comments here.
This has also helped me control my unfortunate case of asshole personality disorder elsewhere when I see someone being wrong on the Internet. It’s amazing what you can get away with if you show your references.
This would be easier to parse if you quoted the individual criteria you are evaluating right before the evaluation, eg:
I’ve not seen this happening—examples?
I think it would be more accurate to say that anyone who after reading the sequences still disagrees, but is unable to explain where they believe the sequences have gone wrong, is not worth arguing with.
With this qualification, it no longer seems like evidence of being a cult.
That’s the pejorative usage. There is also:
“Cult also commonly refers to highly devoted groups, as in:
Cult, a cohesive group of people devoted to beliefs or practices that the surrounding culture or society considers to be outside the mainstream
Cult of personality, a political leader and his following, voluntary or otherwise
Destructive cult, a group which exploits and destroys its members or even non-members
Suicide cult, a group which practices mass self-destruction, as occurred at Jonestown
Political cult, a political group which shows cult-like features”
http://en.wikipedia.org/wiki/Cults_of_personality
http://en.wikipedia.org/wiki/Cult_following
http://en.wikipedia.org/wiki/Cult_%28religious_practice%29
What are the scenarios where someone unfamiliar with this website would hear about Roko’s deleted post?
I suppose it could be written about dramatically (because it was dramatic!) but I don’t think anyone is going to publish such an account. It was bad from the perspective of most LWers—a heuristic against censorship is a good heuristic.
This whole thing is ultimately a meta discussion about moderation policy. Why should this discussion about banned topics be that much more interesting than a post on Hacker News that is marked as dead? Hacker News generally doesn’t allow discussion of why stories were marked dead. The moderators are anonymous and have unquestioned authority.
If Less Wrong had a mark as dead function (on HN unregistered users don’t see dead stories, but registered users can opt-in to see them), I suspect Eliezer would have killed Roko’s post instead of deleting it to avoid the concerns of censorship, but no one has written that LW feature yet.
As a solid example of what a non-disaster it was, PR-wise: I doubt that anyone at the Singularity Summit who isn’t a regular Less Wrong reader (the majority of attendees) has heard that Eliezer deleted a post. It’s just not the kind of thing that actually makes a PR disaster… honestly, if this were a PR issue it might be a net positive, because it would lead some people to hear of Less Wrong who otherwise never would have. Please don’t take that as a reason to make this a PR issue.
Eliezer succeeded in the sense that it is very unlikely that people in the future on Less Wrong are going to make stupid emotionally abhorrent posts about weird decision theory torture scenarios. He failed in that he could have handled the situation better.
If anyone would like to continue talking about Less Wrong moderation policy, the place to talk about it is the Meta Thread (though you’d probably want to make a new one (good for +[20,50] karma!) instead of discussing it in an out of season thread)
As someone who had over 20 points of karma obliterated for reasons I don’t fully understand, for having posted something which apparently strayed too close to a Roko post which I never read in its full version, I can attest that further and broader discussion of the moderation policy would be beneficial. I still don’t really know what happened. Of course I have vague theories, and I’ve received a terse and unhelpful response from EY (a link to a horror story about a “riddle” which kills—a good story which I simply don’t accept as a useful parable of reality), but nothing clear. I do not think that I have anything of outstanding value to offer this community, but I suspect that Roko, little I, and the half-dozen others like us who probably exist are a net loss to the community if driven away, especially if not being seen as cultlike is valuable.
I believe you lost 20 karma because you had 2 net downvotes on your post at the time it was deleted (and those votes still affect your total karma, although the post cannot be further upvoted or downvoted). The loss of karma did not result directly from the deletion of the post, except for the fact that the deletion froze the post’s karma at the level it was at when it was deleted.
I only looked briefly at your post, don’t remember very much about it, and am only one reader here, but from what I recall, your post did not seem so obviously good that it would have recovered from those two downvotes. Indeed, my impression is that it’s more probable that if the post had been left up longer, it would have been even more severely downvoted than it was at the time of deletion, as is the case with many people’s first posts. I’m not very confident about that, but there certainly would have been that risk.
All that being said, I can understand if you would rather have taken the risk of an even greater hit to karma if it would have meant that people were able to read and comment on your post. I can also sympathize with your desire for a clearer moderation policy, although unless EY chose to participate in the discussion, I don’t think clearer standards would emerge, because it’s ultimately EY’s call whether to delete a post or comment. (I think there are a couple others with moderation powers, but it’s my understanding that they would not independently delete a non-troll/spam post).
I think it was 30 karma points (3 net downvotes), though I’m not sure. And I believe that it is entirely possible that some of those downvotes (more than 3, because I had at least 3 upvotes) were for alleged danger, not for lack of quality. Most importantly, if the post hadn’t been deleted, I could have read the comments which presumably would have given me some indication of the reason for those downvotes.
Looking at my own posts I see a lot of this problem; that is, the problem of addressing only far too small an audience. Thank you for pointing it out.