I can’t speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.
What exactly are Eliezer’s qualifications supposed to be?
I’m definitely not trying to attack anyone (and you’re right, my comment could be read that way). But I’m also not just curious. I figured this was the answer. Lots of time spent thinking, writing and producing influential publications on FAI is about all the qualifications one can reasonably expect (producing a provable mathematical formalization of friendliness is the kind of thing no one is qualified to do before they do it, and the AI field in general is relatively new and small). And Eliezer is obviously a really smart guy. He’s probably even the most likely person to solve it. But the effort to address the friendliness issue seems way too focused on him and the people around him. You shouldn’t expect any one person to solve a Hard problem. Insight isn’t that predictable, especially when no one in the field has solved comparable problems before. Maybe Einstein was the best bet to formulate a unified field theory, but a) he never did and b) he had actually had comparable insights in the past. Part of the focus on Eliezer is just an institutional and financial thing, but he and a lot of people here seem to encourage this state of affairs.
No one looks at open problems in other fields this way.
No one looks at open problems in other fields this way.
Yes, the situation isn’t normal or good. But this isn’t a balanced comparison, since we don’t currently have a field; too few people understand the problem and have seriously thought about it. This is gradually changing, and I expect it will be visibly less of a problem in another 10 years.
I may have an incorrect impression, but SIAI or at least Eliezer’s department seems to have a self-image comparable to the Manhattan project rather than early pioneers of a scientific field.
I may have an incorrect impression, but SIAI or at least Eliezer’s department seems to have a self-image comparable to the Manhattan project
Eliezer has said: “I have a policy of keeping my thoughts on Friendly AI to the object level, and not worrying about how important or unimportant that makes me.” Your call as to whether you believe that. (The rest of that post, and some of his other posts in that discussion, address some points similar to those that you raised.)
That said, “self-image comparable to the Manhattan project” is an unusually generous ascription of humility to SIAI and Eliezer. :P
...producing a provable mathematical formalization of friendliness [...] And Eliezer is obviously a really smart guy. He’s probably even the most likely person to solve it.
I haven’t seen any proof of his math skills that would justify this statement. By what evidence have you arrived at the conclusion that he can do it at all, or even approach it? The sequences and the SIAI publications certainly show that he was able to compile a bunch of existing ideas into a coherent framework of rationality, yet there is not much novelty to be found anywhere.
Which statement are you talking about? Saying someone is the most likely person to do something is not the same as saying they are likely to do it. You haven’t said anything in this comment that I disagree with, so I don’t understand what we’re disputing.
To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.
How influential are his publications if they could not convince Ben Goertzel (SIAI/AGI researcher), someone who has read Yudkowsky’s publications and all of the LW sequences? You could argue that he and other people don’t have the smarts to grasp Yudkowsky’s arguments, but who does? Either Yudkowsky is so smart that some academics are unable to appreciate his work or there is another problem. How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?
The problem here is that telling someone that Yudkowsky spent a lot of time thinking and writing about something is not a qualification. Further, it does not guarantee that he would acknowledge and welcome the contributions of others who disagree.
The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn’t have to be free of people who disagree with it to be influential, and it doesn’t even have to be correct.
How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?
Level up first. I can’t evaluate physics research, so I just accept that I can’t tell which of it is correct; I don’t try to figure it out from the politics of physicists arguing with each other, because that doesn’t work.
Level up first. I can’t evaluate physics research, so I just accept that I can’t tell which of it is correct; I don’t try to figure it out from the politics of physicists arguing with each other, because that doesn’t work.
But what does this mean regarding my support of the SIAI? Imagine I were a politician who had no time to level up first but who had to decide whether some particle accelerator or AGI project should be financed at all, or should go ahead with full support and without further security measures.
Would you tell a politician to go and read the sequences, and if, after reading the publications, they don’t see why AGI research is as dangerous as the SIAI portrays it, they should just forget about it and stop trying to figure out what to do? Or do you simply tell them to trust a fringe group which predicts that a given particle accelerator might destroy the world when all experts claim there is no risk?
Writing is influential when many people are influenced by it.
You talked about Yudkowsky’s influential publications. I thought you meant some academic papers, not the LW sequences. They indeed influenced some people, yet I don’t think they influenced the right people.
Your interpretation seems uncharitable. I find it unlikely that you have enough information to make a confident judgment that XiXiDu’s comment is born of motivated cognition to a greater extent than your own comments.
Moreover, I believe that even when such statements are true, one should avoid making them when possible as they’re easily construed as personal attacks which tend to spawn an emotional reaction in one’s conversation partners pushing them into an Arguments as soldiers mode which is detrimental to rational discourse.
Moreover, I believe that even when such statements are true, one should avoid making them when possible
Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you’re going wrong, you won’t improve.
as they’re easily construed as personal attacks which tend to spawn an emotional reaction in one’s conversation partners
On this blog, any conversational partners should definitely not be construing anything as personal attacks.
pushing them into an arguments as soldiers mode which is detrimental to rational discourse.
On this blog, any person should definitely be resisting this push.
Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you’re going wrong, you won’t improve.
I did not say that one should avoid telling people when and where they’re going wrong. I was objecting to the practice of questioning people’s motivations. For the most part I don’t think that questioning somebody’s motivations is helpful to him or her.
On this blog, any conversational partners should definitely not be construing anything as personal attacks.
I disagree. Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise. Just because the blog is labeled as being devoted to the art of refining rationality doesn’t mean that the commentators are always above this sort of thing.
I agree with you insofar as I think that one should work to interpret comments charitably.
On this blog, any person should definitely be resisting this push.
I agree, but this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.
I was objecting to the practice of questioning people’s motivations.
Not questioning their motivations; you objected to the practice of pointing out motivated cognition:
I find it unlikely that you have enough information to make a confident judgment that XiXiDu’s comment is born of motivated cognition … Moreover, I believe that even when such statements are true, one should avoid making them when possible
Pointing out that someone hasn’t thought through the issue because they are motivated not to—this is not an attack on their motivations; it is an attack on their not having thought through the issue. Allowing people to keep their motivated cognitions out of respect for their motivations is wrong, because it doesn’t let them know that they have something wrong, and they miss a chance to improve it.
Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise.
To paraphrase steven, if you’re interested in winning disputes you should dismiss personal attacks, but if you’re interested in the truth you should dig through their personal attacks for any possible actual arguments. Whether or not it’s a personal attack, you ought to construe it as if it is not, in order to maximise your chances of finding truth.
this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.
Agreed. I think the first two parts of our comments address whether one should exert such a push. I think you’re right, and this whole third part of our discussion is irrelevant.
It’s quite possible to be inaccurate about other people’s motivations, and if you are, then they will have another reason to dismiss your argument.
How do you identify motivated cognition in other people?
Not thinking something through could be habitual sloppiness, repeating what one has heard many times, or not thinking that a question is worthy of much mental energy rather than a strong desire for a particular conclusion. (Not intended as a complete list.)
Making a highly specific deduction from an absence rather than a presence strikes me as especially likely to go wrong.
How do you identify motivated cognition in other people?
Some of the same ways I see it in myself. Specifically, when dealing with others:
Opposed to easy (especially quick or instant) tests: strong evidence of motivated stopping.
All for difficult (especially currently-impossible) tests: moderate evidence of motivated continuing.
Waiting on results of specific test to reconsider or take a position: moderate evidence of motivated continuing.
Seemingly-obvious third alternative: very strong evidence of motivated stopping. Caveat! this one is problematic. It is very possible to miss third alternatives.
Opposed to plausible third alternatives: weak evidence of motivated stopping—strong evidence with a caveat and split, as “arguments as soldiers” can also produce this effect. Mild caveat on plausibility being somewhat subjective.
In the case of XiXiDu’s comment, focusing on Ben Goertzel’s rejection is an example of waiting on results from a specific test. That is enough evidence to locate the motivated continuing hypothesis¹, i.e. that XiXiDu does not want to accept the current best-or-accepted-by-the-community answer.
The questions XiXiDu posed afterwards seem to have obvious alternative answers, which suggests motivated stopping. He seems to be stopping on “Something’s fishy about Eliezer’s setup”.
¹: As well as "Goertzel is significantly ahead of the AI development curve", "AGI research and development is a field with rigid formal rules on what does and doesn’t convince people"—the first is easily tested by looking at Ben’s other views, the second is refuted by many researchers in that field.
Ben Goertzel believes in psychic phenomenon (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer’s stance.
For what it’s worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He’s also responsible for coining “Seed AI”.
Ben Goertzel believes in psychic phenomenon (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer’s stance.
Indeed, I was just trying to figure out what someone with money or power, who wants to know what the right thing to do is but who does not have the smarts, should do. Someone like a politician or billionaire who would like to support either some AGI research or the SIAI. How are they going to decide what to do if all AGI experts tell them that there is no risk from AGI research and that the SIAI is a cult, when at the same time the SIAI tells them the AGI experts are intellectually impotent and the SIAI is the only hope for humanity to survive the AI revolution? What should someone who does not have the expertise or smarts to evaluate those claims, but who nevertheless has to decide how to use his power, do? I believe this is not an unrealistic scenario, as many rich or powerful people want to do the right thing, yet do not have the smarts to see why they should trust Yudkowsky instead of hundreds of experts.
For what it’s worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He’s also responsible for coining “Seed AI”.
Interesting, when did he come up with the concept of "Seed AI"? I ask because it is mentioned in Karl Schroeder’s Ventus (Tor Books, 2000), ISBN 978-0312871970.
I didn’t find the phrase "Seed AI" there. One plot element is a "resurrection seed", which is created by an existing, mature evil AI to grow itself back together in case its main manifestation is destroyed. A Seed AI is a different concept: it’s something the pre-AI engineers put together that grows into a superhuman AI by rewriting itself to be more and more powerful. A Seed AI is specifically a method to get to AGI from not having one, not just an AI that grows from a seed-like thing. I don’t remember recursive self-improvement being mentioned with the seed in Ventus.
A precursor concept, where the initial AI bootstraps itself by merely learning things, not necessarily by rewriting its own architecture, goes all the way back to Alan Turing’s 1950 paper on machine intelligence.
Look at it this way. Once long ago two kinds of work converged. We’d figured out how to make machines that could make more machines. And we’d figured out how to get machines to… not exactly think, but do something very much like it. So one day some people built a machine which knew how to build a machine smarter than itself. That built another, and that another, and soon they were building stuff the men who made the first machine didn’t even recognize.
[...]
And, some of the mechal things kept developing, with tremendous speed, and became more subtle than life. Smarter than humans. Conscious of more. And, sometimes, more ambitious. We had little choice but to label them gods after we saw what they could do—namely, anything.
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
Yes, but I believe it is a bit weird for a Wikipedia article to state that someone is the originator of the Seed AI theory when he just coined the term. I wasn’t disputing anything, just trying to figure out if it is actually the case that Yudkowsky came up with the concept in the first place.
They did not command the wealth of nations, these researchers. Although their grants amounted to millions of Euros, they could never have funded a deep-space mission on their own, nor could they have built the giant machineries they conceived of. In order to achieve their dream, they built their prototypes only in computer simulation, and paid to have a commercial power satellite boost the Wind seeds to a fraction of light speed. [...] no one expected the Winds to bloom and grow the way they ultimately did.
It is further explained that the Winds were designed to evolve on their own so they are not mere puppets of human intentions but possess their own intrinsic architecture.
In other places in the book it is explained how humans did not create their AI Gods but that they evolved themselves from seeds designed by humans.
Ben Goertzel believes in psychic phenomenon (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer’s stance.
I don’t think the failure of someone to be convinced of some position is ever strong evidence against that position. But this argument here is genuinely terrible. I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?
If someone is unable to examine the available evidence and come to a sane conclusion on a particular topic, this makes it less likely that they are able to examine the available evidence and come to sane conclusions on other topics.
I don’t take Goertzel seriously for the same reason I don’t take young earth creationists seriously. It’s not that I disagree with him, it’s that his beliefs have almost no connection to reality.
(If it makes you feel better, I have read some of Goertzel’s writing on AGI, and it’s stuffed full of magical thinking.)
And computer scientists haven’t understood the self – because it isn’t about computer science. It’s about the emergent dynamics that happen when you put a whole bunch of general and specialized pattern recognition agents together – a bunch of agents created in a way that they can really cooperate – and when you include in the mix agents oriented toward recognizing patterns in the society as a whole.
and
The goal systems of humans are pretty unpredictable, but a software mind like Novamente is different – the goal system is better-defined. So one reasonable approach is to make the first Novamente a kind of Oracle. Give it a goal system with one top-level goal: To answer peoples’ questions, in a way that’s designed to give them maximum understanding.
It’s possible that with sufficient real-world intelligence tends to come a sense of connectedness with the universe that militates against squashing other sentiences.
I mostly disagree with Ben, but I don’t think judging him based on that paper is fair. It’s pretty bad, but it was also written in 1996. Fourteen years is a lot of time to improve as a thinker.
I had that thought too, and I was thinking of retracting or amending my comment to that effect, but looking at some of his later publications in the same journal(?) suggests that he hasn’t leveled up much since then.
“The Futility Of Emergence” really annoys me. It’s a perfectly useful word. It’s a statement about the map rather than about the territory, but it’s a useful one. Whereas magic means “unknowable unknowns”, emergent means “known unknowns”—the stuff that we know follows, we just don’t know how.
e.g. Chemistry is an emergent property of the Schrodinger equation, but calculating anything useful from that is barely in our grasp. So we just go with the abstraction we know, and they’re separate sciences. But we do know we have that work to do.
Just linking to that essay every time someone you’re disagreeing with says “emergent” is difficult to distinguish from applause lights.
Saying the word “emergent” adds nothing. You’re right that it’s not as bad as calling something magic and declaring that it’s inherently unknowable, but it also offers zero explanatory power. To reword your example:
Chemistry is a property of the Schrodinger equation, but calculating anything useful from that is barely in our grasp. So we just go with the abstraction we know, and they’re separate sciences. But we do know we have that work to do.
There is absolutely no difference in meaning when you take the word “emergent” out. That’s why it isn’t useful, which Eliezer was pointing out.
Nitpick: I don’t think that is exactly what EY was pointing out. Take a look at the comments and the general response of “Huh? Who makes that mistake?” It seems EY was complaining about the tendency of AGI researchers to use “emergence” as if it were an explanation, not ordinary use of the word that doesn’t pretend it is one but just, say, points out that the behavior is surprising given what it’s composed of, or that your current methods aren’t powerful enough to predict the consequences. He didn’t seem to have realized that particular mistake was mostly localized to AGI people.
It seems more likely that when the cited people said “intelligence is an emergent phenomenon”, they were misunderstood as proposing that as a satisfactory explanation of the phenomenon.
There is absolutely no difference in meaning when you take the word “emergent” out. That’s why it isn’t useful, which Eliezer was pointing out.
I’m not entirely sure this is correct. I wouldn’t call the trajectories of planets and galaxies "properties" of Relativity, but I would call them emergent behavior due to Relativity. It’s a stylistic and grammatical choice, like when to use "which" and when to use "that." They may seem the same to the uninitiated, but there’s a difference and the initiated can tell when you’re doing it wrong.
So, I agree with David Gerard that trying to eradicate the use of the word is misplaced. It’d be like saying “the word ‘which’ is obsolete, we’re only going to use ‘that’ and look down on anyone still using ‘which’.” You lose far more by such a policy than you gain.
I don’t take Goertzel seriously for the same reason I don’t take young earth creationists seriously. It’s not that I disagree with him, it’s that his beliefs have almost no connection to reality.
From what I’ve seen, the people who comment here who have read Broderick’s book have come away, if not convinced psi describes some real physical phenomena, convinced that the case isn’t at all open and shut the way young earth creationism is. When an issue is such that smart, sane people can disagree, then you have to actually resolve the object level disagreement before you can use someone’s beliefs on the issue in a general argument about their rationality. You can’t just assume it as you do here.
Goertzel’s paper on the subject is about extending the de Broglie Bohm pilot wave theory in a way that accounts for psi while being totally consistent with all known physics. Maybe it is nonsense, I haven’t read it. But you can’t assume it is.
Maybe it is nonsense, I haven’t read it. But you can’t assume it is.
I disagree. I do not need to (and should not) discard my priors when evaluating claims.
It would be an error in reasoning on my part if I did not account for the low prior (to reading it) probability of a psionics theory being sane when evaluating the proponent’s other claims. For emphasis: not lowering my confidence in Goertzel’s other beliefs because he is a proponent of psi without me having read his paper would be an outright mistake.
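A toy Bayes calculation of the kind of update I mean (all numbers invented for illustration):

```python
# Endorsing a claim with a very low prior is evidence about the endorser's
# reasoning, even before reading their argument for it.
p_reliable = 0.7                   # prior that a given researcher reasons reliably
p_endorse_given_reliable = 0.02    # reliable reasoners rarely endorse psi
p_endorse_given_unreliable = 0.30  # unreliable reasoners endorse it more often

p_endorse = (p_endorse_given_reliable * p_reliable
             + p_endorse_given_unreliable * (1 - p_reliable))
p_reliable_posterior = p_endorse_given_reliable * p_reliable / p_endorse
print(p_reliable_posterior)  # ~0.13, down from the 0.7 prior
```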
I also note that you defending Goertzel on the psi point is evidence against Goertzel’s beliefs regarding AI. Extremely weak evidence.
Could you unpack your reasoning? Do you mean that Jack defending Goertzel on psi discredits defense of Goertzel on AI because it shows such defense to be less correlated to the validity of the opinion than previously thought? Or did you drop a negation or something and mean the opposite of what you wrote, because Jack defending Goertzel on psi is very slight evidence of Goertzel’s opinion on psi not being as crazy as you previously thought?
I don’t think the failure of someone to be convinced of some position is ever strong evidence against that position.
Ever is a strong word. If a competent expert in a field who has a known tendency to err slightly on the side of too much openness to the cutting edge fails to be convinced by a new finding within his field, that says an awful lot.
I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?
That is simply not the form of the argument you quote. “Ben Goertzel believes in psychic phenomenon” can not be represented as “I disagree with person x ”.
That is simply not the form of the argument you quote. “Ben Goertzel believes in psychic phenomenon” can not be represented as “I disagree with person x ”.
I’m being generous and giving the original comment credit for an implicit premise. As stated, the argument is "Person x believes y, therefore person x is wrong about z." This is so obviously wrong it makes my head hurt. WrongBot’s point is that someone has to have a poor reasoning capacity to believe in psi. But since he didn’t provide any evidence to that effect, it reduces to 'I disagree with Goertzel about psi'.
I generally don’t try to provide evidence for every single thing I say, and I am especially lax about things that I consider to be incredibly obvious.
But I’m annoyed enough to lay out a very brief summary of why belief in PSI is ludicrous:
It isn’t permitted by known physics.
There are no suggested mechanisms (so far as I’m aware) for PSI which do not contradict proven physical laws.
The most credible studies which claim to demonstrate PSI have tiny effect sizes, and those haven’t been replicated with larger sample sizes.
Publication bias.
PSI researchers often seem to possess motivated cognition.
We’ve analyzed the functioning of individual neurons pretty closely. If there are quantum microtubules or other pseudoscientific nonsense in them, they don’t seem to affect how those individual neurons behave.
No one has to give evidence for everything they say, but when things that you thought were obviously wrong begin to get defended by physics-literate reductionist materialists, that seems like a good time to lower your confidence.
There are no suggested mechanisms (so far as I’m aware) for PSI which do not contradict proven physical laws.
Well to begin with, Goertzel’s paper claims to be such a mechanism. Have you read it? I don’t know if it works or not. Seems unwise to assume it doesn’t though.
Publication bias, motivated cognition and effect size are all concerns and were my previous explanation. I found that this meta-analysis upset that view for me.
If PSI exploits weird physics in a complicated manner and produces such tiny effects, where the hell did the mechanism come from? PSI would obviously be a very useful adaptation, so why don’t we see it in other species? Why aren’t the effects stronger, since there’s such a strong evolutionary pressure in favor of them?
Goertzel’s paper also includes psychokinesis as a PSI phenomenon supported by strong evidence. I would love to see the study he’s talking about for that one. Or a video.
If PSI exploits weird physics in a complicated manner and produces such tiny effects, where the hell did the mechanism come from? PSI would obviously be a very useful adaptation, so why don’t we see it in other species? Why aren’t the effects stronger, since there’s such a strong evolutionary pressure in favor of them?
All of this is also discussed in Outside the Gates. I can try to dig up what he said this weekend.
Goertzel’s paper also includes psychokinesis as a PSI phenomenon supported by strong evidence. I would love to see the study he’s talking about for that one. Or a video.
The experiments aren’t macroscopic. The results involve statistical deviations from expected normal distributions of, say, white noise generators when participants try to will the results in different directions. I don’t think these results are nearly as compelling as other things; see Jahn and Dunne 2005, for example. They had some methodological issues, and the one attempt that was made at replication, while positive, wasn’t significant at anywhere near the level of the original.
If you’re actually interested you should consider checking out the book. It is a quick, inexpensive read. Put it this way: I’m not some troll who showed up here to argue about parapsychology. Six months ago I was arguing your position here with someone else and they convinced me to check out the book. I then updated significantly in the direction favoring psi (not enough to say it exists more likely than not, though). Everything you’ve said is exactly what I was saying before. It turns out that there are sound responses to a lot of the obvious objections, making the issue not nearly as clear cut as I thought.
As stated, the argument is "Person x believes y, therefore person x is wrong about z." This is so obviously wrong it makes my head hurt.
It would be wrong if it were a logical deduction instead of an inference. That is, if WrongBot actually wrote ‘therefore’ or otherwise signaled absolute deductive certainty then he would be mistaken. As is he presents it as evidence, which it in fact is.
WrongBot’s point is that someone has to have a poor reasoning capacity to believe in psi. But since he didn’t provide any evidence to that effect, it reduces to 'I disagree with Goertzel about psi'.
There is a clear implied premise ‘psychic phenomenon are well known to be bullshit’. Not all baseline premises must be supported in an argument. Instead, the argument should be considered stronger or weaker depending on how reliable the premises are. I don’t think WrongBot loses too much credibility in this case by dismissing psychic phenomenon.
It would be wrong if it were a logical deduction instead of an inference. That is, if WrongBot actually wrote ‘therefore’ or otherwise signaled absolute deductive certainty then he would be mistaken. As is he presents it as evidence, which it in fact is.
It isn’t even evidence until you include a premise about the likelihood of y, which we agree is the implied premise.
There is a clear implied premise ‘psychic phenomenon are well known to be bullshit’. Not all baseline premises must be supported in an argument. Instead, the argument should be considered stronger or weaker depending on how reliable the premises are. I don’t think WrongBot loses too much credibility in this case by dismissing psychic phenomenon.
I think I’m just restating the exchange I had with komponisto on this point. Goertzel’s position isn’t that of someone who doesn’t know any physics or Enlightenment-style rationality. It is clearly a contrarian position which should be treated rather differently, since we can assume he is familiar with the reasons why psychic phenomena are 'well known to be bullshit'. It is a fully generalizable tactic which can be used against any and all contrarian thinkers. Try "Robin Hanson thinks we should cut health care spending 50%, therefore he is less likely to be right about fertility rate."
It isn’t even evidence until you include a premise about the likelihood of y, which we agree is the implied premise.
This is obviously going to be the case when trying to convince an individual of something. The beliefs (crackpot or otherwise) of the target audience are always going to be relevant to persuasiveness. As a comment directed in part to the wider lesswrong audience, the assumed premises will be different.
Try “Robin Hanson thinks we should cut health care spending 50%, therefore he is less likely to be right about fertility rate.”
If I were a reader who thought Robin’s position on health care was as implausible as belief in magic, and thought that making claims about the fertility rate was similar to making claims about AI strategy, then I would take this seriously. As it stands, the analogy is completely irrelevant.
Any argument of this nature needs to include some explanation of why someone’s ability to think about y is linked to their ability to think about z. But even with that (which wasn’t included in the comment) you can only conclude that y and z imply each other. You can’t just conclude z.
In other words, you have to show Goertzel is wrong about psychic phenomenon before you can show that his belief in it is indicative of reasoning flaws elsewhere.
I got the impression from Damien Broderick’s book that a lot of PSI researchers do understand physics and aren’t postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use. There’s a story that Einstein was interested in PSI research, but declared it nonsense when the claimed results showed PSI effects that weren’t subject to the inverse square law, so this isn’t a new idea.
Damien Broderick’s attitude in his book is basically that there’s a bunch of anomalous observations and neither a satisfactory explanation nor, in his opinion, a refutation for them exists. Goertzel’s attitude is to come up with a highly speculative physical theory that could explain that kind of phenomena, and which would take a bit more than "would need extra particles" to show as nonsense.
“Not understanding basic physics” doesn’t really seem to cut it in either case. “It’s been looked into by lots of people, a few of them very smart, for 80 years, and nothing conclusive has come out of it, so most likely there isn’t anything in it, and if you still want to have a go, you better start with something the smart people in the 1970s didn’t have” is basically the one I’ve got.
I’m not holding my breath over the recent Bem results, since he seems to be doing pretty much the same stuff that was done in the 70s and always ended up failing one way or the other, but I’m still waiting for someone more physics-literate to have a go at Goertzel’s pilot wave paper.
I got the impression from Damien Broderick’s book that a lot of PSI researchers do understand physics and aren’t postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use...
“Not understanding basic physics” doesn’t really seem to cut it in either case
“Not understanding basic physics” sounds like a harsh quasi-social criticism, like “failing at high-school material”. But that’s not exactly what’s meant here. Rather, what’s meant is more like “not being aware of how strong the evidence against psi from 20th-century physics research is”.
The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H [EDIT: technically, this is not necessarily true, but it usually is in practice, and becomes more likely as P(H|M) approaches 0]. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.
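For concreteness, a minimal numerical sketch of that point, with made-up numbers (M stands in for something like QFT, H for psi) and with the assumption that the observation bears on H only through M; the bracketed caveat above is exactly about when that assumption fails:

```python
# Evidence for a model M that assigns low probability to H is evidence against H.
p_M = 0.5               # prior that the model is correct
p_H_given_M = 0.001     # the model says H is very unlikely
p_H_given_not_M = 0.3   # without the model, H is much more open

def p_H(p_m):
    # law of total probability over M and not-M
    return p_H_given_M * p_m + p_H_given_not_M * (1 - p_m)

print(p_H(p_M))  # prior P(H) = 0.1505

# A high-precision experiment e favoring M at 9:1, i.e. P(e|M)=0.9, P(e|~M)=0.1:
p_M_posterior = 0.9 * p_M / (0.9 * p_M + 0.1 * (1 - p_M))  # = 0.9
print(p_H(p_M_posterior))  # posterior P(H) = 0.0309: confirming M pushed H down
```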
The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.
Evidence distinguishes between models; it is not for individual models. There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.
Evidence distinguishes between models; it is not for individual models.
By the Bayesian definition of evidence, “evidence for” a hypothesis (including a “model”, which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.
There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.
Carroll claims that current data implies the probability of such models being correct is near zero. So I’d like to invoke Aumann here and ask what your explanation for the disagreement is. Where is Carroll’s (and others’) mistake?
including a “model”, which is just a name for a complex conjunction of hypotheses
If models are just complex conjunctions of hypotheses then the evidence that confirms models will often confirm some parts of the model more than others. Thus the evidence does little to distinguish the model from a different model which incorporates slightly different hypotheses.
By the Bayesian definition of evidence, “evidence for” a hypothesis (including a “model”, which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.
Yes, but this depends on what other hypotheses are considered in the “false” case.
One typically works with some limited ensemble of possible hypotheses
Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional "something not on this list" hypothesis that covers everything else.
You appear to be thinking in terms of ad-hoc statistical techniques (“computable”, “one typically works...”), rather than fundamental laws governing belief. But the latter is what we’re interested in in this context: we want to know what’s true and how to think, not what we can publish and how to write it up.
Let me put it this way: excluding a hypothesis from the model space is merely the special case of setting its prior to zero. Whether a given piece of evidence counts for or against a hypothesis is in fact dependent on the priors of all other hypotheses, even if no hypothesis goes from possible to not or vice-versa.
As this is prior dependent, there is no objective measure of whether a hypothesis is supported or rejected by evidence.
(This is obviously true when we look at P(H_i|e). It’s a bit less so when we look at P(e|H) vs P(e|~H). This seems objective. It is objective in the case that H and ~H are atomic hypotheses with a well-defined rule for getting P(e|~H). But if ~H is an or of "all the other theories", then P(e|~H) is dependent on the prior probabilities for each of the H_i that are the subcomponents of ~H. It’s also utterly useless by itself for judging H. We want to know P(H|e) for that. P(e|H) is of course why we want P(H), so we can make useful predictions.)
It is true that in the long run much evidence will eventually dominate any prior. But summarizing this as "log odds", for instance, is only useful for talking about comparing two specific hypotheses, not "this hypothesis" and "everything else".
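A toy calculation of that prior-dependence (the numbers are invented purely for illustration):

```python
# P(e|~H) depends on the priors over the alternatives H_i,
# even with the likelihoods P(e|H_i) held fixed.
def p_e_given_not_H(priors, likelihoods):
    total = sum(priors.values())
    return sum(priors[h] * likelihoods[h] for h in priors) / total

likelihoods = {"H1": 0.8, "H2": 0.01}  # P(e | H_i)
print(p_e_given_not_H({"H1": 0.5, "H2": 0.5}, likelihoods))  # 0.405
print(p_e_given_not_H({"H1": 0.1, "H2": 0.9}, likelihoods))  # 0.089
# Same likelihoods, different priors over the alternatives: whether e counts
# for or against H (whether P(e|H) exceeds these numbers) can flip.
```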
But I still have objections to most of what you say.
You’ve given an essentially operational definition of “evidence for” in terms of operations that can’t be done.
Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional “something not on this list” hypothesis that covers everything else.
Yes. The standard way to express that is that you can’t actually work with P(Hypothesis), only P(Hypothesis | Model Space).
You can then, of course, expand your model spaces, if you find your model space is inadequate.
You appear to be thinking in terms of ad-hoc statistical techniques (“computable”,
“Computable” is hardly ad-hoc. It’s a fundamental restriction on how it is possible to reason.
we want to know what’s true and how to think,
If you want to know how to think, you had better pick a method that’s actually possible.
This really is just another facet of “all Bayesian probabilities are conditional.”
Whether a given piece of evidence counts for or against a hypothesis is in fact dependent on the priors of all other hypotheses
Yes, of course. The point is that if you’re using probability theory to actually reason, and not merely to set up a toy statistical model such as might appear in a scientific paper, you will in fact already be “considering” all possible hypotheses, not merely a small important-looking subset. Now it’s true that what you won’t be doing is enumerating every possible hypothesis on the most fine-grained level of description, and then computing the information-theoretic complexity of each one to determine its prior—since, as you point out, that’s computationally intractable. Instead, you’ll take your important-looking subset just as you would in the science paper, let’s say H1, H2, and H3, but then add to that another hypothesis H4, which represents the whole rest of hypothesis-space, or in other words “something I didn’t think of”/”my paradigm is wrong”/etc. And you have to assign a nonzero probability to H4.
Yes. The standard way to express that is that you can’t actually work with P(Hypothesis), only P(Hypothesis | Model Space).
No, see above. In science papers, "paradigm shifts" happen, and you "change your model space". Not in abstract Bayesianism. In abstract Bayesianism, low-probability events happen, and you update accordingly. The result will look similar to "changing your model space", because what happens is that when H4 turns out to be true (i.e. its probability is raised to something high), you then start to carve up the H4 region of hypothesis-space more finely and incorporate these "new" sub-hypotheses into your "important-looking subset".
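To make that bookkeeping concrete, a sketch with invented numbers, where the catch-all plays the role of H4:

```python
# Keep an explicit catch-all hypothesis in the space and update it like any other.
priors = {"H1": 0.50, "H2": 0.30, "H3": 0.19, "H4_something_else": 0.01}
likelihoods = {"H1": 0.001, "H2": 0.001, "H3": 0.001, "H4_something_else": 0.5}  # P(e|Hi)

p_e = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / p_e for h in priors}
print(posteriors)  # the catch-all ends up near 0.83
# An observation that every named hypothesis makes very unlikely shifts most of
# the mass onto the catch-all, at which point you start carving it into finer
# sub-hypotheses, as described above.
```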
To return to the issue at hand in this thread, here’s what’s going on as I see it: physicists, acting as Bayesians, have assigned very low probablity to psi being true given QFT, and they have assigned a very high probability to QFT. In so doing, they’ve already considered the possibility that psi may be consistent with QFT, and judged this possibility to be of near-negligible probability. That was done in the first step, where they said “P(psi|QFT) is small”. It doesn’t do to reply “well, their paradigm may be wrong”; yes, it may, but if you think the probability of that is higher than they do, then you have to confront their analysis. Sean Carroll’s post is a defense of the proposition that “P(psi|QFT) is small”; Jack’s comment is an assertion that “psi&QFT may be true”, which sounds like an assertion that “P(psi|QFT) is higher than Sean Carroll thinks it is”—in which case Jack would need to account somehow for Carroll being mistaken in his analysis.
“P(psi|QFT) is higher than Sean Carroll thinks it is”—in which case Jack would need to account somehow for Carroll being mistaken in his analysis.
This is basically my position. ETA: I may assign a high probability to "not all of the hypotheses that make up QFT are true", a position I believe I can hold while not disputing the experimental evidence supporting QFT (though such evidence does decrease the probability of any part of QFT being wrong).
I don’t think Carroll’s analysis comes close to showing that P(psi|QFT) is 1 in a billion. He took one case, a psychokinesis claim that no one in parapsychology endorses, and showed how it was impossible given one interpretation of what the claim might mean. We can’t look at his analysis and take it as convincing evidence that the claims of parapsychologists aren’t consistent with QFT, since Carroll doesn’t once mention any of the claims made by parapsychologists!
Now there are some studies purporting to show psychokinesis (though they are less convincing than the precognition studies and actually might just be a kind of precognition). Even in these cases no one in parapsychology thinks the perturbations are the result of EM or gravitational fields; Carroll pointing out that they can’t shouldn’t result in us updating on anything.
I actually think a physicist might be able to write a convincing case for why the claims of parapsychologists can’t be right. I think there is a good chance I don’t grasp just how inconsistent these claims are with known physics—and that is one of the reasons why fraud/methodology problems/publication bias still dominate my probability space regarding parapsychology. But Carroll hasn’t come close to writing such a case. I think the reason you think he has is that you’re not familiar with a) the actual claims of parapsychologists or b) the various but inconclusive attempts to explain parapsychology results without contradicting the experimental evidence confirming QFT.
The worked example he provides is what physics would require to exist (a new force that is somehow of at least comparable strength to electromagnetism but that has somehow never been detected by experiments so sensitive that they would detect any new force more than a billionth the strength of gravity) for telekinesis to exist at all. And there are indeed parapsychologists who claim telekinesis is worth investigating.
It is not unreasonable for Carroll, having given a worked example of applying extremely well-understood physics to the question, to expect parapsychologists to then apply extremely well-understood physics to their other questions. His point (as he states in the article) is that they keep starting from an assumption that science knows nothing relevant to the questions parapsychologists are asking, rather than starting from an assumption that known science could be used to make testable, falsifiable predictions.
He doesn’t have to do the worked example for every phenomenon that parapsychology claims is worth serious investigation to make his point valid. Ignoring the existence of relevant known science is one reason parapsychology is a pseudoscience (a partial imitation) rather than science.
I could be wrong, but I think you added to this comment since I replied. Since all of my comments on the topic are getting downvoted without explanation I’ll be short.
And there are indeed parapsychologists who claim telekinesis is worth investigating.
But not spoon bending so much. In any case, being concerned about force fields is only worthwhile if you assume what is going on is cause and effect, which many, maybe most, of the attempts at explanation don’t.
This is really getting away from what Komponisto and I were talking about. I’m not really disputing the claim that parapsychology is a pseudo-science. I’m disputing the claim that Carroll’s analysis shows that the claims of parapsychology are fundamentally ruled out by current physics. I haven’t really thought about delineation issues regarding parapsychology.
His point is that they keep starting from an assumption that science knows nothing relevant to the questions parapsychologists are asking, rather than starting from an assumption that known science could be used to make testable, falsifiable predictions.
But he gives no evidence that parapsychologists start from this assumption. Plenty of parapsychologists know that no force fields produced by the brain could be responsible for the effects they think they’ve found. That’s sort of their point, actually.
There are lots of silly people in the field who think the results imply dualism, of course—but that’s precisely why it would be nice to have materialists tackle the questions.
There are no significant results from parapsychologists who are aware of physics. Instead, we have results from parapsychologists that claim statistical significance that have obviously defective experimental design and/or (usually and) turn out to be unreplicable.
That is, you describe sophisticated parapsychologists but the prominent results are from unsophisticated ones.
This isn’t someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a similar position to the uneducated is to rule out contrarian thought before any debate occurs. Humans are still confused enough about the world that there is room for change in our current understanding of physics. There are some pretty compelling results in parapsychology, much or all of which may be due to publication bias, methodological issues or fraud. But that isn’t obviously the case, and waving our hands and throwing out these words isn’t an explanation of the results. I’m going to try and make a post on this subject a priority now.
This isn’t someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a similar position to the uneducated is to rule out contrarian thought before any debate occurs
Did you read the linked post by Sean Carroll? Parapsychologists aren’t condemned for holding a similar position to the uneducated; they’re condemned for holding a position blatantly inconsistent with quantum field theory on the strength of evidence much, much weaker than the evidence for quantum field theory. Citing a century’s worth of experimentally confirmed physical knowledge is far from hand-waving.
Humans are still confused enough about the world that there is room for change in our current understanding of physics
Again, this is explicitly addressed by Carroll. Physicists are not confused in the relevant regimes here. Strong evidence that certain highly precise models are correct has been obtained, and this constrains where we can reasonably expect future changes in our current understanding of physics.
Now, I’m not a physicist, so if I’m actually wrong about any of this, I’m willing to be corrected. But, as the saying goes, there is a time to confess ignorance, and a time to relinquish ignorance.
Physicists are not confused in the relevant regimes here.
We don’t know what the relevant regimes are here. Obviously human brains aren’t producing force fields that are bending spoons.
We have some experimental results. No one has any idea what they mean except it looks like something weird is happening. People are reacting to images they haven’t seen yet and we don’t have any good explanation for these results. Maybe it is fraud (with what motivation?), maybe there are methodological problems (but often no one can find any), maybe there is just publication bias (but it would have to be really high to explain the results in the precognition meta-analysis).
On the other hand, maybe our physics isn’t complete enough to explain what is going on. Maybe a complete understanding of consciousness would explain it. Maybe we’re in a simulation and our creators have added ad hoc rules that violate the laws of physics. Physics certainly rules out some explanations but Carroll certainly hasn’t shown that all but error/fraud/bias have been ruled out.
Btw, using spoon bending as the example and invoking Uri Geller is either ignorant or disingenuous of him (and I almost always love Sean Carroll). Parapsychologists more or less all recognize Geller as a fraud and an embarrassment and only the kookiest would claim that humans can bend spoons with their minds. Real parapsychological experiments are nothing like that.
I suspect it will be difficult to communicate why fraud, method error and publication bias are difficult explanations for me to accept if you aren’t familiar with the results of the field. I recommend Outside the Gates of Science if you haven’t read it yet.
It will actually be easy to communicate exactly what explanation there is for the events. Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years. He has had them do it perfectly methodologically soundly. Only now has he had a group that—through pure, random chance—happened to flip 53% heads and 47% tails. The number of students, the number of coins, the number of flips, all are large enough that this is an unlikely event—but he’s spent eight years trying to make it happen, and so happen it eventually has. Good for him!
The only problem with all of this is that the journals that we take to be sources of knowledge have this rule: anything more unlikely than x must have some explanation other than pure chance. This is true at first blush, but when somebody spends years trying to make pure chance spit out the result he wants, this rule fails badly. That is all that’s going on here.
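For what it’s worth, here is a toy simulation of that worry (optional stopping); the sampling scheme and numbers are mine, not a claim about Bem’s actual protocol:

```python
import random

def peeking_experimenter(max_trials=10000, check_every=100, threshold=0.53):
    """Run fair 50/50 trials, but stop and report as soon as the hit rate looks good."""
    hits = 0
    for n in range(1, max_trials + 1):
        hits += random.random() < 0.5   # a genuinely chance-level "precognition" trial
        if n % check_every == 0 and hits / n >= threshold:
            return hits / n             # stop early on a significant-looking run
    return hits / max_trials            # otherwise report the honest full-sample rate

runs = [peeking_experimenter() for _ in range(1000)]
print(sum(rate >= 0.53 for rate in runs) / 1000)
# A large fraction of purely-chance runs report 53%+ this way, whereas a fixed
# sample of 10,000 fair trials essentially never does.
```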
Right, like I said, publication bias is a possibility. But in Honorton’s precognition meta-analysis the results were strong enough that, for them not to be significant, the ratio of unpublished studies averaging null results to published studies would have to be 46:1. That seems too high for me to be comfortable attributing everything to publication bias. It is this history of results, rather than Bem’s lone study, that troubles me.
Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years.
The paper … is the culmination of eight years’ work by Daryl Bem of Cornell University in Ithaca, New York.
Volunteers were told that an erotic image was going to appear on a computer screen in one of two positions, and asked to guess in advance which position that would be. The image’s eventual position was selected at random, but volunteers guessed correctly 53.1 per cent of the time.
Why do we think this means early test groups weren’t included in the study? It just sounds like it took eight years to get the large sample size he wanted.
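One way to see why the sample size is the crux: how surprising a fixed 53.1 per cent hit rate is depends entirely on the number of trials (the trial counts below are illustrative, not Bem’s actual numbers):

```python
from math import erf, sqrt

def one_sided_p(n_trials, hit_rate=0.531, chance=0.5):
    # normal approximation to a one-sided binomial test against chance
    z = (hit_rate - chance) * sqrt(n_trials) / sqrt(chance * (1 - chance))
    return 0.5 * (1 - erf(z / sqrt(2)))

for n in (100, 1000, 5000):
    print(n, one_sided_p(n))
# roughly 0.27 at 100 trials (unremarkable), 0.025 at 1000, under 1e-5 at 5000
```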
I think that it means that early test groups weren’t included because that is the easiest way to produce the results we’re seeing.
It just sounds like it took eight years to get the large sample size he wanted.
Why eight years? Did he decide that eight years ago, before beginning to collect data? Or did he run tests until he got the data he wanted, then check how long it had taken? I am reasonably certain that if he got p-value significant results 4 years into this study, he would have stopped the tests and published a paper, saying “I took 4 years to make sure the sample size was large enough.”
Looking at the actual study, it seems to include the results of quite a few different experiments. If he either excluded early tests or continued testing until he got the results he wanted, that would obviously make the study useless, but we can’t just assume that is what happened. Yes, it is likely relative to the likelihood of psi, but since finding out what happened isn’t that hard, it seems silly just to assume.
What exactly are Eliezer’s qualifications supposed to be?
You mean, “What are Eliezer’s qualifications?” Phrasing it that way makes it sound like a rhetorical attack rather than a question.
To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.
I may have an incorrect impression, but SIAI or at least Eliezer’s department seems to have a self-image comparable to the Manhattan project rather than early pioneers of a scientific field.
Eliezer’s past remarks seem to have pointed to a self-image comparable to the Manhattan project. However, according to the new SIAI Overview:
They want to become comparable to the Manhattan project, in part by recruiting additional FAI researchers. They do not claim to be at that stage now.
Great comment.
How influential are his publications if they could not convince Ben Goertzel (SIAI/AGI researcher), someone who has read Yudkowsky’s publications and all of the LW sequences? You could argue that he and other people don’t have the smarts to grasp Yudkowsky’s arguments, but who does? Either Yudkowsky is so smart that some academics are unable to appreciate his work or there is another problem. How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?
The problem here is that Yudkowsky’s having spent a lot of time thinking and writing about something is not a qualification. Further, it does not guarantee that he would acknowledge and welcome the contributions of others who disagree.
The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn’t have to be free of people who disagree with it to be influential, and it doesn’t even have to be correct.
Level up first. I can’t evaluate physics research, so I just accept that I can’t tell which of it is correct; I don’t try to figure it out from the politics of physicists arguing with each other, because that doesn’t work.
But what does this mean regarding my support of the SIAI? Imagine I was a politician who had no time to level up first but who had to decide whether to allow some particle accelerator or AGI project to be financed at all, or to go ahead with full support and without further security measures.
Would you tell a politician to go and read the sequences, and, if after reading the publications they don’t see why AGI research is as dangerous as the SIAI portrays it, to just forget about it and stop trying to figure out what to do? Or do you simply tell them to trust a fringe group which predicts that a given particle accelerator might destroy the world when all the experts claim there is no risk?
You talked about Yudkowsky’s influential publications. I thought you meant some academic papers, not the LW sequences. They indeed influenced some people, yet I don’t think they influenced the right people.
Downvoted for this:
Your interpretation seems uncharitable. I find it unlikely that you have enough information to make a confident judgment that XiXiDu’s comment is born of motivated cognition to a greater extent than your own comments.
Moreover, I believe that even when such statements are true, one should avoid making them when possible, as they’re easily construed as personal attacks, which tend to spawn an emotional reaction in one’s conversation partners, pushing them into an “arguments as soldiers” mode that is detrimental to rational discourse.
Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you’re going wrong, you won’t improve.
On this blog, any conversational partners should definitely not be construing anything as personal attacks.
On this blog, any person should definitely be resisting this push.
I did not say that one should avoid telling people when and where they’re going wrong. I was objecting to the practice of questioning people’s motivations. For the most part I don’t think that questioning somebody’s motivations is helpful to him or her.
I disagree. Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise. Just because the blog is labeled as being devoted to the art of refining rationality doesn’t mean that the commentators are always above this sort of thing.
I agree with you insofar as I think that one should work to interpret comments charitably.
I agree, but this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.
Not questioning their motivations; you objected to the practice of pointing out motivated cognition:
Pointing out that someone hasn’t thought through the issue because they are motivated not to—this is not an attack on their motivations; it is an attack on their not having thought through the issue. Allowing people to keep their motivated cognitions out of respect for their motivations is wrong, because it doesn’t let them know that they have something wrong, and they miss a chance to improve it.
To paraphrase steven, if you’re interested in winning disputes you should dismiss personal attacks, but if you’re interested in the truth you should dig through their personal attacks for any possible actual arguments. Whether or not it’s a personal attack, you ought to construe it as if it is not, in order to maximise your chances of finding truth.
Agreed. I think the first two parts of our comments address whether one should exert such a push. I think you’re right, and this whole third part of our discussion is irrelevant.
It’s quite possible to be inaccurate about other people’s motivations, and if you are, then they will have another reason to dismiss your argument.
How do you identify motivated cognition in other people?
Not thinking something through could be habitual sloppiness, repeating what one has heard many times, or not thinking that a question is worthy of much mental energy rather than a strong desire for a particular conclusion. (Not intended as a complete list.)
Making a highly specific deduction from an absence rather than a presence strikes me as especially likely to go wrong.
Some of the same ways I see it in myself. Specifically, when dealing with others:
Opposed to easy (especially quick or instant) tests: strong evidence of motivated stopping.
All for difficult (especially currently-impossible) tests: moderate evidence of motivated continuing.
Waiting on results of specific test to reconsider or take a position: moderate evidence of motivated continuing.
Seemingly-obvious third alternative: very strong evidence of motivated stopping. Caveat! this one is problematic. It is very possible to miss third alternatives.
Opposed to plausible third alternatives: weak evidence of motivated stopping—strong evidence with a caveat and split, as “arguments as soldiers” can also produce this effect. Mild caveat on plausibility being somewhat subjective.
In the case of XiXiDu’s comment, focusing on Ben Goertzel’s rejection is an example of waiting on results from a specific test. That is enough evidence to locate the motivated continuing hypothesis¹, i.e. that XiXiDu does not want to accept the current best-or-accepted-by-the-community answer.
The questions XiXiDu posed afterwards seem to have obvious alternative answers, which suggests motivated stopping. He seems to be stopping on “Something’s fishy about Eliezer’s setup”.
¹: As well as “Goertzel is significantly ahead of the AI development curve” and “AGI research and development is a field with rigid formal rules on what does and doesn’t convince people”; the first is easily tested by looking at Ben’s other views, and the second is refuted by many researchers in that field.
I recommend explaining that sort of thing when you say someone is engaging in motivated cognition.
I think it seems more like a discussable matter then and less like an insult.
Thanks for engaging with me; I now better understand where jimrandomh might have been coming from. I fully agree with Nancy Lebovitz here.
Ben Goertzel believes in psychic phenomenon (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer’s stance.
For what it’s worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He’s also responsible for coining “Seed AI”.
Indeed, I was just trying to figure out what someone with money or power who wants to do the right thing, but who does not have the smarts, should do. Someone like a politician or billionaire who would like to support either some AGI research or the SIAI. How are they going to decide what to do if all the AGI experts tell them that there is no risk from AGI research and that the SIAI is a cult, while at the same time the SIAI tells them the AGI experts are intellectually impotent and the SIAI is humanity’s only hope of surviving the AI revolution? What should someone who lacks the expertise or smarts to evaluate those claims, but who nevertheless has to decide how to use their power, do? I believe this is not an unrealistic scenario, as many rich or powerful people want to do the right thing yet do not have the smarts to see why they should trust Yudkowsky instead of hundreds of experts.
Interesting, when did he come up with the concept of “Seed AI”? It is mentioned in Karl Schroeder’s Ventus (Tor Books, 2000), ISBN 978-0312871970.
Didn’t find the phrase “Seed AI” there. One plot element is a “resurrection seed”, which is created by an existing, mature evil AI to grow itself back together in case its main manifestation is destroyed. A Seed AI is a different concept: it’s something the pre-AI engineers put together that grows into a superhuman AI by rewriting itself to be more and more powerful. A Seed AI is specifically a method to get to AGI from not having one, not just an AI that grows from a seed-like thing. I don’t remember recursive self-improvement being mentioned with the seed in Ventus.
A precursor concept, where the initial AI bootstraps itself by merely learning things, not necessarily by rewriting its own architecture, goes all the way back to Alan Turing’s 1950 paper on machine intelligence.
Here is a quote from Ventus:
[...]
...and here’s a quote from I.J. Good, from 1965:
He didn’t coin the term “Seed AI” either.
Yes, but I believe it is a bit weird for a Wikipedia article to state that someone is the originator of the Seed AI theory when he just coined the term. I wasn’t disputing anything, just trying to figure out if it is actually the case that Yudkowsky came up with the concept in the first place.
Not the concept—the term.
“Seed AI theory” probably refers to something or another in here—which did indeed originate with Yu’El.
Presumably http://en.wikipedia.org/wiki/Seed_AI should be considered to be largely SIAI marketing material.
It is further explained that the Winds were designed to evolve on their own so they are not mere puppets of human intentions but possess their own intrinsic architecture.
In other places in the book it is explained how humans did not create their AI Gods but that they evolved themselves from seeds designed by humans.
The Winds are seed AI, in the sense provided by Yudkowsky.
ETA
Well, of course. I just tried to figure out whether Yudkowsky invented cheesecake or just some special recipe of cheesecake.
I don’t think the failure of someone to be convinced of some position is ever strong evidence against that position. But this argument here is genuinely terrible. I disagree with person x about y, therefore person x is wrong about z? Do we even have to go into why this is fallacious?
If someone is unable to examine the available evidence and come to a sane conclusion on a particular topic, this makes it less likely that they are able to examine the available evidence and come to sane conclusions on other topics.
I don’t take Goertzel seriously for the same reason I don’t take young earth creationists seriously. It’s not that I disagree with him, it’s that his beliefs have almost no connection to reality.
(If it makes you feel better, I have read some of Goertzel’s writing on AGI, and it’s stuffed full of magical thinking.)
I’d be interested to hear more about that.
From Ten Years to a Positive Singularity:
and
From The Singularity Institute’s Scary Idea (And Why I Don’t Buy It):
From Chance and Consciousness:
And pretty much all of On the Algebraic Structure of Consciousness and Evolutionary Quantum Computation.
This is all just from fifteen minutes of looking around his website. I’m amazed anyone takes him seriously.
Oh...
wow.
I think that paper alone proves your point quite nicely.
I mostly disagree with Ben, but I don’t think judging him based on that paper is fair. It’s pretty bad, but it was also written in 1996. Fourteen years is a lot of time to improve as a thinker.
I had that thought too, and I was thinking of retracting or amending my comment to that effect, but looking at some of his later publications in the same journal(?) suggests that he hasn’t leveled up much since then.
“The Futility Of Emergence” really annoys me. It’s a perfectly useful word. It’s a statement about the map rather than about the territory, but it’s a useful one. Whereas magic means “unknowable unknowns”, emergent means “known unknowns”—the stuff that we know follows, we just don’t know how.
e.g. Chemistry is an emergent property of the Schrodinger equation, but calculating anything useful from that is barely in our grasp. So we just go with the abstraction we know, and they’re separate sciences. But we do know we have that work to do.
Just linking to that essay every time someone you’re disagreeing with says “emergent” is difficult to distinguish from applause lights.
Saying the word “emergent” adds nothing. You’re right that it’s not as bad as calling something magic and declaring that it’s inherently unknowable, but it also offers zero explanatory power. To reword your example:
There is absolutely no difference in meaning when you take the word “emergent” out. That’s why it isn’t useful, which Eliezer was pointing out.
Nitpick: I don’t think that is exactly what EY was pointing out. Take a look at the comments and the general response of “Huh? Who makes that mistake?” It seems EY was complaining about the tendency of AGI researchers to use “emergence” as if it were an explanation, not ordinary use of the word that doesn’t pretend it is one but just, say, points out that the behavior is surprising given what it’s composed of, or that your current methods aren’t powerful enough to predict the consequences. He didn’t seem to have realized that particular mistake was mostly localized to AGI people.
It seems more likely that when the cited people said “intelligence is an emergent phenomenon”, they were misunderstood as proposing that as a satisfactory explanation of the phenomenon.
Nitpick accepted.
I’m not entirely sure this is correct. I wouldn’t call the trajectories of planets and galaxies “properties” of Relativity, but I would call it emergent behavior due to Relativity. It’s a stylistic and grammatical choice, like when to use “which” and when to use “that.” They may seem the same to the uninitiated, but there’s a difference and the initiated can tell when you’re doing it wrong.
So, I agree with David Gerard that trying to eradicate the use of the word is misplaced. It’d be like saying “the word ‘which’ is obsolete, we’re only going to use ‘that’ and look down on anyone still using ‘which’.” You lose far more by such a policy than you gain.
IIRC, that post was adequately dismantled in its comments.
From what I’ve seen, the people who comment here who have read Broderick’s book have come away, if not convinced psi describes some real physical phenomena, convinced that the case isn’t at all open and shut the way young earth creationism is. When an issue is such that smart, sane people can disagree, then you have to actually resolve the object-level disagreement before you can use someone’s beliefs on the issue in a general argument about their rationality. You can’t just assume it as you do here.
Yes, here WrongBot is safe to assume basic physics.
Edit for the sake of technical completeness: And biology.
Goertzel’s paper on the subject is about extending the de Broglie Bohm pilot wave theory in a way that accounts for psi while being totally consistent with all known physics. Maybe it is nonsense, I haven’t read it. But you can’t assume it is.
I disagree. I do not need to (and should not) discard my priors when evaluating claims.
It would be an error in reasoning on my part if I did not account for the low (prior to reading it) probability of a psionics theory being sane when evaluating the proponent’s other claims. For emphasis: not lowering my confidence in Goertzel’s other beliefs because he is a proponent of psi, without having read his paper, would be an outright mistake.
I also note that you defending Goertzel on the psi point is evidence against Goertzel’s beliefs regarding AI. Extremely weak evidence.
Huh?
I mean what is written in the straightforward English sense. I mention it to emphasize that all evidence counts.
Could you unpack your reasoning? Do you mean that Jack defending Goertzel on psi discredits defense of Goertzel on AI because it shows such defense to be less correlated to the validity of the opinion than previously thought? Or did you drop a negation or something and mean the opposite of what you wrote, because Jack defending Goertzel on psi is very slight evidence of Goertzel’s opinion on psi not being as crazy as you previously thought?
Ever is a strong word. If a competent expert in a field who has a known tendency to err slightly on the side of too much openness to the cutting edge fails to be convinced by a new finding within his field that says an awful lot.
That is simply not the form of the argument you quote. “Ben Goertzel believes in psychic phenomenon” can not be represented as “I disagree with person x ”.
I’m being generous and giving the original comment credit for an implicit premise. As stated, the argument is “Person x believes y, therefore person x is wrong about z.” This is so obviously wrong it makes my head hurt. WrongBot’s point is that someone has to have a poor reasoning capacity to believe in psi. But since he didn’t provide any evidence to that effect, it reduces to ‘I disagree with Goertzel about psi’.
Fair point re: “ever”.
I generally don’t try to provide evidence for every single thing I say, and I am especially lax about things that I consider to be incredibly obvious.
But I’m annoyed enough to lay out a very brief summary of why belief in PSI is ludicrous:
It isn’t permitted by known physics.
There are no suggested mechanisms (so far as I’m aware) for PSI which do not contradict proven physical laws.
The most credible studies which claim to demonstrate PSI have tiny effect sizes, and those haven’t been replicated with larger sample sizes.
Publication bias.
PSI researchers often seem to possess motivated cognition.
We’ve analyzed the functioning of individual neurons pretty closely. If there are quantum microtubules or other pseudoscientific nonsense in them, they don’t seem to affect how those individual neurons behave.
Etc.
No one has to give evidence for everything they say but when things that you thought were obviously wrong begin to get defended by physics-literate reductionist materialists that seems like a good time to lower your confidence.
Well to begin with, Goertzel’s paper claims to be such a mechanism. Have you read it? I don’t know if it works or not. Seems unwise to assume it doesn’t though.
Publication bias, motivated cognition, and effect size are all concerns, and they were my previous explanation. I found that this meta-analysis upset that view for me.
Oh man! I left out the most important objection!
If PSI exploits weird physics in a complicated manner and produces such tiny effects, where the hell did the mechanism come from? PSI would obviously be a very useful adaptation, so why don’t we see it in other species? Why aren’t the effects stronger, since there’s such a strong evolutionary pressure in favor of them?
Goertzel’s paper also includes psychokinesis as a PSI phenomenon supported by strong evidence. I would love to see the study he’s talking about for that one. Or a video.
All of this is also discussed in Outside the Gates. I can try to dig up what he said this weekend.
The experiments aren’t macroscopic. The results involve statistical deviations from the expected normal distributions of, say, white noise generators when participants try to will the results in different directions. I don’t think these results are nearly as compelling as other things; see Jahn and Dunne 2005, for example. They had some methodological issues, and the one attempt that was made at replication, while positive, wasn’t significant at anywhere near the level of the original.
If you’re actually interested you should consider checking out the book. It is a quick, inexpensive read. Put it this way: I’m not some troll who showed up here to argue about parapsychology. Six months ago I was arguing your position here with someone else and they convinced me to check out the book. I then updated significantly in the direction favoring psi (not enough to say it exists more likely than not, though). Everything you’ve said is exactly what I was saying before. It turns out that there are sound responses to a lot of the obvious objections, making the issue not nearly as clear cut as I thought.
It would be wrong if it were a logical deduction instead of an inference. That is, if WrongBot actually wrote ‘therefore’ or otherwise signaled absolute deductive certainty then he would be mistaken. As is he presents it as evidence, which it in fact is.
There is a clear implied premise ‘psychic phenomenon are well known to be bullshit’. Not all baseline premises must be supported in an argument. Instead, the argument should be considered stronger or weaker depending on how reliable the premises are. I don’t think WrongBot loses too much credibility in this case by dismissing psychic phenomenon.
It isn’t even evidence until you include a premise about the likelihood of y, which we agree is the implied premise.
I think I’m just restating the exchange I had with komponisto on this point. Goertzel’s position isn’t that of someone who doesn’t know any physics or Enlightenment-style rationality. It is clearly a contrarian position, which should be treated rather differently, since we can assume he is familiar with the reasons why psychic phenomena are ‘well known to be bullshit’. It is a fully generalizable tactic which can be used against any and all contrarian thinkers. Try “Robin Hanson thinks we should cut health care spending 50%, therefore he is less likely to be right about fertility rate.”
This is obviously going to be the case when trying to convince an individual of something. The beliefs (crackpot or otherwise) of the target audience are always going to be relevant to persuasiveness. As a comment directed in part to the wider lesswrong audience, the assumed premises will be different.
If I were a reader who thought Robin’s position on health care was as implausible as belief in magic, and who thought that making claims about the fertility rate was similar to making claims about AI strategy, then I would take this seriously. As it stands, the analogy is completely irrelevant.
The extent to which it is fallacious depends rather strongly on what y and z (and even x) are, it seems to me.
Any argument of this nature needs to include some explanation of why someone’s ability to think about y is linked to their ability to think about z. But even with that (which wasn’t included in the comment) you can only conclude that y and z imply each other. You can’t just conclude z.
In other words, you have to show Goertzel is wrong about psychic phenomenon before you can show that his belief in it is indicative of reasoning flaws elsewhere.
I don’t disagree in principle, but psychic phenomena are pretty much fundamentally ruled out by current physics. So a person’s belief in them raises serious doubts about that person’s understanding of science at the very least, if not their general rationality level.
I got the impression from Damien Broderick’s book that a lot of PSI researchers do understand physics and aren’t postulating that PSI phenomena use the sort of physical interactions gravity or radio waves use. There’s a story that Einstein was interested in PSI research, but declared it nonsense when the claimed results showed PSI effects that weren’t subject to the inverse square law, so this isn’t a new idea.
Damien Broderick’s attitude in his book is basically that there’s a bunch of anomalous observations and that neither a satisfactory explanation nor, in his opinion, a refutation for them exists. Goertzel’s attitude is to come up with a highly speculative physical theory that could explain that kind of phenomena, and which would take a bit more than “would need extra particles” to show as nonsense.
“Not understanding basic physics” doesn’t really seem to cut it in either case. “It’s been looked into by lots of people, a few of them very smart, for 80 years, and nothing conclusive has come out of it, so most likely there isn’t anything in it, and if you still want to have a go, you better start with something the smart people in 1970s didn’t have” is basically the one I’ve got.
I’m not holding my breath over the recent Bem results, since he seems to be doing pretty much the same stuff that was done in the 70s and always ended up failing one way or the other, but I’m still waiting for someone more physics-literate to have a go at Goertzel’s pilot wave paper.
“Not understanding basic physics” sounds like a harsh quasi-social criticism, like “failing at high-school material”. But that’s not exactly what’s meant here. Rather, what’s meant is more like “not being aware of how strong the evidence against psi from 20th-century physics research is”.
The Bayesian point here is that if a model M assigns a low probability to hypothesis H, then evidence in favor of M is evidence against H [EDIT: technically, this is not necessarily true, but it usually is in practice, and becomes more likely as P(H|M) approaches 0]. Hence each high-precision experiment that confirms quantum field theory counts the same as zillions of negative psi studies.
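To make the arithmetic concrete, here is a toy calculation of that point (all the numbers are invented, and it assumes the observation e bears on H only through M, i.e. P(e|M,H) = P(e|M)):

```python
# A minimal numeric sketch (my own toy numbers, not anyone's actual estimates) of the
# point above: if P(H|M) is small, then evidence that raises P(M) tends to lower P(H).
# Assumption: the observation e bears on H only through M, i.e. P(e|M,H) = P(e|M).

p_M = 0.5             # prior on the model M (e.g. "QFT as we know it is right")
p_H_given_M = 0.001   # M says H (e.g. psi) is very unlikely
p_H_given_notM = 0.2  # if M is wrong, H is much more plausible

p_e_given_M = 0.9     # e: a high-precision experiment that confirms M
p_e_given_notM = 0.1

# Prior probability of H, marginalizing over M
p_H = p_M * p_H_given_M + (1 - p_M) * p_H_given_notM

# Posterior on M after observing e (Bayes' rule)
p_e = p_M * p_e_given_M + (1 - p_M) * p_e_given_notM
p_M_post = p_M * p_e_given_M / p_e

# Posterior on H, again marginalizing over M
p_H_post = p_M_post * p_H_given_M + (1 - p_M_post) * p_H_given_notM

print(f"P(H) before e: {p_H:.4f}")       # ~0.10
print(f"P(H) after  e: {p_H_post:.4f}")  # ~0.02; evidence for M pushed H down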
Evidence distinguishes between models; it is not evidence for individual models in isolation. There may be models that are consistent with the experiments that confirm quantum field theory but also give rise to explanations for anomalous cognition.
By the Bayesian definition of evidence, “evidence for” a hypothesis (including a “model”, which is just a name for a complex conjunction of hypotheses) simply means an observation more likely to occur if the hypothesis is true than if it is false.
Carroll claims that current data implies the probability of such models being correct is near zero. So I’d like to invoke Aumann here and ask what your explanation for the disagreement is. Where is Carroll’s (and others’) mistake?
If models are just complex conjunctions of hypotheses then the evidence that confirms models will often confirm some parts of the model more than others. Thus the evidence does little to distinguish the model from a different model which incorporates slightly different hypotheses.
That is all I meant.
Yes, but this depends on what other hypotheses are considered in the “false” case.
The “false” case is the disjunction of all other possible hypotheses besides the one you’re considering.
That’s not computable. (EDIT: or even well defined). One typically works with some limited ensemble of possible hypotheses.
Explicitly, that may be the case; but at least implicitly, there is always (or at least there had better be) an additional “something not on this list” hypothesis that covers everything else.
You appear to be thinking in terms of ad-hoc statistical techniques (“computable”, “one typically works...”), rather than fundamental laws governing belief. But the latter is what we’re interested in in this context: we want to know what’s true and how to think, not what we can publish and how to write it up.
Let me put it this way: excluding a hypothesis from the model space is merely the special case of setting its prior to zero. Whether a given piece of evidence counts for or against a hypothesis is in fact dependent on the priors of all other hypotheses, even if no hypothesis goes from possible to not or vice-versa.
As this is prior dependent, there is no objective measure of whether a hypothesis is supported or rejected by evidence.
(This is obviously true when we look at P(H_i|e). It’s a bit less so when we look at P(e|H) vs. P(e|~H). This seems objective, and it is objective in the case that H and ~H are atomic hypotheses with a well-defined rule for getting P(e|~H). But if ~H is an “or” of all the other theories, then P(e|~H) depends on the prior probabilities of each of the H_i that are the subcomponents of ~H. It’s also utterly useless by itself for judging H; we want to know P(H|e) for that. P(e|H) is of course why we want P(H), so we can make useful predictions.)
It is true that in the long run much evidence will eventually dominate any prior. But summarizing this as “log odds”, for instance, is only useful for comparing two specific hypotheses, not “this hypothesis” and “everything else”.
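To illustrate the prior-dependence just described, here is a toy sketch (numbers and helper name invented) showing that whether e counts for or against H1 can flip depending on how the prior mass inside ~H1 is split among the alternatives:

```python
# Toy illustration (invented numbers) of the point above: P(e|~H1) is a prior-weighted
# average of the alternatives' likelihoods, so whether e counts "for" or "against" H1
# depends on how the prior mass inside ~H1 is distributed.

def p_e_given_not_h1(prior_H2, prior_H3, like_H2, like_H3):
    total = prior_H2 + prior_H3
    return (prior_H2 * like_H2 + prior_H3 * like_H3) / total

like_H1, like_H2, like_H3 = 0.5, 0.9, 0.1  # P(e | each hypothesis)

# Case A: the prior mass in ~H1 mostly sits on H2
case_a = p_e_given_not_h1(prior_H2=0.45, prior_H3=0.05, like_H2=like_H2, like_H3=like_H3)
# Case B: the prior mass in ~H1 mostly sits on H3
case_b = p_e_given_not_h1(prior_H2=0.05, prior_H3=0.45, like_H2=like_H2, like_H3=like_H3)

print(f"P(e|~H1) in case A: {case_a:.2f}  (> P(e|H1)=0.5, so e counts against H1)")
print(f"P(e|~H1) in case B: {case_b:.2f}  (< P(e|H1)=0.5, so e counts for H1)")
```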
But I still have objections to most of what you say.
You’ve given an essentially operational definition of “evidence for” in terms of operations that can’t be done.
Yes. The standard way to express that is that you can’t actually work with P(Hypothesis), only P(Hypothesis | Model Space).
You can then, of course expand your model spaces, if you find your model space is inadequate.
“Computable” is hardly ad-hoc. It’s a fundamental restriction on how it is possible to reason.
If you want to know how to think, you had better pick a method that’s actually possible.
This really is just another facet of “all Bayesian probabilities are conditional.”
And you shouldn’t do that.
Yes, of course. The point is that if you’re using probability theory to actually reason, and not merely to set up a toy statistical model such as might appear in a scientific paper, you will in fact already be “considering” all possible hypotheses, not merely a small important-looking subset. Now it’s true that what you won’t be doing is enumerating every possible hypothesis on the most fine-grained level of description, and then computing the information-theoretic complexity of each one to determine its prior—since, as you point out, that’s computationally intractable. Instead, you’ll take your important-looking subset just as you would in the science paper, let’s say H1, H2, and H3, but then add to that another hypothesis H4, which represents the whole rest of hypothesis-space, or in other words “something I didn’t think of”/”my paradigm is wrong”/etc. And you have to assign a nonzero probability to H4.
No, see above. In science papers, “paradigm shifts” happen, and you “change your model space”. Not in abstract Bayesianism. In abstract Bayesianism, low-probability events happen, and you update accordingly. The result will look similar to “changing your model space”, because what happens is that when H4 turns out to be true (i.e. its probability is raised to something high), you then start to carve up the H4 region of hypothesis-space more finely and incorporate these “new” sub-hypotheses into your “important-looking subset”.
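As a minimal sketch of that catch-all process (all the numbers, and the deliberately vague likelihood I give H4, are my own invention rather than anyone’s actual estimates):

```python
# A minimal sketch (all numbers invented) of the "catch-all hypothesis" idea above:
# three concrete hypotheses plus H4 = "something I didn't think of", which gets a small
# prior and a deliberately vague likelihood. If the data keep looking improbable under
# H1-H3, the posterior mass flows into H4.

priors = {"H1": 0.4, "H2": 0.3, "H3": 0.25, "H4": 0.05}
# Probability each hypothesis assigns to the kind of observation we keep getting:
likelihoods = {"H1": 0.01, "H2": 0.02, "H3": 0.01, "H4": 0.25}

posterior = dict(priors)
for _ in range(3):  # observe the same kind of surprising result three times
    norm = sum(posterior[h] * likelihoods[h] for h in posterior)
    posterior = {h: posterior[h] * likelihoods[h] / norm for h in posterior}

for h, p in posterior.items():
    print(f"{h}: prior {priors[h]:.2f} -> posterior {p:.3f}")  # H4 ends up dominant
```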
To return to the issue at hand in this thread, here’s what’s going on as I see it: physicists, acting as Bayesians, have assigned very low probablity to psi being true given QFT, and they have assigned a very high probability to QFT. In so doing, they’ve already considered the possibility that psi may be consistent with QFT, and judged this possibility to be of near-negligible probability. That was done in the first step, where they said “P(psi|QFT) is small”. It doesn’t do to reply “well, their paradigm may be wrong”; yes, it may, but if you think the probability of that is higher than they do, then you have to confront their analysis. Sean Carroll’s post is a defense of the proposition that “P(psi|QFT) is small”; Jack’s comment is an assertion that “psi&QFT may be true”, which sounds like an assertion that “P(psi|QFT) is higher than Sean Carroll thinks it is”—in which case Jack would need to account somehow for Carroll being mistaken in his analysis.
This is basically my position. ETA: I may assign a high probability to “not all of the hypotheses that make up QFT are true”, a position I believe I can hold while not disputing the experimental evidence supporting QFT (though such evidence does decrease the probability of any part of QFT being wrong).
I don’t think Carroll’s analysis comes close to showing that P(psi|QFT) is 1 in a billion. He took one case, a psychokinesis claim that no one in parapsychology endorses, and showed how it was impossible given one interpretation of what the claim might mean. We can’t look at his analysis and take it as convincing evidence that the claims of parapsychologists aren’t consistent with QFT, since Carroll doesn’t once mention any of the claims made by parapsychologists!
Now there are some studies purporting to show psychokinesis (though they are less convincing than the precognition studies and actually might just be a kind of precognition). Even in these cases no one in parapsychology thinks the perturbations are the result of EM or gravitational fields; Carroll pointing out that they can’t shouldn’t result in us updating on anything.
I actually think a physicist might be able to write a convincing case for why the claims of parapsychologists can’t be right. I think there is a good chance I don’t grasp just how inconsistent these claims are with known physics—and that is one of the reasons why fraud/methodology problems/publication bias still dominate my probability space regarding parapsychology. But Carroll hasn’t come close to writing such a case. I think the reason you think he has is that you’re not familiar with a) the actual claims of parapsychologists or b) the various but inconclusive attempts to explain parapsychology results without contradicting the experimental evidence confirming QFT.
The worked example he provides is what physics would require to exist (a new force that is somehow of at least comparable strength to electromagnetism but that has somehow never been detected by experiments so sensitive that they would detect any new force more than a billionth the strength of gravity) for telekinesis to exist at all. And there are indeed parapsychologists who claim telekinesis is worth investigating.
It is not unreasonable for Carroll, having given a worked example of applying extremely well-understood physics to the question, to expect parapsychologists to then apply extremely well-understood physics to their other questions. His point (as he states in the article) is that they keep starting from an assumption that science knows nothing relevant to the questions parapsychologists are asking, rather than starting from an assumption that known science could be used to make testable, falsifiable predictions.
He doesn’t have to do the worked example for every phenomenon that parapsychology claims is worth serious investigation to make his point valid. Ignoring the existence of relevant known science is one reason parapsychology is a pseudoscience (a partial imitation) rather than science.
I could be wrong, but I think you added to this comment since I replied. Since all of my comments on the topic are getting downvoted without explanation I’ll be short.
But not spoon bending so much. In any case, being concerned about force fields is only worthwhile if you assume that what is going on is cause and effect, which many, maybe most, of the attempted explanations don’t.
This is really getting away from what Komponisto and I were talking about. I’m not really disputing the claim that parapsychology is a pseudo-science. I’m disputing the claim that Carroll’s analysis shows that the claims of parapsychology are fundamentally ruled out by current physics. I haven’t really thought about delineation issues regarding parapsychology.
But he gives no evidence that parapsychologists start from this assumption. Plenty of parapsychologists know that no force fields produced by the brain could be responsible for the effects they think they’ve found. That’s sort of their point, actually.
There are lots of silly people in the field who think the results imply dualism, of course, but that’s precisely why it would be nice to have materialists tackle the questions.
There are no significant results from parapsychologists who are aware of physics. Instead, we have results from parapsychologists that claim statistical significance that have obviously defective experimental design and/or (usually and) turn out to be unreplicable.
That is, you describe sophisticated parapsychologists but the prominent results are from unsophisticated ones.
Cite?
ETA: Bem, for example, whose study initiated this discussion has a BA and did graduate work in physics.
This isn’t someone with tarot cards talking about using crystal energy to talk to your dead grandparent. To condemn someone for holding a similar position to the uneducated is to rule out contrarian thought before any debate occurs. Humans are still confused enough about the world that there is room for change in our current understanding of physics. There are some pretty compelling results in parapsychology, much or all of which may be due to publication bias, methodological issues, or fraud. But that isn’t obviously the case, and waving our hands and throwing out these words isn’t an explanation of the results. I’m going to try to make a post on this subject a priority now.
Did you read the linked post by Sean Carroll? Parapsychologists aren’t condemned for holding a similar position to the uneducated; they’re condemned for holding a position blatantly inconsistent with quantum field theory on the strength of evidence much, much weaker than the evidence for quantum field theory. Citing a century’s worth of experimentally confirmed physical knowledge is far from hand-waving.
Again, this is explicitly addressed by Carroll. Physicists are not confused in the relevant regimes here. Strong evidence that certain highly precise models are correct has been obtained, and this constrains where we can reasonably expect future changes in our current understanding of physics.
Now, I’m not a physicist, so if I’m actually wrong about any of this, I’m willing to be corrected. But, as the saying goes, there is a time to confess ignorance, and a time to relinquish ignorance.
We don’t know what the relevant regimes are here. Obviously human brains aren’t producing force fields that are bending spoons.
We have some experimental results. No one has any idea what they mean except it looks like something weird is happening. People are reacting to images they haven’t seen yet and we don’t have any good explanation for these results. Maybe it is fraud (with what motivation?), maybe there are methodological problems (but often no one can find any), maybe there is just publication bias (but it would have to be really high to explain the results in the precognition meta-analysis).
On the other hand, maybe our physics isn’t complete enough to explain what is going on. Maybe a complete understanding of consciousness would explain it. Maybe we’re in a simulation and our creators have added ad hoc rules that violate the laws of physics. Physics certainly rules out some explanations but Carroll certainly hasn’t shown that all but error/fraud/bias have been ruled out.
Btw, using spoon bending as the example and invoking Uri Geller is either ignorant or disingenuous of him (and I almost always love Sean Carroll). Parapsychologists more or less all recognize Geller as a fraud and an embarrassment and only the kookiest would claim that humans can bend spoons with their minds. Real parapsychological experiments are nothing like that.
I suspect it will be difficult to communicate why fraud, method error and publication bias are difficult explanations for me to accept if you aren’t familiar with the results of the field. I recommend Outside the Gates of Science if you haven’t read it yet.
It will actually be easy to communicate exactly what explanation there is for the events. Bem has effectively been getting a group of students to flip a bunch of coins for the last eight years. He has had them do it perfectly methodologically soundly. Only now has he had a group that—through pure, random chance—happened to flip 53% heads and 47% tails. The number of students, the number of coins, the number of flips, all are large enough that this is an unlikely event—but he’s spent eight years trying to make it happen, and so happen it eventually has. Good for him!
The only problem with all of this is that the journals that we take to be sources of knowledge have this rule: anything more unlikely than x must have some explanation other than pure chance. This is true at first blush, but when somebody spends years trying to make pure chance spit out the result he wants, this rule fails badly. That is all that’s going on here.
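A quick simulation of that worry (my own sketch of optional stopping in general, not a model of Bem’s actual protocol; the helper name and batch sizes are just for illustration):

```python
# If you test for significance after every new batch and stop as soon as p < .05, the
# false-positive rate climbs well above the nominal 5% even though every coin is fair.
# This is a generic optional-stopping sketch, not a claim about what Bem actually did.

import random
from statistics import NormalDist

def optional_stopping_trial(batches=20, batch_size=100, alpha=0.05):
    """Flip fair coins in batches, checking a two-sided z-test after each batch."""
    heads = flips = 0
    for _ in range(batches):
        heads += sum(random.random() < 0.5 for _ in range(batch_size))
        flips += batch_size
        z = (heads - flips / 2) / (0.5 * flips ** 0.5)
        p = 2 * (1 - NormalDist().cdf(abs(z)))
        if p < alpha:
            return True   # "significant" result found; stop and publish
    return False

random.seed(0)
trials = 1000
hits = sum(optional_stopping_trial() for _ in range(trials))
print(f"Nominal alpha: 0.05, actual false-positive rate: {hits / trials:.2f}")
```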
Right, like I said, publication bias is a possibility. But in Honorton’s precognition meta-analysis the results were strong enough that, for them not to be significant, the ratio of unpublished studies averaging null results to published studies would have to be 46:1. That seems too high for me to be comfortable attributing everything to publication bias. It is this history of results, rather than Bem’s lone study, that troubles me.
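For what it’s worth, here is a sketch of how file-drawer ratios of this kind are commonly computed, using Rosenthal’s fail-safe N; whether this is exactly the method behind the 46:1 figure is my assumption, and the study count and z-scores below are invented:

```python
# Rosenthal's fail-safe N: how many unpublished null-result studies would be needed to
# drag the Stouffer combined z-score below the one-tailed .05 threshold (z = 1.645).
# The inputs here are invented, not Honorton's actual numbers.

def fail_safe_n(z_scores, z_crit=1.645):
    k = len(z_scores)
    total_z = sum(z_scores)
    return (total_z / z_crit) ** 2 - k

published = [2.0] * 30  # invented example: 30 published studies averaging z = 2.0
x = fail_safe_n(published)
print(f"Unpublished null studies required: {x:.0f}")
print(f"Ratio of unpublished to published: {x / len(published):.0f}:1")
```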
What evidence is there for this?
From here,
Why do we think this means early test groups weren’t included in the study? It just sounds like it took eight years to get the large sample size he wanted.
I think that it means that early test groups weren’t included because that is the easiest way to produce the results we’re seeing.
Why eight years? Did he decide that eight years ago, before beginning to collect data? Or did he run tests until he got the data he wanted, then check how long it had taken? I am reasonably certain that if he got p-value significant results 4 years into this study, he would have stopped the tests and published a paper, saying “I took 4 years to make sure the sample size was large enough.”
Looking at the actual study, it seems to include the results of quite a few different experiments. If he either excluded early tests or continued testing until he got the results he wanted, that would obviously make the study useless, but we can’t just assume that is what happened. Yes, it is likely relative to the likelihood of psi, but since finding out what happened isn’t that hard, it seems silly just to assume.