Feed the spinoff heuristic!
Follow-up to:
Parapsychology: the control group for science
Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields
Recent renewed discussions of the parapsychology literature and Daryl Bem’s precognition article brought to mind the “market test” of claims of precognition. Bem tells us that random undergraduate students were able to predict with 53% accuracy where an erotic image would appear in the future. If this effect were actually real, I would rerun the experiment before corporate earnings announcements, central bank interest rate changes, etc., and determine where the images appear based on the subsequent reaction of stocks and bonds to the announcements. In other words, I could easily convert “porn precognition” into “hedge fund trillionaire precognition.”
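To see how exploitable even a 53% hit rate would be, here is a minimal back-of-the-envelope sketch, assuming (my simplification, not anything Bem claims) that each subject’s guess is an independent coin flip that lands on the correct side 53% of the time. Aggregating a pool of subjects by majority vote then pushes the accuracy of each directional call toward certainty:

```python
from math import comb

# Chance that a simple majority of n independent guessers, each correct with
# probability p, calls a binary outcome (e.g. "market up vs. down") correctly.
# Assumes independence across subjects, which is an illustrative simplification.
def majority_accuracy(n, p=0.53):
    # use odd n so there are no ties; sum P(k correct) over k > n/2
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

for n in (101, 501, 1001):
    print(f"{n} subjects -> majority correct with probability {majority_accuracy(n):.3f}")
```

Even a modest panel of undergraduates, rerun before each announcement and aggregated this way, would give near-certain directional calls if the effect were real.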
If I were initially lacking the capital to do trades, I could publish my predictions online using public key cryptography and amass an impressive track record before recruiting investors. If anti-psi prejudice were a problem, no one would need to know how I was making my predictions. Similar setups could exploit other effects claimed in the parapsychology literature (e.g. the remote viewing of the Scientologist-founded Stargate Project of the U.S. federal government). Those who assign a lot of credence to psi may want to actually try this, but for me this is an invitation to use parapsychology as a control group for science, and to ponder a general heuristic for crudely estimating the soundness of academic fields for outsiders.
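For the publish-predictions-before-the-fact step, a hash-based commit-and-reveal scheme is one possible sketch (my own illustration, not a complete protocol: a real version would also need trusted timestamps and a public, append-only channel so observers can check that no contradictory commitments were posted, per an objection raised in the comments below):

```python
import hashlib
import secrets

def commit(prediction: str) -> tuple[str, str]:
    """Return (digest to publish now, salted prediction to keep private)."""
    salted = secrets.token_hex(16) + "|" + prediction
    return hashlib.sha256(salted.encode()).hexdigest(), salted

def verify(digest: str, salted: str) -> bool:
    """After the event, reveal the salted prediction so anyone can check it."""
    return hashlib.sha256(salted.encode()).hexdigest() == digest

# hypothetical example prediction, for illustration only
digest, salted = commit("2011-02-15 FOMC: S&P 500 closes up on the day")
print(digest)                  # post this publicly before the announcement
print(verify(digest, salted))  # True once the salted prediction is revealed
```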
One reason we trust that physicists and chemists have some understanding of their subjects is that they produce valuable technological spinoffs with concrete and measurable economic benefit. In practice, I often make use of the spinoff heuristic: If an unfamiliar field has the sort of knowledge it claims, what commercial spinoffs and concrete results ought it to be producing? Do such spinoffs exist? If not, what explains their absence?
For psychology, I might cite systematic desensitization of specific phobias such as fear of spiders, cognitive-behavioral therapy, and military use of IQ tests (with large measurable changes in accident rates, training costs, etc). In financial economics, I would raise the hundreds of billions of dollars invested in index funds, founded in response to academic research, and their outperformance relative to managed funds. Auction theory powers tens of billions of dollars of wireless spectrum auctions, not to mention evil dollar-auction sites.
This seems like a great task for crowdsourcing: the cloud of LessWrongers has broad knowledge, and sorting real science from cargo cult science is core to being Less Wrong. So I ask you, Less Wrongers, for your examples of practical spinoffs (or suspicious absences thereof) of sometimes-denigrated fields in the comments. Macroeconomics, personality psychology, physical anthropology, education research, gene-association studies, nutrition research, wherever you have knowledge to share.
ETA: This academic claims to be trying to use the Bem methods to predict roulette wheels, and to have passed statistical significance tests on his first runs. Such claims have been made for casinos in the past, but always trailed away in failures to replicate, repeat, or make actual money. I expect the same to happen here.
If psychology worked, I would expect marketing firms to use it to make millions of people buy tons of shit that they don’t need and that won’t make them happy.
Is there any evidence, one way or the other, as to whether marketers draw useful info from academic psychology?
More cheap evidence: marketing textbooks are stuffed full of mainstream psychological results and applications to the business of marketing.
Cheap evidence: Hacker News is full of people trying to get rich by selling something (usually access to web applications), and e.g. “Predictably Irrational” has been mentioned. The marketing guru Seth Godin says he’s been influenced quite a bit by Poundstone’s “Priceless”, which apparently “dives into the latest psychological findings”.
Of course, this is only informal evidence, only shows “some marketers” and only shows “believed to be useful”.
Waveman’s comment also seems relevant.
Indeed. Case study of Freud’s nephew who basically invented modern PR.
http://en.wikipedia.org/wiki/Edward_Bernays
Don’t they?
Yes they do. That was my intended meaning. :)
I believe marketers do use psychology and many, if not most, Americans do buy “tons of shit that they don’t need and that won’t make them happy!”
I believe Luke intended this to be understood =)
Well, I think to a large extent marketing firms rely on their own know-how, which I imagine is rather scientific. I have first-hand experience with this (I am selling a computer game through Steam). Various statistics are used to see what does better. Their marketing people are really good at, e.g., picking the most-clickable banner design versus the one that I thought would be the most clickable (I did my own stats and confirmed their choice).
Shorter version of OP’s argument.
If personality psychology holds water, I would expect dating sites to use it and produce better results than Traditional Romance. Does it? From the outside looking in, it looks like it does.
It would also be useful in selecting dorm room compatibility, which I can tell you from the inside looking out does not work at all or isn’t being used. I wouldn’t expect it to be used in this context, though. No money in it.
Contrary evidence:
I don’t know if any of the dating sites they reviewed use a system similar to OkCupid’s (users answer questions, pick how they want matches to answer those questions, and say how important each question is to them), but I don’t think OkCupid was included in that study. The author wrote that the matching algorithms of the companies they reviewed are proprietary and were not shared with the researchers, but OkCupid’s matching algorithm is publicly available.
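For what it’s worth, the publicly described OkCupid scheme is roughly the following, reconstructed from memory; the importance weights below are illustrative and may not match OkCupid’s exact published values:

```python
# Rough sketch of OkCupid-style matching, reconstructed from memory: the exact
# importance weights are illustrative assumptions, not OkCupid's authoritative values.
IMPORTANCE_POINTS = {"irrelevant": 0, "a little": 1, "somewhat": 10, "very": 50, "mandatory": 250}

def satisfaction(asker_prefs, answerer_answers):
    """Fraction of importance-weighted points the answerer earns on the asker's questions."""
    earned = possible = 0
    for question, (acceptable_answers, importance) in asker_prefs.items():
        points = IMPORTANCE_POINTS[importance]
        possible += points
        if answerer_answers.get(question) in acceptable_answers:
            earned += points
    return earned / possible if possible else 0.0

def match_percentage(a_prefs, a_answers, b_prefs, b_answers):
    # geometric mean of the two one-sided satisfaction scores
    return (satisfaction(a_prefs, b_answers) * satisfaction(b_prefs, a_answers)) ** 0.5
```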
That’s a rather strong claim. Matching people up completely at random can work in principle.
Perhaps by “work” they meant “do better than letting people choose solely based on reading a short essay and seeing a picture,” although that sounds difficult to make precise. Maybe just “do better than random.” We might have to wait until they publish.
Again, it’s the “even in principle” I was objecting to. Picking people at random can in principle do better than letting people choose solely based on reading a short essay and seeing a picture. And uniformly random algorithm A can in principle do better than uniformly random algorithm B.
Saying something isn’t possible “even in principle” specifically means that it cannot happen in any logically possible world—that’s the entire difference between saying “even in principle” and leaving it out. It can’t even accidentally win.
This week’s issue of The Economist has a summary of the scientific evidence behind the popular Internet dating websites.
I don’t think OKCupid contains a good way of tracking long-term romantic success once a relationship escapes from the site, but it certainly has the data to correlate any one of several personality metrics with length of correspondence, which strikes me as a half-decent proxy: there’s a huge library of personality tests on the site, including some well-known ones like the MBTI and the Big 5. OKTrends has almost certainly touched on this before, although you’d probably have to apply a lot of logical glue yourself to get a theory to stick together properly.
OKC’s primary metric, however, relies on self-selected answers to a large pool of crowdsourced questions. If there’s been any academic research done in that exact space I’m not aware of it, but it wouldn’t be too much of a stretch to view correlations between match metrics and actual romantic success as answering the question “how well do people know their own romantic preferences?”—or conversely to see academic answers to that question as informing OKC’s methodology.
Here’s an interesting thing: some people have managed to acquire a lot of wealth via trading. That would lead you to believe their claims that the methods they use are effective.
However, if you simulate the stock market with identically skilled agents, you obtain basically the same wealth distribution as observed in the real world, with a few agents ending up extremely ‘rich’. One can imagine that such agents, if they were people, would rationalize their undeserved wealth to feel better about themselves.
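A toy version of that simulation, with made-up parameters of my own choosing: identically “skilled” agents each stake a fixed fraction of their wealth on a fair coin flip every round, and the resulting wealth distribution still ends up with a handful of spectacular “winners”:

```python
import random

def simulate(num_agents=10_000, rounds=200, stake=0.1, seed=0):
    """Identically skilled agents repeatedly bet a fixed fraction of wealth on fair coin flips."""
    rng = random.Random(seed)
    wealth = [1.0] * num_agents
    for _ in range(rounds):
        wealth = [w * (1 + stake if rng.random() < 0.5 else 1 - stake) for w in wealth]
    return sorted(wealth, reverse=True)

w = simulate()
print("top 5 fortunes:", [round(x, 1) for x in w[:5]])
print("median fortune:", round(w[len(w) // 2], 3))
```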
The problem of the silent cemetery (survivorship bias?): if we start with a large enough cohort of “equally skilled” traders who just make their investments at random, we will still end up with a handful of “old foxes” left standing purely because of their luck, while the failed ones lie silently in the cemetery and nobody asks them.
Of course the lucky ones’ skill will be rationalized (the narrative fallacy, in Taleb’s terms), and not just by themselves but by the majority around them, the media, etc.
I would add to this that having a method would often (but not always) produce a clear “leader in the field” (first-mover advantage going to the discoverer). So Google’s share of the market is a strong indicator (even without first-hand knowledge) that “they have a serious advantage in search”, whereas the existence of many competing diet companies does not tell me “they figured out nutrition”.
Good point, but to nitpick Google wasn’t a first-mover in search, it defeated AltaVista and other search competitors based on superior performance. They were a first-mover with PageRank, though.
Yes, thanks for the clarification; that’s what I meant by first mover: first relative to “the thing that gives them a lot of power”.
Two issues with this heuristic:
1) It doesn’t work well for credence goods.
2) Sometimes it takes a long time for sciences to find an application; two modern examples are astrophysics and particle physics.
(2) is a useful point, but doesn’t generalize fully. To take your own examples, if some theories in astrophysics and particle physics were extremely well supported by the standards of physics, then the lack of spinoffs would not undermine them very much. If the theories are well supported, then they’ve made lots of novel predictions that have been verified. That a particular spinoff works is just evidence that a particular novel prediction is verified.
Today, the many spinoffs of physics in general can lend support to branches that haven’t produced spinoffs yet. But what about the first developments in physics? How soon after Newton’s laws were published did anyone use them for anything practical? Or how long did it take for early results in electromagnetics (say, the Coulomb attraction law) to produce anything beyond parlor tricks? I don’t know the answers here, and if there were highly successful mathematical engineers right on Newton’s heels, I’d be fascinated to hear about it, but there very well may not have been.
Of course, theory always has to precede spinoffs; it would make no sense to reject a paper from a journal due to lack of spinoffs. To use the heuristic, we need some idea of how long is a reasonable time to produce spinoffs. If there is such a “spinoff time,” it probably varies with era, so fifty years might have been a reasonable delay between theory and spinoff in the seventeenth century but not in the twenty-first.
Tetlock’s political judgment study was a test for macroeconomics, political science, and history. Yet people with PhDs in these areas did no better at predicting macro political and economic events than those without any PhD. Maybe macro helps in producing good econometric models, but it doesn’t help in making informal predictions. (Whereas one suspects that a physics or chemistry PhD would help in a test of quick predictions about a novel physical or chemical system, compared with people without a PhD in those fields.)
Another analogy is that having a PhD in the relevant sciences doesn’t help you play sports.
In some sports, applied science seems important to improving expert performance. The PhD knowledge is used to guide the sportsperson (who has exceptional physical abilities). Likewise, our skill at making reliably sturdy buildings has dramatically improved due to knowledge of physics and materials science. But the PhDs don’t actually put the buildings up, they just tell the builders what to do.
I can’t find the references now, but I have seen several stories about sports (specifically, some football teams in Australia) using psychology and other scientific knowledge (and improving because of it).
Well, some disciplines are a bit too hard for humans to actually reason about (such as predicting complex interactions of many people), so the demand for something that looks like science results in a supply of pseudoscience. That was the case for medicine throughout history until relatively recently: very strong demand for solutions, a lack of any genuine solutions, resulting in a situation where fraud and self-deception were the best effort available.
With economics, perhaps an extremely intelligent individual may be able to make interesting predictions, but an individual only as intelligent as most traders can’t predict anything interesting. “Political science” is an altogether non-scientific discipline that calls itself a science, and is thus even worse than garden-variety pre-science, which is at least scientific enough to see how unscientific it is.
History would only help predictively if the agents (politicians, etc.) were really unaware of history and if little had changed since the closest precedent, which isn’t at all the case.
Re: your examples of successful spin-offs for psychology: to what extent did these therapies come out of well-established theory? Maybe someone can weigh in here. It seems possible that these are good therapies but ones that don’t have a strong basis in theory (in contrast to technologies from physics or chemistry).
While cognitive-behavioral therapy could in some ways be characterized as an offshoot of the philosophy known as Stoicism (which oddly seems to have “lucked into” quite a set of effective beliefs, especially when compared to most other philosophies) rather than an offshoot of psychology, the psychological research process and psychological theory as a whole have definitely acted to inform and refine CBT.
I was looking for someone to specify a well supported psychological theory that predicts that CBT should be effective. What’s the theory, and what’s the evidence that people believed it before CBT came along?
I also think Shulman’s example of IQ is different from the physics/chemistry case. It was discovered that scores on a short IQ test predicted long-term job performance on a range of tasks. Organizations that used IQ in hiring were then able to obtain better long-term job performance. But IQ was not something that was predicted from a model of how the brain or mind works. Even now, a century after the development of IQ tests, I’m not sure we have a good bottom up account of why a few little reasoning questions can be as informative about human cognitive performance as IQ seems to be. (Not saying that IQ gives you all the information you want, but a few short questions provide a surprising amount of information).
The issue here is that the theory that predicts that CBT should be effective is called “Stoicism” and has been around for a long while prior to the concept of a psychological research process.
If you are looking for a therapy or action that arose from psychological theory directly, I would recommend looking into the treatment of PTSD (not even recognized as a treatable condition until the 1970s) or something—CBT has been informed and refined by the research process, but its underpinnings existed prior to the research process itself.
Physicist Ilya Prigogine developed his famous theory of dissipative systems, which was expected to explain a lot of things, from the thermodynamics of living systems to the nature of the arrow of time. It is a very well-developed and deep theory. Yet in my scientific life I have never seen an actual numerical calculation of a measurable quantity utilizing any of Prigogine’s concepts, such as the “rate of entropy production”. That looks definitely like a missing spinoff!
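For readers who haven’t met the term, the “rate of entropy production” is the quantity σ in the standard non-equilibrium decomposition of the entropy balance (a textbook form, not anything specific to Prigogine’s later theory):

```latex
\frac{dS}{dt} = \frac{d_e S}{dt} + \frac{d_i S}{dt},
\qquad
\sigma \equiv \frac{d_i S}{dt} = \sum_k J_k X_k \ge 0
```

Here d_eS/dt is the entropy exchanged with the surroundings, and the internal production σ is a bilinear sum over conjugate fluxes J_k and thermodynamic forces X_k; the complaint above is that one rarely sees σ actually computed for a concrete engineering problem.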
People do use thermodynamics. Are you in a position to say whether Prigogine’s work is ever relevant to professional chemical engineers?
That’s the point: what people use is normal equilibrium or close-to-equilibrium thermodynamics. Even in situations that seem far out of the scope of equilibrium thermodynamics and where one would normally expect Prigogine physics to be the perfect candidate—one example being CVD or VLS growth of various nanotubes/nanowires/etc. - I have never seen the latter applied. Everybody just goes with good old (near-)equilibrium chemical thermodynamics. Now this might be just a manifestation of Maslow’s hammer, and Prigogine physics is hard, but for what it’s worth, here’s one example of a big hole that should be covered by the theory but is, in fact, not.
Computer vision is suspiciously lacking in practical spinoffs, even though people have been studying it for 40 years.
This is no longer true.
Really? Flashy stuff like Word Lens is rare, but stuff like more prosaic OCR, increasingly automated consumer photography, and face-recognizing CCTV seems to be economically effective.
I certainly agree that there are unsolved difficulties in computer vision with probably profitable solutions which may be Hard Problems, though.
I really like this. It emphasizes the fundamentally instrumental nature of rationality.
This is a nitpick, but this protocol is at least underspecified. Aside from the need to prove that you made the predictions before the events, you also need to be able to prove that you made no other predictions before the event.
(I’ve always wondered why no pump-and-dump scammers use this: after ten “buy/short” mails, 1/1024 of your mailing list will have received 10/10 correct predictions from you, and another 10/1024 will have received 9/10 correct predictions. That should be enough to convince quite a few to buy up some penny stock, with the scammer taking the other, profitable side of the trade. In the spirit of this post, it’s probably not profitable enough. Or spammers are stupid.)
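For concreteness, the arithmetic behind those fractions (using a hypothetical list size of my own choosing):

```python
from math import comb

# Split the mailing list so that every possible sequence of ten binary
# "buy/short" calls is sent to an equal share of recipients.
recipients = 1_000_000          # hypothetical list size
rounds = 10
perfect = recipients / 2**rounds                        # received 10/10 correct calls
nine_of_ten = recipients * comb(rounds, 9) / 2**rounds  # received exactly 9/10 correct

print(f"{perfect:.0f} recipients saw a perfect record")
print(f"{nine_of_ten:.0f} recipients saw 9 out of 10 correct")
```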
They used to, in the days of snail mail, and that scam became one of the common examples of selection bias and related issues because it’s so nifty. Why don’t they do it with email? Probably the difficulty of getting through, as has been pointed out.
I wonder if scammers know that you can still send snail mail.
Getting someone to receive 11 mails in a row is hard, because of immune responses to spam. Getting someone to actually read those mails is hard, for a similar reason. Needing a large number of recipients and using stock-related terminology both make it harder. And then, even if you got through defenses and actually convinced people that you could correctly predict stock prices, most of them still wouldn’t do anything about it.
Thanks for the stamper link, I was hoping something like that existed.
The latter could be helped by some stamping service that would allow you to include your name in the stamping request, with some publicly available provision for finding out how many requests someone made in a time period. If Carl actually attached “Carl Shulman” to the request, and to no others, and we had independent reason to believe that was his True Name, we could assume he wasn’t running the 10^1024 scam.
Typo.
You’re looking for people smart enough to understand the scam and dumb enough to fall for it. That seems much less profitable than existing scams.
Right, the thought would be to do this in public fashion, so that recipients can search for other results to see you hadn’t posted others.
It seems to me that there are two different heuristics here and it is worth separating them.
But first I should explain why my initial reading of this post suggests heuristics that I think are problematic. The mere existence of CBT does not seem like strong evidence for psychology. It is no more evidence for modern mainstream psychology than Freudian psychoanalysis is evidence for Freudian psychology. As I understand it, CBT is gaining market share against other forms of talk therapy, but largely because of academic authority, roughly the same way that the other therapies got established. I am a fan of CBT because its proponents claim to do experiments distinguishing its efficacy from that of other talk therapies, and failing to distinguish other talk therapies from talking to untrained people (which is still useful). But why do I need CBT for that? I can check that mainstream psychologists are more enthusiastic about experiments than Freudian ones without resorting to the particular case of CBT. Similarly, competing nutritional theories are successful in the marketplace, sold both by large organizations with advertising budgets (Weight Watchers vs Atkins) and by personal trainers working by word of mouth. But I agree that the example of CBT sheds light on psychology.
One heuristic is that experiments with every-day comprehensible goals are more useful for evaluating a field than experiments of technical claims. Most obviously, it is easier to evaluate the value of the knowledge demonstrated by such experiments than technical knowledge. Knowing that statins lower cholesterol is only useful if I trust the medical consensus on cholesterol, but knowing that they lower all-cause mortality is inherently valuable (though if the population of the experiment was chosen using cholesterol, this is also evidence that the doctors are correct about cholesterol). Similarly, the efficacy of CBT shows that psychologists know useful things, and not just trivia about what people do in weird situations. Moreover, I suspect that such experiments are more reliable than technical experiments. In particular, I suspect that they are less vulnerable to publication bias and data-mining. Certainly, I have to learn about technical measures to determine how vulnerable technical experiments are to experimenter bias.
The other heuristic is that selling a theory to someone else is a good sign. Unfortunately, this seems to me of limited value because people buy a lot of nonsense, not just competing psychological and nutritional theories, but also horoscopes. How does the military differ from academic psychologists? I’m sure it hires a lot of them. They do much larger and longer experiments than academics. They do more comprehensive experiments, with better measures of success, analogous to the advantage of all-cause mortality over number of heart attacks (let alone cholesterol). They could eliminate publication bias because they know all the studies they’re doing, but only if the people in charge understand this issue; and there still is some kind of bias in the kind of studies they let me read. These are all useful advantages, but in the end it does not look very different to me than the academic psychology we’re trying to evaluate. Similarly, industry consumes a lot of biological and chemical research, which is evidence that the research is, as a whole, real, but it fails to publish attempts to replicate, so the information is indirect. On the other hand, these industries, like the military, use the knowledge internally, which is better evidence than commercial CBT and nutrition, which try to sell the knowledge directly, and mainly demonstrate the value of academic credentials to selling knowledge.
Right, my examples were selected for a) presence of spinoffs, and b) evidence that the spinoffs were substantive. E.g. I excluded psychic hotlines and Freudian analysis.
I have been unable to find any practical spinoffs of gender studies.
That seems to me to be something which, if they can produce correct results, would be used to prevent things from going wrong in a public fashion, or by private consultants… somewhat like how you don’t expect much in the way of spinoffs from criminal justice studies, except for specialists (i.e. lawyers).
There’s this great XKCD that totally makes this exact same point, except with more lolz.
If stock market economics worked, Nobel Prize winners would make money.
(This is slightly unfair, since the Black-Merton-Scholes theory does make other people money, to an extent. Additionally, while Merton and Scholes were on the board, LTCM was not strictly based on their theories. Still, surprised to see that this hasn’t been mentioned.)
Did you steal this from XKCD?
Related: the generative heuristic.
This doesn’t just assume that the effect is reproducible, it assumes that the effect generalizes to things other than erotic images. Considering that erotic imagery gets special treatment in our brain’s processes that finance does not, this seems like a dubious assumption even given the premise that the effect is real.
No it doesn’t?
The idea is to (say) show an erotic image on the right if the stock goes up, and one on the left if the stock goes down. It’s still porn precognition, except the “randomness” source is the stock market rather than whatever they used in the original experiment.
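In code, the proposed variant is just a change of randomness source (a sketch; the function name and inputs are mine):

```python
# Hypothetical sketch: the target side is decided by the market move, not a lab RNG.
def target_side(prior_close: float, post_announcement_close: float) -> str:
    """Side on which the erotic image 'would have appeared', per the proposed variant."""
    return "right" if post_announcement_close > prior_close else "left"

# Record each subject's left/right guess before the announcement, then score it
# against target_side() afterwards; precognition, if real, becomes a market forecast.
```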
Ah, you’re right, I misinterpreted that.
It does still assume though that the effect allows one to predict better than chance where the image will appear regardless of the process that determines the location.
Suppose humans had some sort of telepathy that allowed them to read the state of the computer on some subconscious level and thereby predict the location where the image would appear, if the location were determined by the computer that was displaying the images. Predicting corporate earning announcements, interest rate changes, etc. would be an entirely different matter.
Not if psi is capricious, and the evidence suggests it is. (I say this to emphasize that psi is singular in this respect; your heuristic might work for other fields.) (ETA: I guess macroeconomics has similar problems.)
(ETA2: Think about it from the simulation hypothesis perspective: you’re trying to manipulate the gods into doing something for you. You’re dealing with transhumanly intelligent agents. It’s likely not a good idea to try to be clever.)
Name me some parapsychologists who believe that, preferably ones who score highly on your other quality measures. Bem and Broderick and Radin and Goertzel and such claim that psi stuff is replicable, and don’t claim that it would bend over backwards to avoid doing anything useful.
Evidence for the capriciousness of X is also evidence against X existing.
Too lazy. If you check out the references of these papers you might find various examples. I trust Kennedy and thus trust who he trusts.
By the way, have you tested your psi abilities? If so what were the results?
Why?
I have had no spooky experiences, and can’t predict RPS or dice better than chance over moderately-sized datasets. Have you had psi-experiences, or positive results in some kind of self-experiment?
He was involved in calling out some fraud going on where he worked, he’s honest about what motivated him to get involved in psi research (various personal experiences), he understands the statistics well enough to know the weaknesses of meta-analyses and the necessity of having powerful methods, he’s pointed out various methodological problems with psi research as it’s usually practiced, he doesn’t try to hide weird results or pretend that weird results are the ones that the experiment was intended to find, he recognizes that most claimed psi experiences can be explained away by purely mundane factors, with a few exceptions he’s very careful to pay attention to all reasonable hypotheses about possible mechanisms for psi given the limited and ambiguous evidence, et cetera.
Nor worse than chance, I presume? I’d figure you a goat after all.
I haven’t done any rigorous self-experimentation as I’m superstitious and am mildly freaked out about the idea that reality actively corroborates whatever inductive biases you happen to have. Rationality is hard enough in a non-agentic world. Truth would seem to be about having terms in your utility function pertaining to cooperation with other agents, so if the information I get doesn’t help me cooperate with others then I don’t see any grounds for me to trust it or for me to go out and find it. Yay anti-epistemology. That’s a rationalization; I’m not entirely sure why I’m afraid of rigorous self-tests.
You can have other Bay Area LessWrongers watch or help set up the experiments. That will at least help in cooperation with this community.
Good point, but to some extent that might defeat the purpose. Since my model is that psi is evasive I expect that the more people I clue in to the results or even the existence of the experiments, the less likely it is I’ll get significant or sensible results. And with the retrocausal effects demonstrated by PEAR and so on, if I ever intend to publicize the results in the future then that itself is enough to cause psi to get evasive. Kennedy actually recommends keeping self-experimentation to oneself and precommitting to telling no one about the results for these reasons. So basically even if you get incredibly strong results you’re left with a bunch of incommunicable evidence. Meh.
I have various responses ready for our other conversation by the way, which I’d like to get back to soon. I was finally able to get a solid twenty-two hours of sleep. My fluid intelligence basically stops existing when sleep-deprived.
This reminds me of the story of the poker player who concluded it was unlucky to track his winnings and losses because whenever he did it, he lost way more than he expected to.
http://lesswrong.com/lw/20y/rationality_quotes_april_2010/1ugy
Thanks for the link! (I think I saw it first in Rational Decisions, since I hadn’t upvoted that quote before.)
Seems plausible that his observations were correct if he had a small sample size, even if his judgment about what to do given those observations was not. (I say this only because the default reaction of “what an impossibly idiotic person” might deserve a slight buffer when, as casual readers, we don’t know many actual details of the case in question, what with filtered or fictional evidence and whatnot.)
Sorry for butting in, but don’t you find it strangely convenient that your psi effect is defined just so as to move it outside the domain of scientific inquiry? Do you anticipate ever finding a way to reliably distinguish it from random chance, or do you anticipate forming another excuse, ahem, reason why you should have expected from the start that the way you just tried would not reliably show it? I’d claim you’re chasing invisible dragons, but I find it hard to believe that you haven’t thought of the comparison yourself, which leaves me confused. What does an effect look like that is real but cannot be distinguished from random chance by any reliable method? How would you extract utility from such an effect? And is it worth it to break your tools of inquiry, which otherwise work very well, just so you can end up believing in an effect that is true but useless? Food for thought.
I am aware of this. I would have to be incredibly stupid not to be aware of it.
I can reliably distinguish it from random chance, but by hypothesis I just can’t tell you about it. I can get evidence, just not communicable evidence.
I think maybe every time I post about evasive psi I should include a standard disclaimer along the lines of “Yes, I realize how incredibly dodgy this sounds and I also find it rather frustrating, but bringing it up and harping on it never leads anywhere.”
How about trying to leave a line of retreat and imagine what the world would be like if the theory Will is proposing were correct?
(E.g., imagine a transhumanly intelligent agent who only hangs out with you when it knows that no one will believe that it hung out with you. This means that when it hangs out with you it can do arbitrarily magical things, but you’ll never be able to tell anyone about it, because the agent went out of its way to keep that from happening, and it’s freakin’ transhumanly intelligent so you know that any apparent chance of convincing others of its visit is probably not actually a chance. Is this theory improbable? Absolutely. But supposing that the agent actually does hang out with you and does arbitrarily magical stuff, you don’t have any way of convincing others that the theory is a posteriori probable, and you’ll probably just end up making a fool out of yourself if you try, as the agent predicted.
I think a problem might be when people think of psi they think ‘ability to shoot fireballs’ rather than ‘convincing superintelligences to act on your behalf’ (note that that’s just one possible mechanism of many and we shouldn’t privilege any hypotheses yet). If people thought they were dealing with intelligent agents then they’d use the parts of their brain designed for dealing with agents, and those parts are pretty good at what they do. Note we only want to use those parts because, at least in my opinion, psi as a relatively passive phenomenon seems to be a falsified hypothesis, or at the very least it doesn’t explain a ton of things that seem just as real as passive psi phenomena.)
Oh, you mean Bill Murray.
That’s my point, I don’t expect to be able to make consistently differing observations! If his theory is correct, we still wouldn’t be able to reliably exploit that feature.
I’m not saying it’s wrong, I’m saying even if it’s right it’s useless to believe.
I mean if there is some form of reliable Psi I’ll have a party because that’d be awesome.
I think you should look more closely at the arguments I made above: my hypothesis makes testable predictions, but if verified the evidence isn’t reliably communicable to other people. By my hypothesis psi is perhaps “exploitable” but I cringe at the thought of trying to “exploit” a little-understood agentic process in the case that it actually exists.
Why?
A safety heuristic. Just say no to demons, for the same reason you should say no to drugs until you figure out what they are, what they do, and the intentions of the agent offering them to you.
Does Kennedy recommend a specific type of self-experimentation? What’s the best way to test one’s psi-abilities in your opinion?
I don’t remember if he has any specific recommendations. I don’t know what the best way to test one’s abilities would be, but the REG (random event generator) paradigm seems highly conducive to rigorous and thorough experimentation. Alas, I forget what the literature says about pseudo-random generators. I can’t in good faith recommend psi experiments; on the one hand if psi is for real then we’re probably doing it all the time without realizing it (which is I think the typical Eastern perspective), on the other hand it seems like a generally bad idea to go out of one’s way to play around with a little-understood perhaps-agentic process. Playing with Thor seems significantly dumber than playing with fire.
I must ask, since you are a known troll, do you really believe in psychic fucking powers, or are you just testing LW’s ability to distinguish sanity by your comments’ karma?
My understanding is that he thinks the LW consensus underestimates the likelihood of psychic powers.
I do believe something weird akin to “psychic fucking powers” is going on.
Normally it is clear when I am or am not trolling. The vast majority of my contributions to Less Wrong have positive karma for a reason.
Oh, glub.
Obviously not.
Karma doesn’t mean that.
Beg to differ with both.