I’m not sure what the opinions of folks at SIAI have to do with this (merely mentioning them doesn’t constitute a valid argument; I’m not at SIAI, and I was speaking seriously), but I can recall a quote from Eliezer expressing a sentiment pretty close to my second one:
“What happened to me personally is only anecdotal evidence,” Harry explained to her. “It doesn’t carry the same weight as a replicated, peer-reviewed journal article about a controlled study with random assignment, many subjects, large effect sizes and strong statistical significance.”
-- HP:MOR, Ch.6
But that’s not very relevant. What’s relevant is this: if I believed in the efficacy of the DNB task, it would be wrong for me to change my opinion substantially after reading your comment on LW. 13 people, 1 week, methodology and results unpublished and unverified?
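To put rough numbers on why 13 people over one week is weak evidence, here is a back-of-the-envelope power calculation. All the specific numbers (per-group sizes, effect size) are hypothetical, since the actual experiment’s design is unpublished:

```python
# Normal-approximation power for a two-sample comparison: how likely a
# study with n participants per group is to detect a true effect of size d.
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance cutoff
    noncentrality = d * (n_per_group / 2) ** 0.5  # expected signal in z units
    return NormalDist().cdf(noncentrality - z_crit)

# Even generously assuming 13 *per group* and a medium effect (d = 0.5),
# power is only about 25%: a real effect would be missed 3 times out of 4.
print(round(approx_power(0.5, 13), 2))
```

With 13 participants in total rather than per group, the situation is worse still.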
The opinion of folks at Singularity Institute is VERY relevant when discussing the methodology of studies done at Singularity Institute. They laughed at the thought of going through the motions of collecting the extra evidence and doing the extra rituals with statistics to impress journal editors. They did the Bayesian version instead.
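For concreteness, here is one toy version of what such a Bayesian calculation might look like, with entirely made-up numbers (the actual SIAI data are unpublished): a likelihood ratio comparing a point hypothesis that training helps against a null of no effect.

```python
# Toy Bayes factor for binomial data: k of n participants improved.
# p_alt is a hypothetical "training works" improvement rate; p_null is chance.
from math import comb

def bayes_factor(k, n, p_alt=0.7, p_null=0.5):
    likelihood = lambda p: comb(n, k) * p**k * (1 - p) ** (n - k)
    return likelihood(p_alt) / likelihood(p_null)

# With hypothetical results of 9 improvers out of 13, the Bayes factor comes
# out under 3 -- "barely worth mentioning" on Jeffreys' scale.
print(round(bayes_factor(9, 13), 2))
```

The point either way: at n = 13, even a clean-looking result moves a reasonable prior only slightly.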
I actually just found this study in the past hour. Their conclusion: Brain Training software doesn’t work. But you get better at the games!
They cover Lumosity too, which is funny because I was just looking into them myself. I was a bit concerned when I tried to look up the evidence for how “scientifically proven” Lumosity is (since they claim it ALL OVER THEIR SITE), and I later realized that the extent of their published findings was two conference posters that aren’t available anywhere online and were accepted to conferences I’ve never even heard of.
I think I’m gonna go with the researchers from Nature and Singularity Institute on this one.
If you want an enormous, controlled, statistically-significant study that’s been published in a high-quality, peer reviewed journal, check out this study of brain training software in Nature.
The finding is surprising.
The training experience I have most appreciated is that of pushing my brain toward a state of flow: releasing any stress or rumination, and constantly letting go of attachment to the frustration of failure while also not being frustrated by the fact that I may be frustrated about failure. This overlaps strongly with the process involved in some forms of meditation, and it is certainly the kind of thing I would expect to have a generalized benefit, albeit not necessarily an improvement on tests of general intelligence. The format of a game with a score to be maximised engages my rather strong competitive instincts, and so is rather more motivating than the abstract thought “I should do meditation because meditation is good for me”.
The abstract is too vague for me to tell whether their studies relate directly to the kind of training I am interested in. I would expect not, since my interest is in things that are rather hard to test! My curiosity is not quite sufficient for me to bypass the feeling of disgust and frustration at the paywall and round up the rest of the document.
This seems to miss the point cousin_it was making.