But almost all IQ suppression happens in childhood, and I don’t know of “single health care interventions” which would raise the IQ of an adult from 80 to 120.
Anything that moves someone from level-10 pain to pain-free probably has that effect. If I put a 120-IQ person in level-10 pain, I doubt they would score over 80 on an IQ test.
It is enormous, and would completely dwarf the 20Bn price, which is, what, a rounding error in the US federal budget?
The Mnemosyne data has been lying around for years without anyone analysing it to find a more effective algorithm for human learning.
Paying some quant who actually knows something about statistical modelling to take on the task is relatively cheap. Probably less than $100,000 for a result that could matter significantly for improving general cognition.
That’s an example of a very obvious area in which to invest money if you care about cognitive enhancement. As a result, I don’t think we live in a world where a lot of people who are seriously trying to advance cognitive enhancement understand the landscape well enough to direct funds to obvious areas. I think it’s even worse if you look at non-obvious but potentially good ideas that cost a bit of money.
I do approve of CFAR but we don’t live in a world where they have billions of dollars.
The Mnemosyne data has been lying around for years without anyone analysing it to find a more effective algorithm for human learning.
Personally, I don’t expect much from the data. From reading through scores of papers comparing minute differences in spacing and getting contradictory results and small improvements, I get the impression that once you’ve moved from massed to spacing (almost any kind of spacing), you’ve gotten the overwhelming majority of the benefits, and the rest is basically frippery which needs a lot of domain expertise to improve upon. I understand Peter hasn’t looked at the Mnemosyne data much either because it didn’t indicate to him that the fancier SuperMemo algorithms were much help.
But I could be wrong. I haven’t worked with the Mnemosyne data very much beyond looking at correlating scores with hour of day and day of week; I’ve been waiting for the data to import to SQL to work with the whole dataset.
(So far I’m up to 81%… I’m hopeful that the 1TB SSD I just ordered will help speed things up a lot, and then I can host the SQL on Amazon S3 or something for anyone who is interested; a quick estimate is that it’ll cost me ~$5-10 a month or $60-120 a year to host, but I figure that I can solicit some donations to help cover it. If nothing else, it’ll save Peter a lot of time and effort in uploading the raw logs for each person who asks him. EDIT: the SSD sped things up even more than I expected: processing time goes from months to ~25 hours. So I deleted it and am fetching a fresher dataset to process & distribute.)
Personally, I don’t expect much from the data. From reading through scores of papers comparing minute differences in spacing and getting contradictory results and small improvements, I get the impression that once you’ve moved from massed to spacing (almost any kind of spacing), you’ve gotten the overwhelming majority of the benefits, and the rest is basically frippery which needs a lot of domain expertise to improve upon. I understand Peter hasn’t looked at the Mnemosyne data much either because it didn’t indicate to him that the fancier SuperMemo algorithms were much help.
What do you think are the prospects of an SRS that uses a forgetting curve specific to the individual, relying on past performance? Has this been tried or considered?
You can already modify the forgetting curve yourself in most SRS via a constant, based on your needs. Unless an automatic algorithm anchors on your personal best past performance, I’d expect such an algorithm to produce a continuous decay of performance for most individuals. I think Anki already modifies the intervals of individual cards automatically based on your past performance, i.e. the experienced difficulty and instances of forgetting. New cards are not affected by past performance, as far as I know.
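(To make the idea concrete, here’s a minimal sketch of fitting a per-user forgetting curve from review logs. This is mine, not anything Anki or Mnemosyne actually does; it assumes the logs have been reduced to hypothetical (interval_days, recalled) pairs and a simple exponential model R(t) = exp(-t/S).)

    import math

    def fit_stability(history):
        """Estimate memory stability S in R(t) = exp(-t/S) by maximum
        likelihood over (interval_days, recalled) pairs, via grid search."""
        best_S, best_ll = 1.0, float("-inf")
        for S in (x * 0.5 for x in range(1, 200)):   # candidate S: 0.5..99.5 days
            ll = 0.0
            for t, recalled in history:
                p = min(max(math.exp(-t / S), 1e-9), 1 - 1e-9)
                ll += math.log(p) if recalled else math.log(1 - p)
            if ll > best_ll:
                best_S, best_ll = S, ll
        return best_S

    # Hypothetical single-user log: (days since last review, recalled?)
    history = [(1, True), (3, True), (7, True), (10, True), (14, False), (21, False)]
    S = fit_stability(history)
    # Schedule the next review where predicted recall falls to 90%:
    print(S, -S * math.log(0.9))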
You need to specify which parts are being modified by an SRS system: each card has an easiness parameter, and that will be continuously modified based on your performance, but I don’t think existing SRS systems like Anki or Mnemosyne will modify other parts of the curve, like the exponent. For example, SM-2’s algorithm updates the easiness as EF + (0.1 - (5-q)*(0.08 + (5-q)*0.02)): the EF will be progressively updated, but the formula itself never changes, even if 0.1 is not ideal and 0.15 would be better or something.
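(For reference, that update as runnable code; the clamp at 1.3 is part of the published SM-2 description, the rest is exactly the formula above.)

    def update_easiness(ef, q):
        """SM-2 easiness update. ef: current easiness factor,
        q: recall quality 0-5. The constants 0.1, 0.08, 0.02 are the
        frozen parts of the formula that never get re-fitted."""
        ef = ef + (0.1 - (5 - q) * (0.08 + (5 - q) * 0.02))
        return max(ef, 1.3)   # SM-2 never lets easiness drop below 1.3

    ef = 2.5                  # SM-2's starting easiness
    for q in (5, 4, 3):       # three reviews of decreasing quality
        ef = update_easiness(ef, q)
        print(q, round(ef, 2))   # -> 2.6, 2.6, 2.46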
Both Peter and Damien think that the further SuperMemo algorithms provide no benefit.
As far as I know, they don’t make that judgement based on data, but because they have a feeling that the algorithm isn’t better.
Piotr Wozniak, who actually did run the data, claims:
Below you will find a general outline of the seventh major formulation of the repetition spacing algorithm used in SuperMemo. It is referred to as Algorithm SM-11 since it was first implemented in SuperMemo 11.0 (SuperMemo 2002). Although the increase in complexity of Algorithm SM-11 as compared with its predecessors (e.g. Algorithm SM-6) is incomparably greater than the expected benefit for the user, there is a substantial theoretical and practical evidence that the increase in the speed of learning resulting from the upgrade may fall into the range from 30 to 50%.
I don’t think it’s certain that Piotr is right. On the other hand, if he is right, that’s on a scale that matters a great deal.
If you are better at estimating when a card will be forgotten, you are also nearer to the point where you can do deliberate practice that might make you better at learning.
The second issue is daily variation in memory performance. I’m not sure, but I think there might be days when the brain doesn’t work well at storing memories. If you answer 200 cards on such a day and they get scheduled into the future, and you then get 20 of the first 30 of those cards wrong when they come up again, it would make sense to reschedule the remaining 170 cards to a time closer to the present.
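(A sketch of that reschedule heuristic, with made-up data structures, not anything an existing SRS implements:)

    def maybe_reschedule(cards_from_day, sample_size=30, failure_threshold=0.5,
                         shrink=0.5):
        """cards_from_day: cards whose intervals were set on one suspect day,
        in the order they come up for review again. If the first sample_size
        re-reviewed cards fail too often, pull the remaining intervals closer."""
        reviewed = [c for c in cards_from_day if c["reviewed"]][:sample_size]
        if len(reviewed) < sample_size:
            return                                       # not enough evidence yet
        failures = sum(not c["recalled"] for c in reviewed)
        if failures / sample_size >= failure_threshold:  # e.g. 20 of 30 wrong
            for c in cards_from_day:
                if not c["reviewed"]:
                    c["interval_days"] *= shrink         # halve remaining intervals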
We do have practical issues that the present algorithm doesn’t handle well. You can’t tell the present algorithm that you really want to know all the facts in a deck on a particular date, when you sit an exam.
Having a stable mathematical theory that can predict when a card will be forgotten could help towards that end.
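(Under the same simple exponential model as above, the exam-date requirement reduces to solving exp(-(exam - review)/S) ≥ target for the latest acceptable review day; a sketch, with hypothetical per-card stability as input:)

    import math

    def last_review_day(exam_day, stability_days, target_recall=0.95):
        """Latest day (today = 0) to review a card so that predicted recall
        exp(-(exam_day - review_day) / S) still meets target on exam day."""
        max_gap = -stability_days * math.log(target_recall)
        return max(0.0, exam_day - max_gap)

    # Card with 20-day stability, exam in 14 days, 95% target: review ~day 13.
    print(last_review_day(exam_day=14, stability_days=20))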
You might also think about the kind of tools psychologists currently use to measure a trait like unconscious racism: words or images get flashed for short durations. You might measure unconscious racism the same way, by testing people’s long-term memory for the ability to remember related information.
If you have both the tool of flashing images and a tool that measures the effect of unconscious racism on long-term memory, you can start asking questions such as: “Which unconscious-racism metric changes first, and which lags behind?”
The Mnemosyne data doesn’t allow us to answer that question, but it can provide a foundation on which to build the mathematical theories of long-term memory that would help you run that experiment.
It can be the basis for learning things about the way the human mind works that you can’t get by gathering 50 participants and putting them into an fMRI while you ask them questions.
Scientific progress often comes from progress in underlying tools and frameworks.
Piotr Wozniak, who actually did run the data, claims:
I’m not sure what data he has run; skimming that page doesn’t help much. I know he has no dataset comparable to the Mnemosyne dataset because I sent him my initial results a few months ago and he told me so, so it can’t be based on that.
At present he has SuperMemo Online, and that should provide an interesting dataset. But I don’t think he had that dataset at the time he wrote those lines.
I think Piotr worked a lot with his own data. But he also writes:
The increase in the speed of the convergence was achieved by employing actual approximation data obtained from students who used SuperMemo 6 and/or SuperMemo 7
Algorithm SM-8 is constantly being perfected in successive releases of SuperMemo, esp. to account for newly collected repetition data, convergence data, input parameters, etc.
He also described it in his thesis in a bit of detail.
32 test subjects does not compare to the Mnemosyne dataset, but it does provide plenty of data for testing algorithms, and there might be enough data to decide that SM-8 is significantly better than SM-2.
Oh please. And how is that relevant to this discussion?
The fact that you haven’t thought of trivial examples showing that the theory you believe in is wrong illustrates an error in your underlying model of what IQ is.
There are huge gains available through physical changes and through removing obstacles that keep mental functioning down.
I think you’re confused between IQ, learning, and memorization. These three are all different things.
No, I’m not. I don’t expect a person who’s bad at allocating resources to advance memorization to be good at allocating resources to improve IQ.
Secondly, if you want to improve IQ, then it’s useful to have a good model of how human cognition works. Having people with good mathematical skills analyse the massive pile of data behind spaced-repetition learning not only provides the practical benefit of better spaced repetition; it also tells us something about human cognition.
If you want an example of how inefficiently smart programmers use their nervous systems, look at those programmers who have back pain because they tense up the wrong muscles at the wrong time. From a big-picture view, it’s incredibly stupid to tense up muscles in a way that makes your back hurt.
But it happens, because those smart people have nearly no awareness of, or control over, what their nervous system is doing.
I see more and more examples of people behaving very far from the optimum because of a lack of knowledge or skills.
Anything that moves someone from level-10 pain to pain-free probably has that effect. If I put a 120-IQ person in level-10 pain, I doubt they would score over 80 on an IQ test.
Is there anything remotely like that going on in a non-trivial fraction of the population? The closest I can think of is sleep deprivation, but I’d be very surprised if the effect of non-extreme cases of it is more than 10 IQ points, let alone 40.
Purely from introspection, I would bet that sleep deprivation costs me less than 10 points of IQ-test performance but the equivalent of much more than 10 IQ points on actual effectiveness in getting anything done.
My introspection gave similar results about sleep deprivation and mental performance before I tried Anki. Now that I’ve actually measured my performance with the software, I know it can be as low as 50% of my peak performance, measured in latency of recall, when I’m even slightly (2-3 hours) sleep-deprived. Of course, Anki measures memory, not IQ.
This experience made me update significantly in the direction that my introspection, at least in the case of mental performance, sucks. My social life has suffered as a result of this realization.
Of course I’m speaking with hindsight here, but it doesn’t seem at all surprising that you could be as much as 2x slower at some mental tasks when sleep-deprived. I’d expect that to translate to less than a 10-point loss in IQ-like tests—maybe that’s unrealistic?
How has your social life suffered?
I keep a pretty strict sleep schedule and drink very little alcohol. Almost any nightly socializing is gone, and there used to be a lot of that. I see this as a net positive though.
Ah, I understand. Thanks.
You’ve doubtless thought about this already, but I’ll say it anyway just in case: Your happiness and net effectiveness (at whatever you seek to do) may be affected as much by your network of friends and associates as by your own mental performance. Trading in social life for increased sharpness may be far from an unambiguous win.
My scores at stuff like Lumosity and Quantified Mind tend to always be great in the morning (a couple hours after I wake up), but unless I slept enough the previous night they suck balls in the early afternoon; they get better again in the evening if I take a nap in the afternoon. My self-perceived wakefulness and willpower levels vary similarly, but several hours earlier (i.e. I underestimate my mental performance right before lunch and overestimate it right after my afternoon nap).
Stanley Coren put some numbers on the effect of sleep deprivation upon IQ test scores.
There’s a more detailed meta-analysis of multiple studies, splitting it by types of mental attribute, here: A Meta-Analysis of the Impact of Short-Term Sleep Deprivation on Cognitive Variables, by Lim and Dinges.
I’ve finally managed to upload the data; see https://groups.google.com/d/msg/mnemosyne-proj-users/tPHlkTFVX_4/oF61BF44iQkJ
OTOH, Lumosity data has been studied.
Sort of. The data isn’t open. There are studies that they published based on the data, but it’s hard to know how much they cherry-picked the studies they decided to publish.
There’s a huge commercial incentive to make Lumosity training appear better than it actually is.
Spaced-repetition system data has other advantages. I don’t really care whether I get better at the task of completing a random Lumosity game.
On the other hand, getting better at remembering any fact that can be displayed via Mnemosyne or Anki is valuable. I would also expect performance at Anki to correlate with performance at other learning tasks.
SRS data gives you a variable that tells you how good you are at saving information to your long-term memory, and it gives you information about how good you are at accessing information from long-term memory.
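(One crude way to operationalize those two variables from review logs; both the log format and the metrics here are my own assumptions:)

    def srs_metrics(reviews):
        """reviews: dicts with hypothetical keys 'first_review' (bool),
        'recalled' (bool), 'latency_s' (float).
        Returns (storage, retrieval): recall rate on first reviews as a
        proxy for how well facts get stored, and mean latency on successful
        recalls as a proxy for how quickly they can be accessed."""
        first = [r for r in reviews if r["first_review"]]
        storage = sum(r["recalled"] for r in first) / len(first)
        hits = [r["latency_s"] for r in reviews if r["recalled"]]
        retrieval = sum(hits) / len(hits)
        return storage, retrieval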
To recap what we mean when we say IQ: IQ is about where your g-value stands relative to other members of your population. What’s that g-value?
If you take a bunch of different cognitive tests from different cognitive domains that don’t depend on “knowledge” and run them through principal component analysis, the first factor that you get is g.
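(A toy version of that procedure, with random stand-in data rather than real test scores:)

    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_tests = 500, 6
    g_true = rng.normal(size=n_people)              # latent ability
    loadings = rng.uniform(0.5, 0.9, size=n_tests)  # every test loads on g
    scores = np.outer(g_true, loadings) + rng.normal(scale=0.5,
                                                     size=(n_people, n_tests))

    # PCA: standardize, then take the top eigenvector of the correlation matrix.
    z = (scores - scores.mean(0)) / scores.std(0)
    _, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
    g_est = z @ eigvecs[:, -1]                      # first principal component
    # Correlates strongly with the latent ability (the sign of a PC is arbitrary):
    print(abs(np.corrcoef(g_est, g_true)[0, 1]))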
The problem with a regular IQ test is that it takes an hour to complete and that hour doesn’t provide additional benefits.
If I spend an hour a day with Anki, I don’t do it to determine my cognitive performance; I do it to learn. That means there’s a possibility of getting a good cognitive score for free.
Another problem with regular IQ tests is that you can train to get better at a particular IQ test. You score better at the task but the test focuses on specific skills that don’t generalize to other skills.
Having the large pile of SRS data should allow us to correct for the training effect if we want to. We might also find that we don’t even want to correct for it, because the effect generalizes to other domains. SRS has the advantage that we always get new cards with new information that we want to learn, and that diversity might increase the generalizability of SRS training effects.
At the moment I have an Anki deck whose point is learning the sounds of the IPA. I have an Anki deck I use to learn to distinguish colors better. I have a deck for French vocabulary. I have decks for biochemistry.
Should Anki usage provide training effects in all those domains, I think there’s a good chance that this will also increase g.