I know we’ve talked about a very similar study before (it looks like this is a different group with a different set of monkeys), but as always: N! N! N!
I think the approach the authors take is basically worthless. You would have trouble reliably detecting an effect the size of smoking (hazard ratio ~2 for humans) with only 40 experimental subjects and 46 controls, and I haven’t seen a good estimate of what the hazard ratio for CR / IF should be. It’s almost certainly not 0.5, and I would be surprised if it were even as strong as, say, 0.8. The Bayesian thing to do would be to report “we think the hazard ratio is 1.2, but our 5th percentile is X and 95th percentile is Y” (or, ideally, the whole likelihood function).
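To put a number on the sample-size complaint, here is a quick sketch using Schoenfeld’s approximation for the number of deaths a two-arm log-rank test needs to detect a given hazard ratio (my own back-of-the-envelope calculation, not anything from the paper):

```python
import math
from statistics import NormalDist

def events_needed(hr, alpha=0.05, power=0.80):
    """Schoenfeld's approximation: deaths required for a two-sided
    log-rank test with 1:1 allocation to detect hazard ratio `hr`."""
    z = NormalDist().inv_cdf
    return 4 * (z(1 - alpha / 2) + z(power)) ** 2 / math.log(hr) ** 2

print(math.ceil(events_needed(2.0)))  # smoking-sized effect: ~66 deaths
print(math.ceil(events_needed(1.2)))  # a subtler CR-sized effect: ~945 deaths
```

With only 86 monkeys in total, even the smoking-sized effect is marginal, and anything subtler is hopeless.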
As well, we’re mostly looking at monkeys that have died at or before the median age. (Slightly less than half of the young monkeys are currently alive.) Supposing CR completely eliminated the risk of cancer, at the cost of increasing deaths due to accidents, what would the mortality curves look like? Early on, the CR monkeys would look worse, as they died of more accidents, until every control monkey was eaten by the Gompertz curve and the CR monkeys continued on as a pure exponential.
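The crossover in that thought experiment can be sketched numerically. The hazard parameters below are made-up illustrative numbers, not fit to any monkey data: controls get a Gompertz hazard a·e^(bt), while CR monkeys get a flat hazard c that starts out higher (accidents) but never accelerates (no cancer):

```python
import math

# Hypothetical hazards, purely illustrative:
# controls: Gompertz hazard a*exp(b*t); CR: constant hazard c.
a, b, c = 0.02, 0.15, 0.05

def surv_control(t):
    # Gompertz survival: S(t) = exp(-(a/b)*(e^(b*t) - 1))
    return math.exp(-(a / b) * (math.exp(b * t) - 1))

def surv_cr(t):
    # Pure exponential survival: S(t) = exp(-c*t)
    return math.exp(-c * t)

for t in range(0, 41, 10):
    print(t, round(surv_control(t), 3), round(surv_cr(t), 3))
```

With these numbers, the CR curve sits below the control curve early on and then crosses above it once the Gompertz term takes over, so truncating the data at median age would make CR look worse than it is.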
Things worth noting:
In the other study, none of the CR monkeys were even pre-diabetic, whereas diabetes was rampant among the controls. Here, 2 CR monkeys were diabetic. They note that this is interesting (read: odd).
0 of the CR monkeys have been diagnosed with cancer; 6 of the control monkeys have already died of it. (This is only p=0.028! N! N! N!)
Diet composition was significantly different. These monkeys got their protein from wheat, corn, and other sources; the other monkeys got theirs just from lactalbumin. These monkeys had a diet rich in antioxidants; the other monkeys might not have.
The other study had a diet with 29% sucrose; this study had a diet with 4% sucrose. Perhaps this handicapped the controls in the other study.
Both groups got the same diet in the other study, which resulted in over-supplementation of the controls. Here, nutritional supplements were handled separately for each group.
The controls in the other study were fed truly ad libitum (read: obese), whereas the controls in this study were on a restricted (though less severely restricted) diet. The ad libitum feeding in the other study probably handicapped its controls.
It looks like adolescence may be the best time to start CR, but judging that from N=40 is probably unwise. (It makes sense biologically, though: less need for an anti-cancer measure when you’re young, and starting when elderly might be too late.)
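For the cancer split noted above (0 of 40 CR monkeys vs 6 of 46 controls), a naive one-sided Fisher’s exact test on the raw counts is easy to do by hand. The paper’s p=0.028 presumably comes from their own (likely survival-based) test, so treat this as a sanity check rather than a replication:

```python
from math import comb

# 0 cancers among 40 CR monkeys vs 6 among 46 controls.
# Under the null, the probability that all 6 cancers land in the
# control group (one-sided Fisher's exact test):
cr_n, ctrl_n, cancers = 40, 46, 6
p = comb(ctrl_n, cancers) / comb(cr_n + ctrl_n, cancers)
print(round(p, 4))  # → 0.0199
```

Either way, six events total is far too few to say anything strong.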
I haven’t seen much directly comparing CR and IF, and so I’m doing IF as it’s easier and likely roughly as good. I really want to see bigger studies on this, though, and ideally human studies (they’re unlikely to be done by the time I need them to be done, but we might as well find that knowledge out for our descendants!).
One additional thing I forgot to post there: the degree of calorie restriction is meaningful, and while I saw the number for the other study, I didn’t see this study quote its percentage. Mouse results have suggested that 10% CR is better than 30% or 40%, so the percentage matters. (The other study did 30%, IIRC.)

(Reposted from my comment on gwern’s Google+, with one edit.)