I find it somewhat difficult to accept the idea that you would believe this text is an argument against the notion that anti-agapics is a field that is getting mainstream attention and has possible successful routes to that effect… especially since you yourself mentioned the most promising example from it: one that is in current research and has never been applied to people.
Not my point. I’m not arguing that there isn’t mainstream attention. My point is that there has been mainstream attention before now and that that hasn’t gotten very far. So the outside view is something to the effect that every few decades scientists become much more interested in life-extension, it doesn’t go very far, and then they go do other things.
Could you expand on what these premises are? I’d be interested in seeing this chain of logic stated explicitly.
1) That humans are material organisms.
2) That organisms follow the laws of biology.
3) That there is no supernatural force dictating human events.
Etc., etc. That’s really the only way I could see human lifespan extension becoming viable within fifty years, to the point that at least another fifty years’ worth of lifespan extension would be available to persons in my then-state.
If all your premises are essentially timeless then one needs to ask if one would have expected this to happen in the past. If for example in 1950, 1900 or 1850, scientists decided to put in a maximal amount of effort into extending human lifespan, do you think they would be as likely to be successful as you think scientists now would be? Maybe you can make that argument for 1950, but I’d be surprised if you’d make that claim about 1900 or 1850. This means that your statement has to include at least one premise involving the level of current medical and scientific knowledge that didn’t apply then.
Edited to add:
Caloric restriction and variations thereof seem, in most species, to increase the average lifespan but not the maximal lifespan.
Umm… I’m not familiar with that claim, and it contradicts evidence I have seen that indicates exactly the opposite.
Yeah, I seem to be wrong here. According to this survey, for rodents there’s a larger increase in the average than in the maximal, but there’s a large increase in the maximal age as well.
My point is that there has been mainstream attention before now and that that hasn’t gotten very far.
It couldn’t go very far, technology wasn’t ready. It probably still isn’t ready, but until that’s the case, we wouldn’t expect any progress anyway, so its absence says little.
It couldn’t go very far, technology wasn’t ready. It probably still isn’t ready, but until that’s the case, we wouldn’t expect any progress anyway, so its absence says little
Right. I completely agree with that. The issue is then how do we know that things are so much more advanced now that Logos can say that he “cannot conceive of a worldline resultant from the here-and-now that would not include the datapoint that within fifty years from today, antiagapics research had extended human lifespans by at least another fifty years.” I’m not arguing that his projection doesn’t look likely from what we know. But the estimate of it being so likely as for alternatives to be inconceivable seems to indicate an extreme and unjustified level of confidence.
My point is that there has been mainstream attention before now and that that hasn’t gotten very far. So the outside view is something to the effect that every few decades scientists become much more interested in life-extension, it doesn’t go very far, and then they go do other things.
That’s not even remotely valid. The interest in actual, honest anti-agapic research ongoing now (and for the last five or six years which in terms of medical research is “now”) has never before occurred. It is exactly a new thing.
It certainly has never had widespread mainstream acceptance within the medical research community. (And by the way: inducing clinical immortality in lineages of cells has been going on for several decades. It’s nothing new, and it is not related to antiagapics research in the slightest. If it were, you would have been within your rights to bring up the story of “The Immortal Woman” Henrietta Lacks. Keeping lineages of cells alive is easy. Preventing aging in multicellular organisms is not. They also aren’t related fields of endeavor.)
Understanding that keeping cell cultures alive doesn’t have anything to do with preventing aging is a modern understanding that arose after scientists tried to see if they were connected and failed. Scientists contemporary with Carrel, for example, certainly thought they were related. It is only in hindsight that we’ve realized that that wasn’t the case. The discovery of the Hayflick limit was a major step in realizing that. In fact, that discovery killed off further anti-aging research in the 1960s and it took a while for it to recover.
That’s not even remotely valid. The interest in actual, honest anti-agapic research ongoing now (and for the last five or six years which in terms of medical research is “now”) has never before occurred. It is exactly a new thing.
We seem to have an actual factual dispute here. Or it may be an issue of what you mean by “actual” and “honest”. But for example, work in trying to extend telomeres dates from the 1990s and the basic idea about telomeres dates from the 1970s. I don’t know if this work gets classified as “actual, honest” by you since it turned out to a large extent to not work as well as people thought it might. Also, in regards to the issue of premises, are you asserting that if scientists in 1900 or 1950 had put in current levels of attention into anti-aging that they would have succeeded at the level you estimate? I suspect not, which means that there has to be some other detail here that’s relevant concerning our current knowledge level. There’s an unstated premise about how much medical/biochem knowledge we have now.
I’ll note incidentally that the argument you are making now is substantially weaker than the argument you made earlier in the thread where you said:
But up until two or so years ago no one in any mainstream capacity was doing any antiagapic research at all.
which apparently has now become last five or six years. Does this represent an update of your views on the timeline?
But up until two or so years ago no one in any mainstream capacity was doing any antiagapic research at all.
which apparently has now become last five or six years. Does this represent an update of your views on the timeline?
The topic in question is fuzzy/indeterminate. The transition to widespread acceptance from marginalization was not instantaneous. The work currently ongoing began sometime after Resveratrol’s antiagapic effects were discovered.
There were also minor items of research over the years which were conducted—but senescence was, definitively, seen by the mainstream medical community as something for which only palliative care was worth researching, as opposed to senescence itself being a treatable condition.
But for example, work in trying to extend telomeres dates from the 1990s and the basic idea about telomeres dates from the 1970s. I don’t know if this work gets classified as “actual, honest” by you since it turned out to a large extent not to work as well as people thought it might.
No, its failure to that end has little to nothing to do with the topic at hand. It doesn’t apply for the simple reason that the research in question was not done for the purpose of combating senescence. It was never meant as anything other than a tool for theoretical understanding of the role of telomeres in biology—a topic we still understand only very poorly, with many conflicting ideas about the specific functions and purposes therein.
Also, in regards to the issue of premises, are you asserting that if scientists in 1900 or 1950 had put in current levels of attention into anti-aging that they would have succeeded at the level you estimate?
No. I have no such claim to make. They might have discovered something like resveratrol and ways to make it available to the common public—that is, pharmaceutically mimic the caloric restriction effect—and if we extend this counterfactual scenario to the ’80s/’90s, it’s also possible that some other genetic therapies might have been uncovered/disseminated by now. (For example; work is currently ongoing at the SENS group to determine if senescent white blood cells can be screened from the body in a process similar to dialysis, which would allow the ‘more vital’ cells to repopulate within the patient, thereby restoring a more vital immune system, prolonging disease resistance in the elderly. This is also something that conceptually could have been in widespread application as early as the late ’70s.)
There’s an unstated premise about how much medical/biochem knowledge we have now.
No, there really isn’t. It’s not about that. It’s about topical focus. It’s about old age itself being seen as a topic worthy of being disassembled and its mechanisms researched in order to find solutions to that effect. The failure for this to have occurred until now is not representative of any promise of immediate success upon the conceptual breakthrough but rather a guarantee of the absence of success in the absence of the concept.
I’m reminded of a science fiction short story that delineates this same idea, in a way, though obviously with more “fantastic” results: Turtledove’s “The Road Not Taken”.
The topic in question is fuzzy/indeterminate. The transition to widespread acceptance from marginalization was not instantaneous. The work currently ongoing began sometime after Resveratrol’s antiagapic effects were discovered.
I don’t get that impression from my (limited) knowledge of the field. If Stipp’s book is accurate then there have been many attempts in the last hundred years by serious scientists.
Also, in regards to the issue of premises, are you asserting that if scientists in 1900 or 1950 had put in current levels of attention into anti-aging that they would have succeeded at the level you estimate?
No. I have no such claim to make.
Ok. Based on that and your other remarks then you have to be working off of not just your generic, timeless premises:
1) That humans are material organisms. 2) That organisms follow the laws of biology. 3) That there is no supernatural force dictating human events. Etc.,
but also specific estimates for the current technological level and general knowledge level. So the question becomes then how confident are you in those estimates? Predictions of sweeping change are usually wrong. Such changes do sometimes occur from technologies, but most of those are things like ubiquitous cell phones or the internet or GPS, which are not heavily anticipated or are anticipated in only a rough fashion. The outside view suggests that predictions about major technologies should have their confidences reduced.
There’s an unstated premise about how much medical/biochem knowledge we have now.
No, there really isn’t. It’s not about that. It’s about topical focus. It’s about old age itself being seen as a topic worthy of being disassembled and its mechanisms researched in order to find solutions to that effect.
This can’t be all that matters if the same result would not have occurred in 1950 or 1900 (or even more to the point 1850). That means that at least one premise needs to be not just about topical focus but about the overarching technological level. To use a weak analogy, post 1945 or so it would be fair to say that the only issue preventing a major power from getting to the moon was one of topical focus. But that’s because one is taking into account the general tech level. I couldn’t make that claim in 1900 or 1850. And in defending that difference, I have to be able to point to specific technologies that they had in 1945 that they didn’t in 1900.
It seems to me that you are arguing in a way that is useful to convince someone who assigns a very low probability that lifespan extension will occur that they should assign a high probability. That’s not a point I disagree with. The issue is your claim that:
I cannot conceive of a worldline resultant from the here-and-now that would not include the datapoint that within fifty years from today, antiagapics research had extended human lifespans by at least another fifty years.
In context this is a bit stronger than that, because you seem to be claiming not that people born fifty years from now will have a lifespan extension of another fifty years but that this will apply to people who are already old. This is a difficult position since, with the exceptions of prosthetics and tissue-cloning, most of the plausible methods of life extension (such as resveratrol) rely on starting the process when people are young. Even if we have some form of resveratrol-like compound in fifty years that slows aging down to a crawl, someone who is a hundred years old at that point will still have the cells of a centenarian (assuming minimal prior anti-aging treatments).
Moreover, one can easily conceive of circumstances where many of the proposed anti-aging systems just won’t work. For example, humans have a much longer average life-span than most mammals. It could turn out that humans already do something to their cells that mimics most of the effects of resveratrol.
There’s also a general evolutionary issue at work here: if a species has a maximal lifespan of around L, then any mutations that buy more reproductive fitness at a younger age by creating problems sometime around or after L will be favored by selection. So one should expect not just one thing to go wrong as a species ages, but lots of things to go wrong. In fact, this is close to what one sees. If this problem is severe enough, it is possible that there are diseases which will show up in the very elderly that we haven’t even noticed yet because the population of such people is just too small.
To further this discussion, over at PredictionBook I’ve made a series of age-extension-related predictions. I’m curious how much and where your estimates disagree with mine. (I’m not including separate links to each, but the ones on that list are the resveratrol predictions and the sets about oldest living people.)
I don’t get that impression from my (limited) knowledge of the field. If Stipp’s book is accurate then there have been many attempts in the last hundred years by serious scientists.
Then you should re-read that book—it was espousing exactly my own position in this dialogue!
I never said nobody before had ever tried to do anything that would potentially impact human longevity scientifically.
I did say that antiagapics research as a primary goal has never before this generation existed within the mainstream medical community.
Point #2 was the entire purpose of the book you’re trying to cite as a contradiction of my position.
The overwhelming majority of work done previously was either not directly aimed at preventing aging in humans or else was done by ‘fringe’ actors without generalized support from the general consensus as to their topics being worthwhile endeavors before their results came in. (I add this qualifier to make it clear I’m not talking about after-the-fact ‘well, clearly that was foolish. It didn’t work’ stuff.)
This can’t be all that matters if the same result would not have occurred in 1950 or 1900 (or even more to the point 1850). That means that at least one premise needs to be not just about topical focus but about the overarching technological level
This is asinine. Science is a convergent, not a divergent, endeavor. Increased knowledge in one field necessarily alters or amplifies the impact of knowledge in another. I said nothing to contradict this and gave several examples of it being affirmed.
Regarding your PredictionBook listings: I put a very low probability of resveratrol hitting the public market within twenty years, but only because I am familiar with the FDA approvals process and how convoluted it is. I’d estimate more like 20%–40% for the 2025 date, and I do not believe it to be possible at this point for the 2019 date. I don’t find your estimate of the millenarian proposal exceptional.
It could turn out that humans already do something to their cells that mimics most of the effects of resveratrol.
This is a testable hypothesis and it has already been falsified. We share cellular metabolism with calorie unrestricted organisms, and not with CR-organisms. Furthermore, while human lifespans are longer than most mammals (not all but most), they certainly aren’t by any means exceptional for even warm-blooded organisms in general.
Moreover, one can easily conceive of circumstances where many of the proposed anti-aging systems just won’t work
Sure. But that’s irrelevant. With the topic having received, finally, mainstream attention—we’ve gone from the periodic instance of the isolated potential investigation to the spaghetti method: throw everything at the wall and see what sticks.
That being said:
[The claim that within fifty years from today, antiagapics research will have extended human lifespans by at least another fifty years] is in context a bit stronger than this because you seem to be claiming not that people born fifty years from now will have a lifespan extension of another fifty years but that this will apply to people who are already old.
Nowhere did I ever make that claim. The closest you might come to this is the fact that I was applying it to people my own age. I am currently thirty. If resveratrol is approved by the time I am fifty (i.e., 2031), then my taking it at that point (assuming resveratrol extends remaining life expectancy by 50%) will extend what would otherwise be roughly 30 years to 45 years. Should tissue-cloning further provide genetically-identical-but-metabolically-barely-post-adolescent organs, then I should expect that this would provide heightened vitality to extend at least another ten to fifteen years of life. Etc., etc.
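The arithmetic of this scenario can be laid out explicitly. Every number below is the commenter’s own stated assumption (approval by 2031, a 50% extension of remaining life expectancy, ten to fifteen extra years from organ replacement), not an established medical fact:

```python
# Sketch of the commenter's scenario; all inputs are their stated
# assumptions, not established medical facts.
approval_age = 50       # age at assumed resveratrol approval (2031)
remaining_years = 30    # assumed remaining life expectancy at fifty

# Resveratrol assumed to extend remaining life expectancy by 50%.
extended_years = remaining_years * 1.5

# Hypothesized further gain from tissue-cloned organ replacement.
organ_gain_low, organ_gain_high = 10, 15

print(extended_years)                    # 45.0
print(extended_years + organ_gain_low)   # 55.0
print(extended_years + organ_gain_high)  # 60.0
```

This just makes visible how the claimed total depends on each assumption holding independently.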
If this problem is severe enough, it is possible that there are diseases which will show up in the very elderly that we haven’t even noticed yet because the population of such people is just too small.
That’s a common topic in geriatrics research generally. Typically, reproductive fitness tends to drop to zero sometime before the age of sixty in humans. Yet, when health impediments and nutrition are eliminated as problems (1st-world countries), women still tend to live a few years longer than men. Most conjecture on this has it that women are ‘more useful’ than men evolutionarily even at older ages: grandmas can care for the little ’uns, which lets mom make the food. Etc., etc.
A lot of the behavioral trends, patterns, and complications associated with senescence in humans are very well understood: that is, after all, the entire focus of gerontology.
A major theme of the book is that there is a strong modern interest in combating aging.
But that’s not a point we disagree on. The disagreement point seems to be how much historical interest there was.
Your points 2 and 4 above in that regard are not accurate, and Stipp’s book reflects that. In the very first chapter he notes how early 20th-century gland implantation attempts were taken seriously by major, respected scientists; he includes Charles Brown-Sequard as an example.
I agree that there’s more interest in anti-aging than there has been in the past, but the claim that prior attempts at anti-aging were not considered worthwhile endeavors, or were restricted to the fringe, simply isn’t accurate.
This is asinine. Science is a convergent, not a divergent, endeavor. Increased knowledge in one field necessarily alters or amplifies the impact of knowledge in another. I said nothing to contradict this and gave several examples of it being affirmed.
I don’t understand what you mean by this, and all attempts to parse it don’t seem to produce responses that make sense as a response to my remark. Can you rephrase or expand what you mean here?
This is a testable hypothesis and it has already been falsified. We share cellular metabolism with calorie unrestricted organisms, and not with CR-organisms.
The hypothesis has definitely not been falsified. There’s evidence against it, but that isn’t the same thing at all. It isn’t just our cellular metabolism that matters in this context. We don’t fully understand how resveratrol extends lifespan. We know that it has effects similar to caloric restriction, but without a better understanding of how it does that, we don’t know.
This is also but one of a variety of things that could go wrong. Consider for example that we use a resveratrol-like compound for a while and then it turns out that something in the human diet that isn’t in our lab environment interferes with it. These are but two examples.
Furthermore, while human lifespans are longer than most mammals (not all but most), they certainly aren’t by any means exceptional for even warm-blooded organisms in general.
In general, maximal lifespans are roughly correlated with size in mammals. But, compared to most large mammals, great apes have a longer lifespan, even when those species are given good medical care. Of all the great apes, this disparity is strongest in humans. Chimpanzees, one of humans’ closest living relatives, have a maximal lifespan of around sixty years, approximately half that of the human. Chimps are smaller than humans on average, but not by that much. Similarly, bears are normally larger than humans and the other great apes, yet the oldest age for bears is around 45 years. Humans seem to have acquired some adaptations pushing our maximal age up that aren’t present in most other mammals.
With the topic having received, finally, mainstream attention—we’ve gone from the periodic instance of the isolated potential investigation to the spaghetti method: throw everything at the wall and see what sticks.
Sure, that’s more likely to work. But that’s a far cry from every probable world-line from here either involving civilizational collapse or absolute success. Consider the possibility that we spend thirty years trying to throw all the spaghetti on the wall and none of it sticks. It may be that I’m reflecting my own biases since I’m in a field (math) where lots of smart people can spend a lot of time working on a problem and it can still take hundreds of years to solve.
Nowhere did I ever make that claim.
Yes, I see. You didn’t make this claim. I misinterpreted your current age. The claim being made is weaker than I thought, although still a very strong claim.
I don’t fully understand how your last paragraph is relevant to the bit it is responding to.
I find myself confused as to how anyone could legitimately, upon reading what you claim to have read, hold forth the stance you claim to hold. I find it to be internally and externally inconsistent with the axiomatic assumption of rational honesty.
In general maximal lifespans are roughly correlated with size in mammals. But, compared to most large mammals great apes have a larger lifespan even given good medical care to those species.
Average human lifespan (with medical care) is roughly analogous to the average lifespan of the elephant, so that’s not exceptional. But I didn’t limit my statement to mammals but to endotherms.
Consider the possibility that we spend thirty years trying to throw all the spaghetti on the wall and none of it sticks. It may be that I’m reflecting my own biases since I’m in a field (math) where lots of smart people can spend a lot of time working on a problem and it can still take hundreds of years to solve.
Attempts to project from the past into the future are doomed to failure when they do not take into account the present. When attempting to extrapolate and make predictions it is always a modal failure to use an incorrect model.
Medicine is not like math. It is far more like engineering.
I don’t fully understand how your last paragraph is relevant to the bit it is responding to.
It was a selected example of the products of gerontology with regards to the behaviors of longevity in humans as they approach senescence. It was, therefore, a direct rebuttal of your hypothetical.
This is asinine. Science is a convergent, not a divergent, endeavor. Increased knowledge in one field necessarily alters or amplifies the impact of knowledge in another. I said nothing to contradict this and gave several examples of it being affirmed.
I don’t understand what you mean by this, and all attempts to parse it don’t seem to produce responses that make sense as a response to my remark. Can you rephrase or expand what you mean here?
… Increased knowledge in one field necessarily alters or amplifies the impact of knowledge in another. Are you familiar with the concept of scientific convergence?
This is a testable hypothesis and it has already been falsified. We share cellular metabolism with calorie unrestricted organisms, and not with CR-organisms.
The hypothesies has definitely not been falsified. There’s evidence against it.
The hypothesis was “maybe humans already have calorie-restricted metabolisms”. This has been falsified. This is not even remotely questionable. It’s false. We’ve got proof on the matter.
We don’t fully understand how reservatrol extends lifespan. We know that it has effects similar to caloric restriction, but without a better understanding of how it is doing that we don’t know.
You’re changing topics here. Resveratrol isn’t the same topic as the hypothetical. How resveratrol works or doesn’t work has absolutely nothing to do with the question of “do humans already have a calorie-restricted metabolic cycle?”—save that if resveratrol induces that cycle and we have no other proof on the matter, it would answer the question definitively negatively. Thankfully, we don’t need to rely upon resveratrol to answer that question. It’s already been answered.
Consider for example that we use a resveratrol-like compound for a while and then it turns out that something in the human diet that isn’t in our lab environment interferes with it.
This is juvenile. There isn’t a single thing on the planet that everyone consumes besides air and probably water. After that, it’s isolation and elimination from diets. And that’s how science is done. C’mon now—let’s try to restrict our hypotheticals to things that are at least remotely plausible within the current world-line?
The claim being made is weaker than I thought, although still a very strong claim.
It is a strong claim which emerges from a large series of independent small claims, each with high probability individually. There is a saying: “three ifs ruin a plan”. You’d have to have far more than three “ifs” for my claim to turn out to be wrong. Far, far more than that.
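As a quick sanity check on how conjunctions of claims behave: even when each sub-claim is individually very likely, the probability that all of them hold falls off quickly with their number. The probabilities below are illustrative placeholders, not figures from this exchange:

```python
# Probability that n independent claims, each with probability p,
# all turn out true. Numbers are illustrative, not from the thread.
def conjunction(p: float, n: int) -> float:
    """Joint probability of n independent events of probability p."""
    return p ** n

for p in (0.99, 0.95, 0.90):
    for n in (3, 10, 20):
        print(f"p={p:.2f}, n={n:2d}: {conjunction(p, n):.3f}")
```

Even at 95% per claim, twenty conjoined claims come out to roughly a one-in-three overall chance, which is the skeptical side's point about compounding "ifs".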
The Wikipedia quote is unsourced. (My main source of knowledge about Brown-Sequard’s involvement is Stipp’s book and a few other minor sources none of which mention the derisive elixir claim.)
I find myself confused as to how anyone could legitimately, upon reading what you claim to have read, hold forth the stance you claim to hold. I find it to be internally and externally inconsistent with the axiomatic assumption of rational honesty.
Is this intended as a personal attack or is there some other intention? This looks a bit like a series of boo lights apparently directed at me. But I’m already convinced that you and I have sufficiently different communication styles that I may be simply misreading this. What do you mean by “axiomatic assumptions of rational honesty” and how do you think what I have said is “internally” and “externally” inconsistent? Trying to interpret this, I’m getting something like “I don’t think an individual who is trying to be rational and have an intellectually honest dialogue could distort sources the way you are doing.” Is that what you mean to say? I can sort of see how you might get that if you thought I was using the Wikipedia article as a source. I just linked to it because although he was a major scientist around 1900, he’s fairly obscure now. I didn’t read it. Does that possibly explain the confusion here?
Average human lifespan (with medical care) is roughly analogous to the average lifespan of the elephant, so that’s not exceptional. But I didn’t limit my statement to mammals but to endotherms.
Right, and I limited mine to mammals since it is a lot easier to compare within that context; the metabolisms of other endotherms are pretty different. Note by the way that your example of an elephant actually shows my point pretty well: elephants are an order of magnitude larger than humans, yet humans have maximal lifespans slightly exceeding that of an elephant. The oldest elephants have been a bit over 80 years old. That’s in captivity with good veterinary care. But even without modern medical care humans have exceeded 110 years (albeit very rarely). There are two obvious issues with this (humans pay a lot more attention to old humans than old elephants, and we have a much larger supply of well-documented humans than we do of well-documented elephants), so it wouldn’t be unreasonable to suspect that with a large enough sample the elephants’ maximal age would look closer to the human maximal age. But then there’s the whole order-of-magnitude-larger thing.
It does seem like great apes and humans in particular are already doing something else that has extended our maximal lifespan.
Attempts to project from the past into the future are doomed to failure when they do not take into account the present
Yes, which is why I asked for explicit descriptions of what data about the present, other than there being a lot of people working on these issues, you were using. So what judgments are being made about our current knowledge of biochem that go into your conclusion?
Medicine is not like math. It is far more like engineering.
Sure. I’d agree with that. So let’s look at engineering examples where lots of resources have gone into things. Fusion power would be a really good example. There have been billions of dollars put into fusion power research in the last fifty years (see e.g. this for data on but one source of funding from the US government that has given billions of dollars to fusion research). Famously, fusion power is always about twenty-five years away. Sometimes problems are just tough. Sometimes the wall isn’t as sticky as you think it is.
… Increased knowledge in one field necessarily alters or amplifies the impact of knowledge in another. Are you familiar with the concept of scientific convergence?
Yes. But I still don’t see how this in any way results in a conclusion that the statement in question “is asinine”. So I’m still confused.
The hypothesis was “maybe humans already have calorie-restricted metabolisms”. This has been falsified. This is not even remotely questionable. It’s false. We’ve got proof on the matter.
Please don’t tell me what my hypothesis was. The comment you were responding to was me raising the possibility that:
It could turn out that humans already do something to their cells that mimics most of the effects of resveratrol.
Notice that this statement says nothing at all about caloric restriction. The rest of your next paragraph is similarly irrelevant.
This is juvenile. There isn’t a single thing on the planet that everyone consumes besides air and probably water. After that, it’s isolation and elimination from diets. And that’s how science is done. C’mon now—let’s try to restrict our hypotheticals to things that are at least remotely plausible within the current world-line?
I don’t see what’s juvenile about this, and it would be appreciated if you would make arguments that didn’t involve unnecessary personal attacks. Your observation about what is commonly consumed is quite accurate. So, if there’s something common in, say, the Western diet that reduces the effects of resveratrol or some similar compound, we might not even notice until we notice that the anti-aging compound is having much less of an effect than anticipated. And then you’d need to go and test those compounds. That sort of problem falls easily in the “remotely plausible” line of failures or in the “conceivable” world-lines you mentioned earlier.
It is a strong claim which emerges from a large series of independent small claims, each with high probability individually. There is a saying: “three ifs ruin a plan”. You’d have to have far more than three “ifs” for my claim to turn out to be wrong.
That’s fine if your claim is “this is likely to work” or even “this is very likely to work.” But you’ve claimed that there’s no conceivable world-line where this isn’t working in fifty years and there hasn’t been societal collapse or something similar preventing the research.
At this point, I have to wonder if we are running into problems of inferential distance. This is why I tried to ask you to make your earlier chain of logic explicit. Reducing it to premises like the non-supernatural nature of biology and then saying etc. is not helpful for bridging inferential distance gaps.
Is this intended as a personal attack or is there some other intention? This looks a bit like a series of boo lights apparently directed at me.
If I say I am confused, then I mean that I am confused.
What do you mean by “axiomatic assumptions of rational honesty”?
I mean that I take it as an axiomatic principle that my conversants are honest and rational actors until such time as they demonstrate otherwise.
and how do you think what I have said is “internally” and “externally” inconsistent?
Internally inconsistent means that the statements contradict themselves. Externally inconsistent means the statements contradict the known facts outside the statements.
You and I have the same datasets available for this conversation. You claim that you have read Stipp’s book and yet you still claim that there has been strong historical interest in antiagapics research within the mainstream community. Stipp’s book contradicts this claim.
This is an internally inconsistent claim. You then go on to make many externally inconsistent statements, such as claiming that the question of whether humans operate on a calorie-restricted metabolism is still in question, or the claim that gerontology’s studies of human longevity have sufficiently little to do with determining the maximal human lifespan that you are confused by why I would even bring it up.
These are all points that tend to lead towards the conclusion of dishonesty or irrationality on your part. I’m not trying to claim that I have made that conclusion, just that I am confused as to how it is possible that you are not being dishonest or irrational—because I am continuing to operate on the axiomatic assumption that you are in fact honest and rational.
So, if there’s something common in, say, the Western diet that reduces the effects of resveratrol or some similar compound, we might not even notice until we notice that the anti-aging compound is having much less of an effect than anticipated. And then you’d need to go and test those compounds. That sort of problem falls easily in the “remotely plausible” line of failures or in the “conceivable” world-lines you mentioned earlier.
No, it doesn’t. No pharmaceutical will receive widespread adoption until such time as it has been rigorously studied for how it behaves and what its contraindications and/or interactions are. That includes diets. These are all things that are normally controlled for. There is nothing “remotely plausible” about your proposed scenario: the entire pharmaceutical approvals process would have to be abandoned for it to occur.
That’s fine if your claim is “this is likely to work” or even “this is very likely to work.” But you’ve claimed that there’s no conceivable world-line where this isn’t working in fifty years and there hasn’t been societal collapse or something similar preventing the research.
There isn’t any one single this, is my point. Science and medicine are converging significantly. There is now a concerted effort toward solving this particular problem. The scale of extension is, compared to what is conceivably possible, very minor.
My claim is that there are so many things which can be stated to be very likely to work that the idea of all of them failing would require a total overhaul of several fundamental models that I hold to be true based on their history of providing valid conclusions.
I don’t believe that the problem we have here is one of inferential distance. I am very familiar with what that problem looks like. What we have instead is the fact that somehow we are both operating with the same sets of available data yet reaching different conclusions.
Aumann’s Agreement Theorem has something to say about that, and with that I suppose I am now verging into the territory of claiming dishonesty/irrationality on your part. (I admit it could conceivably be on my part as well, but I have as yet no indications of inconsistency in any of my statements aside from the assumption of rational-honesty on your part.)
I mean that I take it as an axiomatic principle that my conversants are honest and rational actors until such time as they demonstrate otherwise.
I don’t know precisely what you mean by rational in this context. Given your invocation of Aumann’s theorem below, I presume you mean something like “close to perfect Bayesians.” This is a really bad idea. Humans can try to be more rational, but they are far from rational. This is not only a bad assumption about people around you, it is a bad assumption about yourself. Even weaker assumptions of rationality for the people around you are probably a bad idea. Most humans you will talk to will have only some passing desire to be rational, and even those who have more desire for rationality will know very little about things like the illusion of transparency, the Dunning-Kruger effect, the conjunction fallacy, underestimating inferential distance, or a host of other cognitive biases that even smart humans commonly fall prey to. Assuming rationality even in a weak sense is a quick way to have an unproductive conversation. Weak assumptions of rationality might make sense on LW, but even here they need to be very weak. I think you are misinterpreting a number of my comments and have other issues with what I’ve brought up, so your apparent contradiction with your axiom doesn’t exist in this context, but your axiom is probably a pretty bad idea anyways. (I can even point to an example where I explicitly engaged in a wretched conjunction fallacy on another thread here.)
You then go on to make many externally inconsistent statements, such as claiming that the question of whether humans operate on a calorie-restricted metabolism is still in question, or the claim that gerontology’s studies of human longevity have sufficiently little to do with determining the maximal human lifespan that you are confused by why I would even bring it up.
I didn’t do that. In the post you are replying to, I literally pointed out explicitly what I said that you interpreted that way, and how it wasn’t that statement.
The hypothesis was “maybe humans already have calorie-restricted metabolisms”. This has been falsified. This is not even remotely questionable. It’s false. We’ve got proof on the matter.
Please don’t tell me what my hypothesis was. The comment you were responding to was me raising the possibility that:
It could turn out that humans already do something to their cells that mimics most of the effects of resveratrol.
Notice that this statement says nothing at all about caloric restriction.
I’m not sure what to do at this point other than just repeat the above line but that seems unproductive. So instead I’ll engage in some cheap praeteritio.
No pharmaceutical will receive widespread adoption until such time as it has been rigorously studied for how it behaves and what its contraindications and/or interactions are. That includes diets. These are all things that are normally controlled for. There is nothing “remotely plausible” about your proposed scenario: the entire pharmaceutical approvals process would have to be abandoned for it to occur.
There have been multiple examples of drugs having serious side effects that were not caught in testing. Moreover, this doesn’t need to occur for this to be a problem. All that is required is the weaker result of the drugs turning out to simply not work on humans nearly as well due to some common thing in our diet. Since testing these sorts of things in humans requires extremely long times to see if there’s an empirical effect (on the timescale at which humans age), one could easily not catch the problem for some time.
What we have instead is the fact that somehow we are both operating with the same sets of available data yet reaching different conclusions.
Are we operating on the same data sets? Certainly they overlap a lot, but it looks like I’m operating with a much stronger attitude about general problems of overconfidence.
Aumann’s Agreement Theorem has something to say about that
I generally don’t like the invocations of Aumann’s theorem that are so common on LW. They aren’t mathematically justified. The theorem applies to perfect Bayesians after they’ve exchanged all their data. The general idea that somewhat rational people who disagree should update in the direction of each other has valid arguments for it, but Aumann is not a very strong one.
But let’s pretend for a minute that it is a good argument or accept (since we both seem to agree on it) that people trying to be rational should probably update their confidence levels towards each other. If as you claim, you are assuming that I’m rational, have you updated in my direction? If not, why not? If so, by how much?
I thought it was that they should reach the same conclusions. Both might increase or decrease their confidence in a proposition, in which case only one at most would be moving in the direction the other began at.
Right, this is another reason that Aumann is not a good argument for what people use it for here. One can actually construct situations where they might radically alter their estimates in either direction depending on their priors and estimates. Most of the time it is used here, people seem to imply that people should update in each other’s direction. There are good reasons for that that aren’t Aumann.
Related to this, I don’t know of anything in the literature that precisely talks about almost-Bayesians in any useful way, but I strongly suspect that if it can be made more precise, one will be able to show that under reasonable assumptions more often than not the right thing will be to update towards each other. This follows from something like the averaging trick for estimating statistics. But I don’t know how to make this more precise.
I wonder if one could take Bayesian learning systems with different sets of training data or something similar and then see how they should update on the full set. It might be interesting to do this in a sort of Monte Carlo situation to see for reasonable distributions empirically how Bayesians should generally move.
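A minimal numerical sketch of the suggestion above (the setup is my own illustrative construction, not anything specified in the comment): two Bayesian learners share a Beta(1, 1) prior on a coin’s heads-probability but train on different samples, and their estimates are compared with the update on the pooled data.

```python
import random

# Two Bayesian learners with a shared Beta(1, 1) prior on a coin's
# heads-probability, each trained on a different sample of flips.
random.seed(1)
true_p = 0.7
sample_a = [random.random() < true_p for _ in range(50)]
sample_b = [random.random() < true_p for _ in range(50)]

def posterior_mean(sample, a=1, b=1):
    """Posterior mean of a Beta(a, b) prior updated on Bernoulli draws."""
    heads = sum(sample)
    return (a + heads) / (a + b + len(sample))

mean_a = posterior_mean(sample_a)
mean_b = posterior_mean(sample_b)
mean_pooled = posterior_mean(sample_a + sample_b)

# The posterior from the pooled data lands very close to the simple average
# of the two individual posterior means -- the "averaging trick" intuition
# for why near-Bayesians who disagree should usually move towards each other.
assert abs(mean_pooled - (mean_a + mean_b) / 2) < 0.02
```

With a conjugate prior the closeness of the pooled estimate to the average holds for any sample, which is why this particular toy case is easy; how far the intuition extends to non-conjugate, imperfect-Bayesian settings is exactly the open question raised above.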
That seems like a very valid point. Humans keep all sorts of reference classes and prior experience in the back of their minds. Much of it probably is so far back that making it conscious or even realizing which prior data is informing the conclusions could be quite difficult. At some level when people speak of intuition about things like math problems they are talking about precisely this sort of thing.
One can actually construct situations where they might radically alter their estimates in either direction depending on their priors and estimates.
In which case it wouldn’t be proper to invoke Aumann’s Agreement; the two parties would be working with non-identical priors.
I invoked it in our conversation because you keep insisting that you’ve seen the same materials and generally agree with me on all points of the possibility of longevity augmentation and the nature of physics, chemistry, and biology.
That you and I are not “perfect Bayesians” is irrelevant to the condition of Aumann’s Agreement being applicable to our situation. The distance we have from such ‘perfection’ should be mappable to the distance we have from agreeing with the ‘ideally rational analysis’ of our strongly overlapping set of priors.
The trouble is, we seem to be diametrically opposed in this conversation. Which means, necessarily, that one of us has strayed rather far from that ideal. Which is a needlessly-complicated way of saying exactly what I’d already said.
In which case it wouldn’t be proper to invoke Aumann’s Agreement; the two parties would be working with non-identical priors.
Huh? No. I mean you can construct prior sets that will result in them moving radically in one direction or another. Exercise: for any epsilon > 0, there is a set of priors and a hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H) < epsilon and, after updating, have P(H) > 1 - epsilon.
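One way to exhibit the exercise in odds form (all numbers below are my own illustrative choices, not from the comment): give both Bayesians the same very small prior odds for H, and give each a moderately strong private likelihood ratio. Each alone stays below epsilon; pooling multiplies the likelihood ratios and jumps above 1 - epsilon.

```python
# Two Bayesians share the same tiny prior on H; each holds private evidence
# with a moderate likelihood ratio.  Alone, each posterior stays below eps;
# pooled, the likelihood ratios multiply and the posterior exceeds 1 - eps.
eps = 0.1

prior_odds = 1e-4     # shared prior: P(H) is about 0.0001
lr_alice = 1000.0     # likelihood ratio of Alice's private evidence for H
lr_bob = 1000.0       # likelihood ratio of Bob's private evidence for H

def odds_to_prob(odds):
    return odds / (1.0 + odds)

p_alice = odds_to_prob(prior_odds * lr_alice)            # about 0.091, below eps
p_bob = odds_to_prob(prior_odds * lr_bob)                # about 0.091, below eps
p_pooled = odds_to_prob(prior_odds * lr_alice * lr_bob)  # about 0.990, above 1 - eps

assert p_alice < eps and p_bob < eps
assert p_pooled > 1 - eps
```

Shrinking `prior_odds` and growing the likelihood ratios accordingly gives the construction for any smaller epsilon, so nothing here depends on the specific numbers.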
I invoked it in our conversation because you keep insisting that you’ve seen the same materials and generally agree with me on all points of the possibility of longevity augmentation and the nature of physics, chemistry, and biology.
Well, we agree on the major things that pretty much everyone on LW would agree on. We seem to agree on slightly more than that. But on other issues, especially meta issues like “how good in general are humans at estimating technological progress”, we do seem to disagree. We also apparently disagree on how likely it is for a large number of scientists and engineers to focus on an area and not succeed in that area. The fusion power example I gave earlier seems relevant in that context.
That you and I are not “perfect Bayesians” is irrelevant to the condition of Aumann’s Agreement being applicable to our situation. The distance we have from such ‘perfection’ should be mappable to the distance we have from agreeing with the ‘ideally rational analysis’ of our strongly overlapping set of priors.
This is not obvious to me. I think it is probably true for near-perfect Bayesians by some reasonable metric, but I’m not aware of a metric to measure how Bayesian non-Bayesians are. Moreover, there are many theorems in many areas where if one weakens the premises a small amount the results of the theorem don’t just fail but fail catastrophically. Without a better understanding of the underlying material I don’t think I can judge whether this is correct. My impression (very much not my area of math) is that very little has been done trying to look at how imperfect Bayesians should/will behave.
The trouble is, we seem to be diametrically opposed in this conversation.
That seems like an extreme phrasing. The disagreement in question is whether substantial life-extension in the next fifty years is merely likely, or whether it is so likely that all conceivable world-lines from here which don’t result in civilization collapsing will result in substantial life extension in the next fifty years. Diametrically opposed, I think, would be something like your view vs. the view that life-extension is definitely not going to happen in the next fifty years barring something like aliens coming down and giving us advanced technology or some sort of Friendly AGI going foom.
See also lessdazed’s remark about the difficulty humans have of exchanging all relevant information.
In which case it wouldn’t be proper to invoke Aumann’s Agreement; the two parties would be working with non-identical priors.
Huh? No. I mean you can construct prior sets that will result in them moving radically in one direction or another. Exercise: for any epsilon > 0, there is a set of priors and a hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H) < epsilon and, after updating, have P(H) > 1 - epsilon.
If and only if their priors do not match one another. That’s the whole point of Aumann’s Agreement Theorem. For which there are proofs.
But on other issues, especially meta issues like “how good in general are humans at estimating technological progress” we do seem to disagree.
Not significantly. It is my belief that humans are especially poor at this.
We also apparently disagree on how likely it is for a large number of scientists and engineers to focus on an area and not succeed in that area. The fusion power example I gave earlier seems relevant in that context.
I honestly do not recall having seen it, but these threads have gotten rather larger than my available attention span and/or recall capacity in general anyhow. That being said, the fusion power problem is actually a very good example of this. The overwhelming majority of the endeavor in that field has gone into Tokamak-style fusion power generation. There is broad acceptance of the idea that once the materials science has reached a specific state, Tokamak reactors will achieve “power-out” state. There is far less agreement in fusion energy research on just about every other approach, and those approaches have received marginal attention.
Contrastingly, there are a wide array of available solutions to practical antiagapics, and many of them have had basic demonstrations of viability of the underlying ‘dependent’ technologies. (Caloric restriction is documented, we have genetic samples of populations of various longevities to analyze for potential pharmaceuticals, we can already clone tissues and have a tool that allows the assembly of tissues into organs, there is a steady and progressive history of ever-more-successful non-biological implants, etc., etc., etc..) This renders antiagapics into a significantly different sort of endeavor than fusion. In fusion research, the “throw everything on the wall and see what sticks” approach simply hasn’t been used. And that is what I have espoused as the source of my confidence in my assertion.
Diametrically opposed I think would be something like your view v. the view that life-extension is definitely not going to happen in the next fifty years barring something like aliens coming down and giving us advanced technology or some sort of Friendly AGI going foom.
We’re in a binary state here. I assert P(X)=~1 is true, you assert this is false. These positions are diametrical / binary opposites.
See also lessdazed’s remark about the difficulty humans have of exchanging all relevant information.
Exchanging relevant information is exceedingly difficult when both parties can relate the same statements of fact and cite the same materials as necessarily resulting in opposing conclusions.
I don’t think this conversation’s going to go anywhere further at this point, by the way, so this is going to be the last comment I make in this particular thread.
No. I mean you can construct prior sets that will result in them moving radically in one direction or another. Exercise: for any epsilon > 0, there is a set of priors and a hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H) < epsilon and, after updating, have P(H) > 1 - epsilon.
If and only if their priors do not match one another. That’s the whole point of Aumann’s Agreement Theorem. For which there are proofs.
I’m not sure what you mean here. For any epsilon > 0 I can start with two Bayesians who share the same priors and have different estimates for different statements (before their probabilities become common knowledge) and a hypothesis H such that before the updating P(H) < epsilon for both and after the updating P(H) > 1 - epsilon for both. I’m not at all sure why you think I need two sets of priors to pull this off. Nothing in this contradicts Aumann’s theorem.
Also, you are wrong about Aumann’s theorem. It isn’t an iff; the implication goes only one way. You can start off with different Bayesians who have different priors and who after updating get the same posteriors. Aumann simply is talking about the case where they have the same priors; it says nothing about what happens if they have different priors. In fact, there are weaker theorems about limiting behavior in certain contexts where the priors disagree: as long as the priors aren’t too pathological, the two Bayesians start to agree as the number of observations increases.
A toy example that may help here:
Assume that there is a coin. Alice and Bob have different priors about this coin. Alice assigns a 25% chance that the coin is fair, a 20% chance that the coin always turns up heads, a 25% chance that the coin always turns up tails, and a 30% chance that the coin turns up heads 2/3rds of the time. Bob assigns a 20% chance that the coin is fair, a 25% chance that the coin always turns up heads, a 25% chance that the coin always turns up tails, and a 30% chance that the coin turns up heads 2/3rds of the time. Now, first consider what happens if on the first flip the coin turns up heads. Bob and Alice will now assign zero probability to the possibility that the coin always turns up tails. They now agree on that possibility. Furthermore, assume they keep flipping the coin and observing the results. Then it isn’t that hard to see that as long as the coin actually is one of the four options, in the limiting situation Alice and Bob will agree. And you can explicitly state with what probability you should expect any given degree of disagreement between them.
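The Alice-and-Bob example can be run directly. In the sketch below, Bob’s “always tails” weight is taken as 25% so that his prior sums to 1 (my adjustment); the true coin is taken to be the 2/3-heads one so the limiting agreement is visible.

```python
import random

# Hypotheses are the coin's heads-probability: fair, always heads,
# always tails, and two-thirds heads.
hypotheses = [0.5, 1.0, 0.0, 2.0 / 3.0]
alice = [0.25, 0.20, 0.25, 0.30]
bob = [0.20, 0.25, 0.25, 0.30]   # "always tails" set to 0.25 so this sums to 1

def update(prior, heads):
    """One Bayesian update on a single observed flip."""
    likelihood = [p if heads else 1.0 - p for p in hypotheses]
    posterior = [pr * lk for pr, lk in zip(prior, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

# First flip comes up heads: both now assign zero to "always tails",
# so they already agree on that hypothesis.
alice, bob = update(alice, True), update(bob, True)
assert alice[2] == 0.0 and bob[2] == 0.0

# Keep flipping a coin that really is 2/3 heads: in the limit both
# concentrate on the true hypothesis and therefore agree.
random.seed(0)
for _ in range(2000):
    heads = random.random() < 2.0 / 3.0
    alice, bob = update(alice, heads), update(bob, heads)
assert alice[3] > 0.99 and bob[3] > 0.99
```

The same loop with the true bias set to 0.5 or 1.0 converges to those hypotheses instead, which is the “as long as the coin actually is one of the four options” caveat in code.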
I honestly do not recall having seen it
This suggests to me that you may not be paying that much attention to what others (especially I) have written in reply to you. It may therefore make sense to go back and reread the thread when you have time.
That being said, the fusion power problem is actually a very good example of this. The overwhelming majority of the endeavor in that field has gone into Tokamak-style fusion power generation
This seems to be a valid point. The method of approach does differ to some extent from the approach to anti-aging research, insofar as most fusion research focuses on a single method. But I’m not convinced that this argument is that strong. There’s also been a fair bit of research into laser-confinement fusion, for example. And before it became apparent that they could not be efficient enough to provide power, Farnsworth-style fusors were also researched heavily. Industry still researches scaling and making Farnsworth fusors more efficient because they make very nice portable neutron generators that one can turn on and off. So while the majority of the research funding has been going to toroidal magnetic confinement, there’s been a very large amount of money put into other types. It is only as a percentage of the total that the amount put in looks small.
We’re in a binary state here. I assert P(X)=~1 is true, you assert this is false. These positions are diametrical / binary opposites.
By this definition any two people who disagree about a probability estimate are diametrically opposed. This doesn’t seem like a good definition if one wants to capture the common intuition behind the term. Certainly, in contrast, I don’t think that if you told someone in the general public that “this person thinks that life extension is likely in the next fifty years and this other person considers it to be a near certainty” they would describe this as diametric opposition.
See also lessdazed’s remark about the difficulty humans have of exchanging all relevant information.
Exchanging relevant information is exceedingly difficult when both parties can relate the same statements of fact and cite the same materials as necessarily resulting in opposing conclusions.
Well, yes. That’s part of the problem. Humans have massive amounts of information that they’ve moved into their background processing. I have pretty decent intuition for certain classes of mathematical problems. But that’s from accumulated experience. I can pretty reliably make conjectures about those classes of problems. But I can’t point explicitly to what is causing me to do so. It is possible that we have differing background sets of data that are impacting our processing at a base level.
I don’t know precisely what you mean by rational in this context. Given your invocation of Aumann’s theorem below, I presume you mean something like “close to perfect Bayesians.”
I have no such specially-tailored definition. The generics are sufficient for my context. I strongly dislike using specially-tailored definitions of widely used terms without first openly declaring that this is being done. It’s an “underhanded” sort of thing to do in conversation, and counterproductive.
This is a really bad idea. Humans can try to be more rational, but they are far from rational.
A. Far from ideally rational. “Rational” is not a binary state; it is a relative one. One can be, as you note, “more” or “less” rational.
B. You are badly failing to properly parse my statement. I make the axiomatic assumption of rationality and honesty in those I interact with until such time as they give me reason to believe otherwise of them.
I think you are misinterpreting a number of my comments and have other issues with what I’ve brought up,
I can see why you’d think I’m misinterpreting you. The trouble is, I don’t believe I am, and in each instance where you’ve raised an objection or I have, I have provided the reasoning and motivation for that instance (insofar as I can recall at this instant). On more than one occasion, contrastingly, I have made plain-language statements which you have failed to parse or have taken to mean very strongly different things than they otherwise did. You routinely introduce new and unnecessary assumptions into my statements, thereby altering their meanings, and you have left out elements of my statements, also thereby altering their meanings. One such example of this is my relating to you of my axiomatic assumption of rationality and honesty, which I clearly related as being conditional on contra-evidence. You proceeded to lecture me on how bad a notion it is to not allow for irrationality in others. This can only mean that you did not, in fact, properly parse my statement… despite the clear and plain language I used to make it.
Please don’t tell me what my hypothesis was. The comment you were responding to was me raising the possibility that:
It could turn out that humans already do something to their cells that mimics most of the effects of resveratrol.
Notice that this statement says nothing at all about caloric restriction.
Not directly, but resveratrol itself is mimicking the effects of caloric restriction. Once again, this is something we had both already agreed upon being the case for this dialogue. So yes, I’m telling you what your hypothesis is. And I am right to do so, because I am using your own statements in a manner that is internally consistent with themselves.
This, by the way, represents another instance of you making an internally inconsistent statement.
Are we operating on the same data sets? Certainly they overlap a lot, but it looks like I’m operating with a much stronger attitude about general problems of overconfidence.
We are. You, however, are introducing the assumption—unnecessary and unwarranted based on the currently available datasets for this dialogue—that overconfidence is yet relevant to the conversation.
We simply haven’t gotten anywhere that justifies the belief or assumption of overconfidence.
If as you claim, you are assuming that I’m rational, have you updated in my direction? If not, why not? If so, by how much?
Unfortunately, I’m being forced to update my beliefs in the direction of the elimination of the assumption. There are too many internally inconsistent statements you continue to make, and you claim points of fact as evidence for positions that the datasets in question directly contradict. (Case in point: your continued use of Stipp’s text as a justification of the belief in mainstream medical acceptance of antiagapics research. Additionally: your notional rejection of the research done by gerontologists on the behaviors of aging as being relevant to the question of how aging occurs.)
What it means for someone to be rational doesn’t really have a good generic definition. There’s some intuition behind it that seems ok. But even that is simply a bad premise to use; it doesn’t apply to most humans. Assuming even an approximate degree of rationality is probably not a justified assumption. You seem to be making the point that you are willing to update to conclude that someone isn’t that rational. But whether you are willing to update or not isn’t terribly relevant. I can assign a low probability to the sun rising each day and every time it does rise update accordingly. This isn’t a good approach.
On more than one occasion, contrastingly, I have made plain-language statements which you have failed to parse or have taken to mean very strongly different things than they otherwise did.
This suggests that we have different ideas of what constitutes plain language or have severely different communication styles. In such contexts it helps to just spell everything out explicitly. It does seem that there’s a similar problem going both ways. Look for example at how you interpreted my comment about not assuming rationality in others. You seem to think that I parsed your statement as you insisting that everyone is rational. In contrast, if you reread what I wrote, you’ll see that I understood you perfectly and was objecting to the assumption as a starting assumption, whether or not you would then update given evidence. The fact that, as discussed in the other piece of this thread, you totally missed my remark about fusion power suggests that there are some serious communication gaps in this conversation.
Not directly, but resveratrol itself is mimicking the effects of caloric restriction. Once again, this is something we had both already agreed upon being the case for this dialogue.
I’m not sure where we agreed to this claim. Resveratrol has features that do look very similar to those of caloric restriction, but I don’t see anywhere that we agreed to the claim that it is mimicking caloric restriction. Could you point me to where we agreed to that? I’ve just looked over the thread and I don’t see anywhere that I made that statement. Since you’ve stated that you are likely not going to be continuing this conversation, can someone else who is reading this say whether they thought I said so, and if so point to where I did? It is possible that I said something that is being interpreted that way. I’ve tried rereading the conversation to find that point and there’s nothing that seems to do that. But this could very well be due to the illusion of transparency in reading my own words.
Your notional rejection of the research done by gerontologists on the behaviors of aging as being relevant to the question of how aging occurs.
So this seems again to be a failure to communicate between us. I didn’t reject that; their data is very important to how aging occurs. Their data as it exists simply doesn’t rule out the possibility I outlined to you for very old ages. That’s not the same thing at all.
We are. You, however, are introducing the assumption—unnecessary and unwarranted based on the currently available datasets for this dialogue—that overconfidence is yet relevant to the conversation.
We simply haven’t gotten anywhere that justifies the belief or assumption of overconfidence.
Ok. This is actually a major issue: the outside view for human predictions about technology is that they are almost always overconfident when they make predictions. Humans in general have terrible overconfidence problems. Even if one had absolutely no specific issue with a given technology one would want to move any prediction in the direction of being less confident. We don’t need to go anywhere to justify a suspicion of overconfidence. For an individual who is marginally aware of human cognitive biases, that should be the default setting.
It was a selected example of the products of gerontology with regards to the behaviors of longevity in humans as they approach senescence. It was, therefore, a direct rebuttal of your hypothetical.
I don’t see how they rebut the hypothetical in question that you were responding to.
I said:
There’s also a general evolutionary issue at work here: If a species has a maximal lifespan of around L, then evolution will favor any way of gaining more reproductive fitness at a younger age, even if it creates problems sometime around or after L. So, one should expect not just one thing to go wrong as a species ages, but lots of things to go wrong. In fact, this is close to what one sees. If this problem is severe enough, it is possible that there are diseases which will show up in the very elderly that we haven’t even noticed yet because the population of such people is just too small.
You then said:
That’s a common topic of study in geriatrics generally. Topically, reproductive fitness tends to drop to zero sometime before the age of sixty in humans. Yet, when health impediments and nutrition are eliminated as problems (1st-world countries), women still tend to live a few years longer than men. Most conjecture on this has it that women are ‘more useful’ than men evolutionarily even at older ages: grandmas can care for the little ’uns, which lets mom make the food. Etc., etc..
I don’t see how this response makes sense. It would be a response if I had been talking about evolution having certain effects post the age of reproduction. But in my comment I was talking about probable maximal age. These are not the same thing. Indeed, you seem to be just trying to argue that they aren’t. That doesn’t address my statement at all.
It would be a response if I had been talking about evolution having certain effects post the age of reproduction. But in my comment I was talking about probable maximal age.
Evolution has effects past the age of reproduction. Such as determining the probable maximal age of the organism. If there is group fitness to be contributed by an individual organism to the whole even without reproduction involved, then that individual organism’s survival is selected for—even though its own reproduction isn’t. (This is ancillary to kin-selection.) Average maximal lifespan of an organism is something that is determined genetically—and thus is a product of evolutionary history and ‘strategy’.
My point, however, wasn’t about evolution in this context—rather, it was about individual fitness metrics for maximal age of the human organism and, more specifically, the efforts of gerontology to uncover that data.
So yes, my statement directly addresses yours. I was specifically pointing out that the thing you were raising as a question is a field that is already well studied.
Your entire first paragraph is trivially true and doesn’t address the point at all. We’re in complete agreement that evolution can impact lifespans well beyond the reproductive age. I think that observation is old enough that it is discussed at one point by Darwin, although I don’t have the citation off the top of my head.
I was specifically pointing out that the thing you were raising as a question is a field that is already well studied.
Ah, I see now. So your point is simply that we know about this issue and that there’s been a lot of study related to it. I don’t see where I would disagree with that claim. I don’t think I said anywhere in my remark anything indicating that the point about evolution pushing problems past the maximal age was at all obscure or original to me. Simply saying that it is a studied issue isn’t sufficient. You need something of the form “this is studied and has been found to be wrong”. But that’s not the case. And it is precisely not the case because there aren’t even now that many really old people. So the essential problem remains: for all we know there could be diseases that show up after around age 100 or older and we haven’t noticed them yet because the sample sizes are small. Moreover, basic evolutionary theory should give us a high prior for such diseases existing, because evolution has no reason not to make trade-offs that benefit at some point before the approximate maximal age in the wild and would cause problems further down the line.
Not my point. I’m not arguing that there isn’t mainstream attention. My point is that there has been mainstream attention before now and that that hasn’t gotten very far. So the outside view is something to the effect that every few decades scientists become much more interested in life-extension, it doesn’t go very far, and then they go do other things.
If all your premises are essentially timeless then one needs to ask if one would have expected this to happen in the past. If, for example, in 1950, 1900, or 1850, scientists had decided to put in a maximal amount of effort into extending human lifespan, do you think they would have been as likely to succeed as you think scientists now would be? Maybe you can make that argument for 1950, but I’d be surprised if you’d make that claim about 1900 or 1850. This means that your statement has to include at least one premise involving the level of current medical and scientific knowledge that didn’t apply then.
Edited to add:
Yeah, I seem to be wrong here. According to this survey, in rodents there’s more of an increase in the average lifespan than in the maximal, but there’s a large increase in the maximal age as well.
It couldn’t go very far, technology wasn’t ready. It probably still isn’t ready, but until that’s the case, we wouldn’t expect any progress anyway, so its absence says little.
Right. I completely agree with that. The issue is then: how do we know that things are so much more advanced now that Logos can say that he “cannot conceive of a worldline resultant from the here-and-now that would not include the datapoint that within fifty years from today, antiagapics research had extended human lifespans by at least another fifty years.” I’m not arguing that his projection doesn’t look likely from what we know. But the estimate of it being so likely as for alternatives to be inconceivable seems to indicate an extreme and unjustified level of confidence.
That’s not even remotely valid. The interest in actual, honest anti-agapic research ongoing now (and for the last five or six years which in terms of medical research is “now”) has never before occurred. It is exactly a new thing.
It certainly has never had widespread mainstream acceptance within the medical research community. (And by the way: inducing clinical immortality in lineages of cells has been going on for several decades. It’s nothing new, and it is not related to antiagapics research in the slightest. If it were, you would have been within your rights to bring up the story of “The Immortal Woman” Henrietta Lacks. Keeping lineages of cells alive is easy. Preventing aging in multicellular organisms is not. They also aren’t related fields of endeavor.)
Understanding that keeping cell cultures alive doesn’t have anything to do with preventing aging is a modern understanding, one that arose after scientists tried to see if they were connected and failed. Carrel’s contemporaries, for example, certainly thought they were related. It is only in hindsight that we’ve realized that that wasn’t the case. The discovery of the Hayflick limit was a major step in realizing that. In fact, that discovery killed off further anti-aging research in the 1960s and it took a while for it to recover.
We seem to have an actual factual dispute here. Or it may be an issue of what you mean by “actual” and “honest”. But for example, work in trying to extend telomeres dates from the 1990s and the basic idea about telomeres dates from the 1970s. I don’t know if this work gets classified as “actual, honest” by you, since it turned out to a large extent to not work as well as people thought it might. Also, in regards to the issue of premises, are you asserting that if scientists in 1900 or 1950 had put in current levels of attention into anti-aging that they would have succeeded at the level you estimate? I suspect not, which means that there has to be some other detail here that’s relevant concerning our current knowledge level. There’s an unstated premise about how much medical/biochem knowledge we have now.
I’ll note incidentally that the argument you are making now is substantially weaker than the argument you made earlier in the thread where you said:
which apparently has now become last five or six years. Does this represent an update of your views on the timeline?
The topic in question is fuzzy/indeterminate. The transition to widespread acceptance from marginalization was not instantaneous. The work currently ongoing began sometime after Resveratrol’s antiagapic effects were discovered.
There were also minor items of research over the years which were conducted—but senescence was, definitively, seen by the mainstream medical community as something for which only palliative care was worth researching, as opposed to senescence itself being a treatable condition.
Even that much is somewhat new historically speaking: there were no geriatricians in the US until 1978, for example.
No, its failure to that end has little to nothing to do with the topic at hand. It doesn’t apply for the simple reason that the research in question was not done for the purpose of combating senescence. It was never meant as anything other than a tool for theoretical understanding of the role of telomeres in biology—a topic we still understand only very poorly, with many conflicting ideas about the specific functions and purposes therein.
No. I have no such claim to make. They might have discovered something like resveratrol and ways to make it available to the common public—that is, pharmaceutically mimic the caloric restriction effect—and if we extend this counterfactual scenario to the ’80s/’90s, it’s also possible that some other genetic therapies might have been uncovered/disseminated by now. (For example; work is currently ongoing at the SENS group to determine if senescent white blood cells can be screened from the body in a process similar to dialysis, which would allow the ‘more vital’ cells to repopulate within the patient, thereby restoring a more vital immune system, prolonging disease resistance in the elderly. This is also something that conceptually could have been in widespread application as early as the late ’70s.)
No, there really isn’t. It’s not about that. It’s about topical focus. It’s about old age itself being seen as a topic worthy of being disassembled and its mechanisms researched in order to find solutions to that effect. That this did not occur until now is no promise of immediate success once the conceptual breakthrough arrives; it is, however, a guarantee of the absence of success in the absence of the concept.
I’m reminded of a science fiction short story that delineates this same idea, in a way, though obviously with more “fantastic” results: Turtledove’s “The Road Not Taken”.
I don’t get that impression from my (limited) knowledge of the field. If Stipp’s book is accurate then there have been many attempts in the last hundred years by serious scientists.
but also specific estimates for the current technological level and general knowledge level. So the question becomes then: how confident are you in those estimates? Predictions of sweeping change are usually wrong. Such changes do sometimes occur from technologies, but most of those are things like ubiquitous cell phones or the internet or GPS which are not heavily anticipated or are anticipated in only a rough fashion. The outside view suggests that predictions about major technologies should have their confidences reduced.
This can’t be all that matters if the same result would not have occurred in 1950 or 1900 (or even more to the point 1850). That means that at least one premise needs to be not just about topical focus but about the overarching technological level. To use a weak analogy, post 1945 or so it would be fair to say that the only issue preventing a major power from getting to the moon was one of topical focus. But that’s because one is taking into account the general tech level. I couldn’t make that claim in 1900 or 1850. And in defending that difference, I have to be able to point to specific technologies that they had in 1945 that they didn’t in 1900.
It seems to me that you are arguing in a way that is useful to convince someone who assigns a very low probability that lifespan extension will occur that they should assign a high probability. That’s not a point I disagree with. The issue is your claim that:
This is in context a bit stronger than that, because you seem to be claiming not that people born fifty years from now will have a lifespan extension of another fifty years but that this will apply to people who are already old. This is a difficult position, since with the exceptions of prosthetics and tissue-cloning, most of the plausible methods of life extension (such as resveratrol) rely on starting the process when people are young. Even if we have some form of resveratrol-like compound in fifty years that slows aging down to a crawl, someone who is a hundred years old at that point will still have the cells of a centenarian (assuming minimal prior anti-aging treatments).
Moreover, one can easily conceive of circumstances where many of the proposed anti-aging systems just won’t work. For example, humans have a much longer average life-span than most mammals. It could turn out that humans already do something to their cells that mimics most of the effects of resveratrol.
There’s also a general evolutionary issue at work here: If a species has a maximal lifespan of around L, then evolution will favor any way of gaining more reproductive fitness at a younger age, even if it creates problems sometime around or after L. So, one should expect not just one thing to go wrong as a species ages, but lots of things to go wrong. In fact, this is close to what one sees. If this problem is severe enough, it is possible that there are diseases which will show up in the very elderly that we haven’t even noticed yet because the population of such people is just too small.
To further this discussion, over at PredictionBook I’ve made a series of age-extension-related predictions. I’m curious how much and where your estimates disagree with mine. (I’m not including separate links to each, but the relevant ones on that list are the resveratrol predictions and the sets about the oldest living people.)
Then you should re-read that book—it was espousing exactly my own position in this dialogue!
I never said nobody before had ever tried to do anything that would potentially impact human longevity scientifically.
I did say that antiagapics research as a primary goal has never before this generation existed within the mainstream medical community.
Point #2 was the entire purpose of the book you’re trying to cite as a contradiction of my position.
The overwhelming majority of work done previously was either not directly intended to prevent aging in humans or else was done by ‘fringe’ actors without generalized support from the general consensus as to their topics being worthwhile endeavors before their results came in. (I add this qualifier to make it clear I’m not talking about after-the-fact ‘well, clearly that was foolish. It didn’t work’ stuff.)
This is asinine. Science is a convergent, not a divergent, endeavor. Increased knowledge in one field necessarily alters or amplifies the impact of knowledge in another. I said nothing to contradict this and gave several examples of it being affirmed.
Regarding your PredictionBook listings: I put a very low probability on resveratrol hitting the public market within twenty years, but only because I am familiar with the FDA approvals process and how convoluted it is. I’d estimate more like the 20s–40s for the 2025 date, and I do not believe it to be possible at this point for the 2019 date. I don’t find your estimate of the millenarian proposal exceptional.
This is a testable hypothesis and it has already been falsified. We share cellular metabolism with calorie unrestricted organisms, and not with CR-organisms. Furthermore, while human lifespans are longer than most mammals (not all but most), they certainly aren’t by any means exceptional for even warm-blooded organisms in general.
Sure. But that’s irrelevant. With the topic having received, finally, mainstream attention—we’ve gone from the periodic instance of the isolated potential investigation to the spaghetti method: throw everything at the wall and see what sticks.
That being said:
Nowhere did I ever make that claim. The closest you might come to this is the fact that I was applying it to people my own age. I am currently thirty. If resveratrol is approved by the time I am fifty (i.e., 2031), then my taking it at that point (assuming resveratrol extends remaining life expectancy by 50%) will extend what would otherwise be roughly 30 years to 45 years. Should tissue-cloning further provide genetically-identical-but-metabolically-barely-post-adolescent organs, then I should expect that this would provide heightened vitality to extend at least another ten to fifteen years of life. Etc., etc..
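In rough numbers, the arithmetic in that scenario can be made explicit (a toy sketch; the 50% multiplier on remaining life expectancy and the 30-year baseline are the assumptions stated above, not established figures):

```python
# Hypothetical back-of-envelope for the scenario above. The 1.5x multiplier
# on *remaining* life expectancy is the stated assumption, not a known effect.

def extended_remaining_years(remaining_years, multiplier=1.5):
    """Remaining life expectancy after a hypothetical extension treatment."""
    return remaining_years * multiplier

baseline_remaining = 30  # roughly 30 more years at age fifty, untreated
print(extended_remaining_years(baseline_remaining))  # 45.0
```

Note that the multiplier applies to remaining years, not total lifespan, which is why such a treatment helps less the later in life it starts.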
That’s a common topic of study in geriatrics generally. Topically, reproductive fitness tends to drop to zero sometime before the age of sixty in humans. Yet, when health impediments and nutrition are eliminated as problems (1st-world countries), women still tend to live a few years longer than men. Most conjecture on this has it that women are ‘more useful’ than men evolutionarily even at older ages: grandmas can care for the little ’uns, which lets mom make the food. Etc., etc..
A lot of the behavioral trends, patterns, and complications associated with senescence in humans are very well understood: that is, after all, the entire focus of gerontology.
A major theme of the book is that there is a strong modern interest in combating aging. But that’s not a point we disagree on. The disagreement point seems to be how much historical interest there was.
Your points 2 and 4 above in that regard are not accurate. And Stipp’s book reflects that. In the very first chapter he notes how early 20th century gland implantation attempts were taken seriously by major, respected scientists. He includes as an example Charles Brown-Séquard. I agree that there’s more interest in anti-aging than there has been in the past, but the claim that prior attempts at anti-aging were not considered worthwhile endeavors or were restricted to the fringe simply isn’t accurate.
I don’t understand what you mean by this, and all attempts to parse it don’t seem to produce responses that make sense as a response to my remark. Can you rephrase or expand what you mean here?
The hypothesis has definitely not been falsified. There’s evidence against it, but that isn’t the same thing at all. It isn’t just our cellular metabolism that matters in this context. We don’t fully understand how resveratrol extends lifespan. We know that it has effects similar to caloric restriction, but without a better understanding of the mechanism we don’t know how far the parallel goes. This is also but one of a variety of things that could go wrong. Consider for example that we use a resveratrol-like compound for a while and then it turns out that something in the human diet that isn’t in our lab environment interferes with it. These are but two examples.
In general maximal lifespans are roughly correlated with size in mammals. But, compared to most large mammals, great apes have longer lifespans, even given good medical care to those species. Of all the great apes, this disparity is strongest with humans. Chimpanzees, one of humans’ closest living relatives, have a maximal lifespan of around sixty years, approximately half that of humans. Chimps are smaller than humans on average, but not by that much. Similarly, bears are normally larger than humans and the other great apes, yet the oldest age for bears is around 45 years. Humans seem to have adopted some aspects that are pushing our maximal age up that aren’t in most other mammals.
Sure, that’s more likely to work. But that’s a far cry from every probable world-line from here either involving civilizational collapse or absolute success. Consider the possibility that we spend thirty years trying to throw all the spaghetti on the wall and none of it sticks. It may be that I’m reflecting my own biases, since I’m in a field (math) where lots of smart people can spend a lot of time working on a problem and it can still take hundreds of years to solve.
Yes, I see. You didn’t make this claim. I misinterpreted your current age. The claim being made is weaker than I thought, although still a very strong claim.
I don’t fully understand how your last paragraph is relevant to the bit it is responding to.
No, they aren’t, and no, it doesn’t. I don’t know how to be any more clear on that one.
“In his extreme old age, he advocated the hypodermic injection of a fluid prepared from the testicles of guinea pigs and dogs, as a means of prolonging human life. It was known, among scientists, derisively, as the Brown-Séquard Elixir.”
I find myself confused as to how anyone could legitimately, upon reading what you claim to have read, hold forth the stance you claim to hold. I find it to be internally and externally inconsistent with the axiomatic assumption of rational honesty.
Average human lifespan (with medical care) is roughly analogous to the average lifespan of the elephant, so that’s not exceptional. But I didn’t limit my statement to mammals but to endothermic organisms generally.
Attempts to project from the past into the future are doomed to failure when they do not take into account the present. When attempting to extrapolate and make predictions, using an incorrect model is a fundamental failure.
Medicine is not like math. It is far more like engineering.
It was a selected example of the products of gerontology with regards to the behaviors of longevity in humans as they approach senescence. It was, therefore, a direct rebuttal of your hypothetical.
… Increased knowledge in one field necessarily alters or amplifies the impact of knowledge in another. Are you familiar with the concept of scientific convergence?
The hypothesis was “maybe humans already have calorie-restricted metabolisms”. This has been falsified. This is not even remotely questionable. It’s false. We’ve got proof on the matter.
You’re changing topics here. Resveratrol isn’t the same topic as the hypothetical. How resveratrol works or doesn’t work has absolutely nothing to do with the question of “do humans already have a calorie-restricted metabolic cycle?”—saving that if resveratrol induces that cycle and we have no other proof on the matter, it would answer the question definitively negatively. Thankfully, we don’t need to rely upon resveratrol to answer that question. It’s already been answered.
This is juvenile. There isn’t a single thing on the planet that everyone consumes besides air and probably water. After that, it’s isolation and elimination from diets. And that’s how science is done. C’mon now—let’s try to restrict our hypotheticals to things that are at least remotely plausible within the current world-line?
It is a strong claim which emerges from a large series of independent small claims, each with high probability individually. There is a saying: “three ifs ruin a plan”. You’d have to have far more than three “ifs” for my claim to turn out to be wrong. Far, far more than that.
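The structure of that claim (many independent routes to success, all of which would have to fail) can be sketched numerically. The per-route probabilities below are made-up placeholders for illustration, not estimates from this thread:

```python
# If several *independent* research routes each have some chance of success,
# the probability that every one of them fails shrinks multiplicatively.
# These per-route success probabilities are illustrative placeholders.

def p_all_fail(success_probs):
    """Probability that every independent route fails."""
    p = 1.0
    for s in success_probs:
        p *= (1.0 - s)
    return p

routes = [0.5, 0.4, 0.3, 0.3, 0.2, 0.2]
print(p_all_fail(routes))  # ~0.094, i.e. roughly 90% odds some route works
```

Note that this assumes the routes fail independently; a common cause of failure (e.g. aging being intrinsically harder to intervene on than current models suggest) correlates the failures and weakens the conclusion, which is essentially the overconfidence worry pressed elsewhere in the thread.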
The Wikipedia quote is unsourced. (My main sources of knowledge about Brown-Séquard’s involvement are Stipp’s book and a few other minor sources, none of which mention the derisive elixir claim.)
Is this intended as a personal attack or is there some other intention? This looks a bit like a series of boo lights apparently directed at me. But I’m already convinced that you and I have sufficiently different communication styles that I may be simply misreading this. What do you mean by “axiomatic assumptions of rational honesty” and how do you think what I have said is “internally” and “externally” inconsistent? Trying to interpret this, I’m getting something like “I don’t think an individual who is trying to be rational and have an intellectually honest dialogue could distort sources the way you are doing.” Is that what you mean to say? I can sort of see how you might get that if you thought I was using the Wikipedia article as a source. I just linked to it because although he was a major scientist around 1900, he’s fairly obscure now. I didn’t read it. Does that possibly explain the confusion here?
Right, and I limited mine to mammals since it is a lot easier to look in that context, since the metabolisms are pretty different. Note by the way that your example of an elephant actually shows my point pretty well: elephants are an order of magnitude larger than humans yet humans have maximal lifespans slightly exceeding that of an elephant. The oldest elephants have been a bit over 80 years old. That’s in captivity with good veterinary care. But even without modern medical care humans have exceeded 110 years (albeit very rarely). There are two obvious issues with this (humans pay a lot more attention to old humans than old elephants, and we have a much larger supply of well-documented humans than we do of well-documented elephants), so it wouldn’t be unreasonable to suspect that with a large enough sample the elephants’ maximal age would look closer to the human maximal age. But then there’s the whole order of magnitude larger thing. It does seem like great apes and humans in particular are already doing something else that has extended our maximal lifespan.
Sure. I’d agree with that. So let’s look at engineering examples where lots of resources have gone into things. Fusion power would be a really good example. There have been billions of dollars put into fusion power research in the last fifty years (see e.g. this for data on but one source of funding from the US government that has given billions of dollars to fusion research). Famously, fusion power is always about twenty-five years away. Sometimes problems are just tough. Sometimes the wall isn’t as sticky as you think it is.
Yes. But I still don’t see how this is in any way results in a conclusion that the statement in question “is asinine”. So I’m still confused.
Please don’t tell me what my hypothesis was. The comment you were responding to was me raising the possibility that:
Notice that this statement says nothing at all about caloric restriction. The rest of your next paragraph is similarly irrelevant.
I don’t see what’s juvenile about this, and it would be slightly appreciated if you would make arguments that didn’t involve unnecessary personal attacks. Your observation about what is commonly consumed is quite accurate. So, if there’s something common in, say, the Western diet that reduces the effects of resveratrol or some similar compound, we might not even notice until we notice that the anti-aging compound is having much less of an effect than anticipated. And then you’d need to go and test those compounds. That sort of problem falls easily in the “remotely plausible” line of failures or in the “conceivable” world-lines you mentioned earlier.
That’s fine if your claim is “this is likely to work” or even “this is very likely to work.” But you’ve claimed that there’s no conceivable world-line where this isn’t working in fifty years and there hasn’t been societal collapse or something similar preventing the research.
At this point, I have to wonder if we are running into problems of inferential distance. This is why I tried to ask you to make your earlier chain of logic explicit. Reducing it to premises like the non-supernatural nature of biology and then saying etc. is not helpful for bridging inferential distance gaps.
If I say I am confused, then I mean that I am confused.
I mean that I take it as an axiomatic principle that my conversants are honest and rational actors until such time as they demonstrate otherwise.
Internally inconsistent means that the statements contradict themselves. Externally inconsistent means the statements contradict the known facts outside the statements.
You and I have the same datasets available for this conversation. You claim that you have read Stipp’s book and yet you still claim that there has been strong historical interest in antiagapics research within the mainstream community. Stipp’s book contradicts this claim.
This is an internally inconsistent claim. You then go on to make many externally inconsistent statements, such as claiming that the question of whether humans operate on a calorie-restricted metabolism is still open, or the claim that gerontology’s studies of human longevity have sufficiently little to do with determining the maximal human lifespan that you are confused by why I would even bring it up.
These are all points that tend to lead towards the conclusion of dishonesty or irrationality on your part. I’m not trying to claim that I have made that conclusion, just that I am confused as to how it is possible that you are not being dishonest or irrational—because I am continuing to operate on the axiomatic assumption that you are in fact honest and rational.
No, it doesn’t. No pharmaceutical will receive widespread adoption until such time as it has been rigorously studied for how it behaves and what its contraindications and/or interactions are. That includes diets. These are all things that are normally controlled for. There is nothing “remotely plausible” about your proposed scenario: the entire pharmaceutical approvals process would have to be abandoned for it to occur.
There isn’t any one single thing, is my point. Science and medicine are converging significantly. There is now a concerted effort toward solving this particular problem. The scale of extension is, compared to what is conceivably possible, very minor.
My claim is that there are so many things which can be stated to be very likely to work that the idea of all of them failing would require a total overhaul of several fundamental models that I hold to be true based on their history of providing valid conclusions.
I don’t believe that the problem we have here is one of inferential distance. I am very familiar with what that problem looks like. What we have instead is the fact that somehow we are both operating with the same sets of available data yet reaching different conclusions.
Aumann’s Agreement Theorem has something to say about that—and with that I suppose I am now verging into the territory of claiming dishonesty/irrationality on your part. (I admit it could conceivably be on my part as well, but I have as yet no indications of inconsistency in any of my statements aside from the assumption of rational-honesty on your part.)
I don’t know precisely what you mean by rational in this context. Given your invocation of Aumann’s theorem below, I presume you mean something like “close to perfect Bayesians.” This is a really bad idea. Humans can try to be more rational, but they are far from rational. This is not only a bad assumption about people around you, it is a bad assumption about yourself. Even weaker assumptions of rationality for people around you are probably a bad idea. Most humans you will talk to will have only some passing desire to be rational, and even those who have more desire for rationality will know very little about things like the illusion of transparency, the Dunning-Kruger effect, the conjunction fallacy, underestimating inferential distance, or a host of other cognitive biases that even smart humans commonly fall prey to. Assuming rationality even in a weak sense is a quick way to have an unproductive conversation. Weak assumptions of rationality might make sense on LW but even here they need to be very weak. I think you are misinterpreting a number of my comments and have other issues with what I’ve brought up, so your apparent contradiction with your axiom doesn’t exist in this context, but your axiom is probably a pretty bad idea anyways. (I can even point to an example where I explicitly engaged in a wretched conjunction fallacy on another thread here.)
I didn’t do that. In the post you are replying to, I pointed out explicitly what I said that you interpreted that way and how it wasn’t that statement.
I’m not sure what to do at this point other than just repeat the above line but that seems unproductive. So instead I’ll engage in some cheap praeteritio.
Are we operating on the same data sets? Certainly they overlap a lot, but it looks like I’m operating with a much stronger attitude about general problems of overconfidence.
I generally don’t like the invocations of Aumann’s theorem that are so common on LW. They aren’t mathematically justified. The theorem applies to perfect Bayesians after they’ve exchanged all their data. The general idea that somewhat rational people who disagree should update in the direction of each other has valid arguments for it, but Aumann is not a very strong one.
But let’s pretend for a minute that it is a good argument or accept (since we both seem to agree on it) that people trying to be rational should probably update their confidence levels towards each other. If as you claim, you are assuming that I’m rational, have you updated in my direction? If not, why not? If so, by how much?
I thought it was that they should reach the same conclusions. Both might increase or decrease their confidence in a proposition, in which case only one at most would be moving in the direction the other began at.
Right, this is another reason that Aumann is not a good argument for what people use it for here. One can actually construct situations where they might radically alter their estimates in either direction depending on their priors and estimates. Most of the time it is used here, people seem to imply that people should update in each other’s direction. There are good reasons for that that aren’t Aumann.
Related to this, I don’t know of anything in the literature that precisely talks about almost-Bayesians in any useful way, but I strongly suspect that if it can be made more precise, one will be able to show that under reasonable assumptions more often than not the right thing will be to update towards each other. This follows from something like the averaging trick for estimating statistics. But I don’t know how to make this more precise.
I wonder if one could take Bayesian learning systems with different sets of training data or something similar and then see how they should update on the full set. It might be interesting to do this in a sort of Monte Carlo situation to see for reasonable distributions empirically how Bayesians should generally move.
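For what it’s worth, a toy version of that experiment is easy to set up. The sketch below is my own construction, not anything from the literature: two Beta-Bernoulli learners see disjoint samples from the same coin, and we check how often the pooled-data posterior mean lands between their separate estimates.

```python
import random

random.seed(0)

def posterior_mean(heads, flips):
    # Beta(1, 1) prior over the coin's bias; posterior mean after the data
    return (1 + heads) / (2 + flips)

between = 0
trials = 10_000
for _ in range(trials):
    p = random.random()                              # true bias, drawn uniformly
    a = sum(random.random() < p for _ in range(30))  # learner A's 30 flips
    b = sum(random.random() < p for _ in range(30))  # learner B's 30 flips
    est_a = posterior_mean(a, 30)
    est_b = posterior_mean(b, 30)
    pooled = posterior_mean(a + b, 60)               # update on the full data set
    if min(est_a, est_b) <= pooled <= max(est_a, est_b):
        between += 1

print(between / trials)
```

The pooled estimate is not always between the two (when both learners happen to see the same head count, pooling pulls the posterior further from the prior rather than between them), which fits the “more often than not” hedge: moving toward each other is usually, but not always, the right direction.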
In my opinion, that humans aren’t Bayesians is much less of a problem than that humans can’t share all their information.
That seems like a very valid point. Humans keep all sorts of reference classes and prior experience in the back of their minds. Much of it probably is so far back that making it conscious or even realizing which prior data is informing the conclusions could be quite difficult. At some level when people speak of intuition about things like math problems they are talking about precisely this sort of thing.
In which case it wouldn’t be proper to invoke Aumann’s Agreement; the two parties would be working with non-identical priors.
I invoked it in our conversation because you keep insisting that you’ve seen the same materials and generally agree with me on all points of the possibility of longevity augmentation and the nature of physics, chemistry, and biology.
That you and I are not “perfect Bayesians” is irrelevant to the condition of Aumann’s Agreement being applicable to our situation. The distance we have from such ‘perfection’ should be mappable to the distance we have from agreeing with the ‘ideally rational analysis’ of our strongly overlapping set of priors.
The trouble is, we seem to be diametrically opposed in this conversation. Which means, necessarily, that one of us has strayed rather far from that ideal. Which is a needlessly-complicated way of saying exactly what I’d already said.
Huh? No. I mean you can construct prior sets that will result in them moving radically in one direction or another. Exercise: For any epsilon > 0, there is a set of priors and a hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H) < eps and, after updating, have P(H) > 1 - eps.
Well, we agree on the major things that pretty much everyone LW would agree on. We seem to agree on slightly more than that. But on other issues, especially meta issues like “how good in general are humans at estimating technological progress” we do seem to disagree. We also apparently disagree on how likely it is for a large number of scientists and engineers to focus on an area and not succeed in that area. The fusion power example I gave earlier seems relevant in that context.
This is not obvious to me. I think it is probably true for near-perfect Bayesians by some reasonable metric, but I’m not aware of a metric to measure how Bayesian non-Bayesians are. Moreover, there are many theorems in many areas where if one weakens the premises a small amount the results of the theorem don’t just fail but fail catastrophically. Without a better understanding of the underlying material I don’t think I can judge whether this is correct. My impression (very much not my area of math) is that very little has been done trying to look at how imperfect Bayesians should/will behave.
That seems like an extreme phrasing. The disagreement in question is whether in the next fifty years substantial life-extension is likely or that it is so likely that all conceivable world-lines from here which don’t result in civilization collapsing will result in substantial life extensions in the next fifty years. Diametrically opposed I think would be something like your view v. the view that life-extension is definitely not going to happen in the next fifty years barring something like aliens coming down and giving us advanced technology or some sort of Friendly AGI going foom.
See also lessdazed’s remark about the difficulty humans have of exchanging all relevant information.
If and only if their priors do not match one another. That’s the whole point of Aumann’s Agreement Theorem. For which there are proofs.
Not significantly. It is my belief that humans are especially poor at this.
I honestly do not recall having seen it, but these threads have gotten rather larger than my available attention span and/or recall capacity in general anyhow. That being said, the fusion power problem is actually a very good example of this. The overwhelming majority of the endeavor in that field has gone into Tokamak-style fusion power generation. There is broad acceptance of the idea that once the materials science has reached a specific state, Tokamak reactors will achieve “power-out” state. There is far less agreement in fusion energy research on just about every other approach, and those approaches have received marginal attention.
Contrastingly, there are a wide array of available solutions to practical antiagapics, and many of them have had basic demonstrations of viability of the underlying ‘dependent’ technologies. (Caloric restriction is documented, we have genetic samples of populations of various longevities to analyze for potential pharmaceuticals, we can already clone tissues and have a tool that allows the assembly of tissues into organs, there is a steady and progressive history of ever-more-successful non-biological implants, etc., etc., etc..) This renders antiagapics a significantly different sort of endeavor than fusion. In fusion research, the “throw everything on the wall and see what sticks” approach simply hasn’t been used. And that is what I have espoused as the source of my confidence in my assertion.
We’re in a binary state here. I assert P(X)=~1 is true, you assert this is false. These positions are diametrical / binary opposites.
Exchanging relevant information is exceedingly difficult when both parties can relate the same statements of fact and cite the same materials as necessarily resulting in opposing conclusions.
I don’t think this conversation’s going to go anywhere further at this point, by the way, so this is going to be the last comment I make in this particular thread.
I’m not sure what you mean here. For any epsilon > 0 I can start with two Bayesians who share the same priors and have different estimates for different statements (before their probabilities become common knowledge) and a hypothesis H such that before the updating P(H) < eps for both, and after the updating P(H) > 1 - eps for both. I’m not at all sure why you think I need two sets of priors to pull this off. Nothing in this contradicts Aumann’s theorem.
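For concreteness, here is one way to carry out that construction (a sketch under my own toy setup: the hypothesis H is “all n private coins landed heads,” the shared prior treats the coins as fair and independent, and each Bayesian privately observes half of them):

```python
from fractions import Fraction

n = 20          # total private coins; the shared prior gives P(H) = 2**-n
half = n // 2   # Bayesian A sees coins 1..10, Bayesian B sees the rest

# Suppose every coin in fact landed heads. After seeing her own half
# all-heads, A's estimate of H is the chance that B's half is also all
# heads, and symmetrically for B.
p_A = Fraction(1, 2 ** (n - half))
p_B = Fraction(1, 2 ** half)

eps = Fraction(1, 1000)
assert p_A < eps and p_B < eps   # both start below epsilon (1/1024 < 1/1000)

# Once the observations are exchanged and common knowledge, every coin is
# known to be heads, so both assign probability 1 to H.
p_after = Fraction(1)
assert p_after > 1 - eps
```

Making n larger drives the pre-exchange estimates below any epsilon you like, and nothing here violates Aumann: the prior is shared, and once the posteriors are common knowledge the two agree.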
Also, you are wrong about Aumann’s theorem. It isn’t an iff, the implication goes only one way. You can start off with different Bayesians who have different priors and who after updating get the same posteriors. Aumann simply is talking about the case where they have the same priors. It says nothing about what happens if they have different priors. In fact, there are weaker theorems about limiting behavior in certain contexts where the priors disagree but as long as they aren’t too pathological you can get as the number of observations increases they start to agree.
A toy example that may help here:
Assume that there is a coin. Alice and Bob have different priors about this coin. Alice assigns a 25% chance that the coin is fair, a 20% chance that the coin always turns up heads, a 25% chance that the coin always turns up tails, and a 30% chance that the coin turns up heads 2/3rds of the time. Bob has a 20% chance that the coin is fair, a 25% chance that the coin always turns up heads, a 25% chance that the coin always turns up tails, and a 30% chance that the coin turns up heads 2/3rds of the time. Now, first consider what happens if on the first flip the coin turns up heads. Bob and Alice will now assign zero probability to the possibility that the coin always turns up tails. They now agree on that possibility. Furthermore, assume they keep flipping the coin and observing the results. Then it isn’t that hard to see that as long as the coin actually is one of the four options, in the limiting situation Alice and Bob will agree. And you can explicitly state with what probability you should expect any given degree of disagreement between them.
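To check the arithmetic, here’s a sketch of that first-flip update in code (with Bob’s always-tails weight taken as 25% so that his priors sum to 100%):

```python
from fractions import Fraction

# Each model is the probability that the coin turns up heads
models = {"fair": Fraction(1, 2), "always_heads": Fraction(1),
          "always_tails": Fraction(0), "two_thirds": Fraction(2, 3)}

alice = {"fair": Fraction(25, 100), "always_heads": Fraction(20, 100),
         "always_tails": Fraction(25, 100), "two_thirds": Fraction(30, 100)}
bob = {"fair": Fraction(20, 100), "always_heads": Fraction(25, 100),
       "always_tails": Fraction(25, 100), "two_thirds": Fraction(30, 100)}

def update(prior, flip):
    # Bayes' rule: posterior is proportional to prior times likelihood
    like = {m: (models[m] if flip == "H" else 1 - models[m]) for m in models}
    unnorm = {m: prior[m] * like[m] for m in prior}
    total = sum(unnorm.values())
    return {m: p / total for m, p in unnorm.items()}

alice_post = update(alice, "H")
bob_post = update(bob, "H")

# After one heads, both assign zero to "always tails"
print(alice_post["always_tails"], bob_post["always_tails"])  # 0 0
```

Repeated flips work the same way; as long as the true coin is one of the four models, both posteriors concentrate on it and the disagreement shrinks.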
This suggests to me that you may not be paying that much attention to what others (especially I) have written in reply to you. It may therefore make sense to go back and reread the thread when you have time.
This seems to be a valid point. To some extent the approach does differ from anti-aging research, in that most fusion research has focused on a single method. But I’m not convinced that this argument is that strong. There’s also been a fair bit of research into laser confinement fusion, for example. And before it became apparent that they could not be efficient enough to provide power, Farnsworth-style fusors were also researched heavily. Industry still researches scaling and making Farnsworth fusors more efficient because they make very nice portable neutron generators that one can turn on and off. So while the majority of the research funding has been going to toroidal magnetic confinement, there’s been a very large amount of money put into other types. It is only as a percentage of the total that the amount put in looks small.
By this definition any two people who disagree about a probability estimate are diametrically opposed. That does not seem like a good definition if one wants to capture the common intuition of the terms. For example, I don’t think that if you told someone in the general public that “this person thinks that life extension is likely in the next fifty years and this other person considers it to be a near certainty,” they would describe this as diametric opposition.
Well, yes. That’s part of the problem. Humans have massive amounts of information that they’ve moved into their background processing. I have pretty decent intuition for certain classes of mathematical problems. But that’s from accumulated experience. I can pretty reliably make conjectures about those classes of problems. But I can’t point explicitly to what is causing me to do so. It is possible that we have differing background sets of data that are impacting our processing at a base level.
I have no such specially-tailored definition. The generics are sufficient for my context. I strongly dislike using specially-tailored definitions of widely used terms without first openly declaring that this is being done. It’s an “underhanded” sort of thing to do in conversation, and counterproductive.
A. Far from ideally rational. “Rational” is not a binary state; it is a relative one. One can be, as you note, “more” or “less” rational.
B. You are badly failing to properly parse my statement. I make the axiomatic assumption of rationality and honesty in those I interact with until such time as they give me reason to believe otherwise of them.
I can see why you’d think I’m misinterpreting you. The trouble is, I don’t believe I am, and in each instance where you’ve raised an objection or I have, I have provided the reasoning and motivation for that instance (insofar as I can recall at this instant). On more than one occasion, contrastingly, I have made plain-language statements which you have failed to parse or have taken to mean very strongly different things than they otherwise did. You routinely introduce new and unnecessary assumptions into my statements, thereby altering their meanings, and you have left out elements of my statements, also thereby altering their meanings. One such example of this is my relating to you of my axiomatic assumption of rationality and honesty, which I clearly related as being conditional on contra-evidence. You proceeded to lecture me on how bad a notion it is to not allow for irrationality in others. This can only mean that you did not, in fact, properly parse my statement… despite the clear and plain language I used to make it.
Not directly, but resveratrol itself is mimicking the effects of caloric restriction. Once again, this is something we had both already agreed upon being the case for this dialogue. So yes, I’m telling you what your hypothesis is. And I am right to do so—because I am using your own statements in an internally consistent manner to themselves.
This, by the way, represents another instance of you making an internally inconsistent statement.
We are. You, however, are introducing the assumption—unnecessary and unwarranted based on the currently available datasets for this dialogue—that overconfidence is yet relevant to the conversation.
We simply haven’t gotten anywhere that justifies the belief or assumption of overconfidence.
Unfortunately, I’m being forced to update my beliefs in the direction of the elimination of the assumption. There are too many internally inconsistent statements you continue to make, and you claim points of fact as evidence for positions that the datasets in question directly contradict. (Case in point: your continued use of Stipp’s text as a justification of the belief in mainstream medical acceptance of antiagapics research. Additionally: your notional rejection of the research done by gerontologists on the behaviors of aging as being relevant to the question of how aging occurs.)
What it means for someone to be rational doesn’t really have a good generic definition. There’s some intuition behind it that seems ok. But even that is simply a bad premise to use. It doesn’t apply to most humans. Assuming even an approximate degree of rationality is probably not a justified assumption. You seem to be making the point that you are willing to update to conclude that someone isn’t that rational. But whether you are willing to update or not isn’t terribly relevant. I can assign a low probability to the sun rising each day and every time it does rise update accordingly. This isn’t a good approach.
This suggests that we have different ideas of what constitutes plain language or have severely different communication styles. In such contexts it helps to just spell everything out explicitly. It does seem that there’s a similar problem going both ways. Look for example at how you interpreted my comment about not assuming rationality in others. You seem to think that I parsed your statement as you insisting that everyone is rational. In contrast, if you reread what I wrote you’ll see that I understood you perfectly and was objecting to the assumption as a starting assumption, whether or not you would then update given evidence. The fact that, as discussed in the other piece of this thread, you totally missed my remark about fusion power suggests that there are some serious communication gaps in this conversation.
I’m not sure where we agreed to this claim. Resveratrol has features that do look very similar to those of caloric restriction, but I don’t see anywhere we agreed to the claim that it is mimicking caloric restriction. Could you point me to where we agreed to that? I’ve just looked over the thread and I don’t see anywhere where I made that statement. Since you’ve stated that you are likely not going to be continuing this conversation, could someone else who is reading this say whether they thought I said so and, if so, point to where? It is possible that I said something that is being interpreted that way. I’ve tried rereading the conversation to find that point and there’s nothing that seems to do that. But this could very well be due to the illusion of transparency in reading my own words.
So this seems again to be a failure to communicate between us. I didn’t reject that; their data is very important to how aging occurs. Their data as it exists simply doesn’t rule out the possibility I outlined to you for very old ages. That’s not the same thing at all.
Ok. This is actually a major issue: the outside view for human predictions about technology is that they are almost always overconfident when they make predictions. Humans in general have terrible overconfidence problems. Even if one had absolutely no specific issue with a given technology one would want to move any prediction in the direction of being less confident. We don’t need to go anywhere to justify a suspicion of overconfidence. For an individual who is marginally aware of human cognitive biases, that should be the default setting.
I don’t see how they rebut the hypothetical in question that you were responding to.
I said:
You then said:
I don’t see how this response makes sense. It would be a response if I had been talking about evolution having certain effects post the age of reproduction. But in my comment I was talking about probable maximal age. These are not the same thing. Indeed, you seem to be just trying to argue that they aren’t. That doesn’t address my statement at all.
Evolution has effects past the age of reproduction. Such as determining the probable maximal age of the organism. If there is group fitness to be contributed by an individual organism to the whole even without reproduction involved, then that individual organism’s survival is selected for—even though it itself reproducing isn’t. (This is an ancillary to kin-selection.) Average maximal lifespan of an organism is something that is determined genetically—and thus is a product of evolutionary history and ‘strategy’.
My point, however, wasn’t about evolution in this context—rather, it was about individual fitness metrics for maximal age of the human organism and, more specifically, the efforts of gerontology to uncover that data.
So yes, my statement directly addresses yours. I was specifically pointing out that the thing you were raising as a question is a field that is already well studied.
Your entire first paragraph is trivially true and doesn’t address the point at all. We’re in complete agreement that evolution can impact lifespans well beyond the reproductive age. I think that’s old enough that it is discussed at one point by Darwin although I don’t have the citation off the top of my head.
Ah, I see now. So your point is simply that we know about this issue and that there’s been a lot of study related to it. I don’t see anywhere that I disagreed with that claim. I don’t think I said anywhere in my remark anything indicating that the point about evolution pushing problems past the maximal age was at all obscure or original to me. Simply saying that it is a studied issue isn’t sufficient. You need something of the form “this is studied and has been found to be wrong.” But that’s not the case. And it is precisely not the case because there aren’t even now that many really old people. So the essential problem remains: for all we know there could be diseases that show up after around age 100 or older and we haven’t noticed them yet because the sample sizes are small. Moreover, basic evolutionary theory should give us a high prior for such diseases existing, because evolution has no reason not to make trade-offs that benefit at some point before the approximate maximal age in the wild and would cause problems further down the line.