The Wikipedia quote is unsourced. (My main source of knowledge about Brown-Sequard’s involvement is Stipp’s book and a few other minor sources none of which mention the derisive elixir claim.)
I find myself confused as to how anyone could legitimately, upon reading what you claim to have read, hold forth the stance you claim to hold. I find it to be internally and externally inconsistent with the axiomatic assumption of rational honesty.
Is this intended as a personal attack or is there some other intention? This looks a bit like a series of boo lights apparently directed at me. But I’m already convinced that you and I have sufficiently different communication styles that I may be simply misreading this. What do you mean by “axiomatic assumptions of rational honesty” and how do you think what I have said is “internally” and “externally” inconsistent? Trying to interpret this, I’m getting something like “I don’t think an individual who is trying to be rational and have an intellectually honest dialogue could distort sources the way you are doing.” Is that what you mean to say? I can sort of see how you might get that if you thought I was using the Wikipedia article as a source. I just linked to it because although he was a major scientist around 1900, he’s fairly obscure now. I didn’t read it. Does that possibly explain the confusion here?
Average human lifespan (with medical care) is roughly analogous to the average lifespan of the elephant, so that’s not exceptional. But I didn’t limit my statement to mammals; I limited it to endotherms.
Right, and I limited mine to mammals since it is a lot easier to make comparisons in that context, given that metabolisms differ so much across endotherms. Note, by the way, that your example of an elephant actually illustrates my point pretty well: elephants are an order of magnitude larger than humans, yet humans have maximal lifespans slightly exceeding those of elephants. The oldest elephants have been a bit over 80 years old. That’s in captivity with good veterinary care. But even without modern medical care humans have exceeded 110 years (albeit very rarely). There are two obvious caveats here (humans pay a lot more attention to old humans than to old elephants, and we have a much larger supply of well-documented humans than of well-documented elephants), so it wouldn’t be unreasonable to suspect that with a large enough sample the elephants’ maximal age would look closer to the human maximal age. But then there’s the whole order-of-magnitude-larger thing.
It does seem like great apes and humans in particular are already doing something else that has extended our maximal lifespan.
Attempts to project from the past into the future are doomed to failure when they do not take into account the present.
Yes, which is why I asked for explicit descriptions of what data about the present other than there being a lot of people working on these issues that you were using. So what judgments are being made about our current knowledge of biochem that go into your conclusion?
Medicine is not like math. It is far more like engineering.
Sure. I’d agree with that. So let’s look at engineering examples where lots of resources have gone into things. Fusion power would be a really good example. Billions of dollars have been put into fusion power research in the last fifty years (see e.g. this for data on just one source of funding, the US government, which has given billions of dollars to fusion research). Famously, fusion power is always about twenty-five years away. Sometimes problems are just tough. Sometimes the wall isn’t as sticky as you think it is.
… Increased knowledge in one field necessarily alters or amplifies the impact of knowledge in another. Are you familiar with the concept of scientific convergence?
Yes. But I still don’t see how this in any way results in a conclusion that the statement in question “is asinine”. So I’m still confused.
The hypothesis was “maybe humans already have calorie-restricted metabolisms”. This has been falsified. This is not even remotely questionable. It’s false. We’ve got proof on the matter.
Please don’t tell me what my hypothesis was. The comment you were responding to was me raising the possibility that:
It could turn out that humans already do something to their cells that mimics most of the effects of resveratrol.
Notice that this statement says nothing at all about caloric restriction. The rest of your next paragraph is similarly irrelevant.
This is juvenile. There isn’t a single thing on the planet that everyone consumes besides air and probably water. After that, it’s isolation and elimination from diets. And that’s how science is done. C’mon now—let’s try to restrict our hypotheticals to things that are at least remotely plausible within the current world-line?
I don’t see what’s juvenile about this, and it would be appreciated if you would make arguments that didn’t involve unnecessary personal attacks. Your observation about what is commonly consumed is quite accurate. So, if there’s something common in, say, the Western diet that reduces the effects of resveratrol or some similar compound, we might not even notice until we notice that the anti-aging compound is having much less of an effect than anticipated. And then you’d need to go and test those compounds. That sort of problem falls easily in the “remotely plausible” line of failures or in the “conceivable” world-lines you mentioned earlier.
It is a strong claim which emerges from a large series of independent small claims, each with high probability individually. There is a saying: “three ifs ruin a plan”. You’d have to have far more than three “ifs” for my claim to turn out to be wrong.
That’s fine if your claim is “this is likely to work” or even “this is very likely to work.” But you’ve claimed that there’s no conceivable world-line where this isn’t working in fifty years and there hasn’t been societal collapse or something similar preventing the research.
At this point, I have to wonder if we are running into problems of inferential distance. This is why I tried to ask you to make your earlier chain of logic explicit. Reducing it to premises like the non-supernatural nature of biology and then saying etc. is not helpful for bridging inferential distance gaps.
Is this intended as a personal attack or is there some other intention? This looks a bit like a series of boo lights apparently directed at me.
If I say I am confused, then I mean that I am confused.
What do you mean by “axiomatic assumptions of rational honesty”
I mean that I take it as an axiomatic principle that my conversants are honest and rational actors until such time as they demonstrate otherwise.
and how do you think what I have said is “internally” and “externally” inconsistent?
Internally inconsistent means that the statements contradict themselves. Externally inconsistent means the statements contradict the known facts outside the statements.
You and I have the same datasets available for this conversation. You claim that you have read Stipp’s book, and yet you still claim that there has been strong historical interest in antiagapics research within the mainstream community. Stipp’s book contradicts this claim.
This is an internally inconsistent claim. You then go on to make many externally inconsistent statements such as claiming that the question of whether humans operate on calorie-restricted metabolism is yet at question, or the claim that gerontology’s studies of human longevity have sufficiently little to do with determining the maximal human lifespan that you are confused by why I would even bring it up.
These are all points that tend to lead towards the conclusion of dishonesty or irrationality on your part. I’m not trying to claim that I have made that conclusion, just that I am confused as to how it is possible that you are not being dishonest or irrational—because I am continuing to operate on the axiomatic assumption that you are in fact honest and rational.
So, if there’s something common in, say, the Western diet that reduces the effects of resveratrol or some similar compound, we might not even notice until we notice that the anti-aging compound is having much less of an effect than anticipated. And then you’d need to go and test those compounds. That sort of problem falls easily in the “remotely plausible” line of failures or in the “conceivable” world-lines you mentioned earlier.
No, it doesn’t. No pharmaceutical will receive widespread adoption until such time as it has been rigorously studied for how it behaves and what its contraindications and/or interactions are. That includes diets. These are all things that are normally controlled for. There is nothing “remotely plausible” about your proposed scenario: the entire pharmaceutical approvals process would have to be abandoned for it to occur.
That’s fine if your claim is “this is likely to work” or even “this is very likely to work.” But you’ve claimed that there’s no conceivable world-line where this isn’t working in fifty years and there hasn’t been societal collapse or something similar preventing the research.
There isn’t any one single “this,” is my point. Science and medicine are converging significantly. There is now a concerted effort toward solving this particular problem. The scale of extension is, compared to what is conceivably possible, very minor.
My claim is that there are so many things which can be stated to be very likely to work that the idea of all of them failing would require a total overhaul of several fundamental models that I hold to be true based on their history of providing valid conclusions.
I don’t believe that the problem we have here is one of inferential distance. I am very familiar with what that problem looks like. What we have instead is the fact that somehow we are both operating with the same sets of available data yet reaching different conclusions.
Aumann’s Agreement Theorem has something to say about that—and with that I suppose I am now verging into the territory of claiming dishonesty/irrationality on your part. (I admit it could conceivably be on my part as well, but I have as yet no indications of inconsistency in any of my statements aside from the assumption of rational honesty on your part.)
I mean that I take it as an axiomatic principle that my conversants are honest and rational actors until such time as they demonstrate otherwise.
I don’t know precisely what you mean by rational in this context. Given your invocation of Aumann’s theorem below, I presume you mean something like “close to perfect Bayesians.” This is a really bad idea. Humans can try to be more rational, but they are far from rational. This is not only a bad assumption about the people around you, it is a bad assumption about yourself. Even weaker assumptions of rationality for people around you are probably a bad idea. Most humans you will talk to will have only some passing desire to be rational, and even those who have more desire for rationality will know very little about things like the illusion of transparency, the Dunning-Kruger effect, the conjunction fallacy, underestimating inferential distance, or a host of other cognitive biases that even smart humans commonly fall prey to. Assuming rationality even in a weak sense is a quick way to have an unproductive conversation. Weak assumptions of rationality might make sense on LW, but even here they need to be very weak. I think you are misinterpreting a number of my comments and have other issues with what I’ve brought up, so your apparent contradiction with your axiom doesn’t exist in this context, but your axiom is probably a pretty bad idea anyways. (I can even point to an example where I explicitly engaged in a wretched conjunction fallacy on another thread here.)
You then go on to make many externally inconsistent statements such as claiming that the question of whether humans operate on calorie-restricted metabolism is yet at question, or the claim that gerontology’s studies of human longevity have sufficiently little to do with determining the maximal human lifespan that you are confused by why I would even bring it up.
I didn’t do that. In the very post you are replying to, I pointed out explicitly what I said that you interpreted that way and how it wasn’t that statement.
The hypothesis was “maybe humans already have calorie-restricted metabolisms”. This has been falsified. This is not even remotely questionable. It’s false. We’ve got proof on the matter.
Please don’t tell me what my hypothesis was. The comment you were responding to was me raising the possibility that:
It could turn out that humans already do something to their cells that mimics most of the effects of resveratrol.
Notice that this statement says nothing at all about caloric restriction.
I’m not sure what to do at this point other than just repeat the above line but that seems unproductive. So instead I’ll engage in some cheap praeteritio.
No pharmaceutical will receive widespread adoption until such time as it has been rigorously studied for how it behaves and what its contraindications and/or interactions are. That includes diets. These are all things that are normally controlled for. There is nothing “remotely plausible” about your proposed scenario: the entire pharmaceutical approvals process would have to be abandoned for it to occur.
There have been multiple examples of drugs having serious side effects that were not caught in testing. Moreover, that doesn’t even need to occur for this to be a problem. All that is required is the weaker result of the drugs simply not working on humans nearly as well due to some common element of our diet. Since testing these sorts of things in humans requires extremely long times to see if there’s an empirical effect (on the order of the timescale at which humans age), one could easily not catch the problem for some time.
What we have instead is the fact that somehow we are both operating with the same sets of available data yet reaching different conclusions.
Are we operating on the same data sets? Certainly they overlap a lot, but it looks like I’m operating with a much stronger attitude about general problems of overconfidence.
Aumann’s Agreement Theorem has something to say about that
I generally don’t like the invocations of Aumann’s theorem that are so common on LW. They aren’t mathematically justified. The theorem applies to perfect Bayesians after they’ve exchanged all their data. The general idea that somewhat rational people who disagree should update in the direction of each other has valid arguments for it, but Aumann is not a very strong one.
But let’s pretend for a minute that it is a good argument or accept (since we both seem to agree on it) that people trying to be rational should probably update their confidence levels towards each other. If as you claim, you are assuming that I’m rational, have you updated in my direction? If not, why not? If so, by how much?
I thought it was that they should reach the same conclusions. Both might increase or decrease their confidence in a proposition, in which case only one at most would be moving in the direction the other began at.
Right, this is another reason that Aumann is not a good argument for what people use it for here. One can actually construct situations where they might radically alter their estimates in either direction depending on their priors and estimates. Most of the time it is used here, people seem to imply that people should update in each other’s direction. There are good reasons for that that aren’t Aumann.
Related to this, I don’t know of anything in the literature that precisely talks about almost-Bayesians in any useful way, but I strongly suspect that if it can be made more precise, one will be able to show that under reasonable assumptions more often than not the right thing will be to update towards each other. This follows from something like the averaging trick for estimating statistics. But I don’t know how to make this more precise.
I wonder if one could take Bayesian learning systems with different sets of training data or something similar and then see how they should update on the full set. It might be interesting to do this in a sort of Monte Carlo situation to see for reasonable distributions empirically how Bayesians should generally move.
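A minimal Monte Carlo sketch of that idea, under assumptions of my own choosing (a beta-binomial coin model, a uniformly drawn true bias, and 50 flips per agent — none of these specifics come from the discussion above): two Bayesians with the same prior each see their own training sample, and we check empirically how often moving to the average of their posterior means gets an agent closer to the full-data posterior.

```python
import random

def posterior_mean(heads, flips, a=1.0, b=1.0):
    """Posterior mean of a coin's bias under a Beta(a, b) prior."""
    return (a + heads) / (a + b + flips)

random.seed(0)
trials, n = 10_000, 50
helped = 0  # (agent, trial) pairs where averaging beat the solo estimate
for _ in range(trials):
    p = random.random()  # true bias, drawn uniformly
    h1 = sum(random.random() < p for _ in range(n))  # agent 1's data
    h2 = sum(random.random() < p for _ in range(n))  # agent 2's data
    m1 = posterior_mean(h1, n)
    m2 = posterior_mean(h2, n)
    pooled = posterior_mean(h1 + h2, 2 * n)  # posterior on the full data
    avg = (m1 + m2) / 2                      # "update toward each other"
    helped += abs(avg - pooled) <= abs(m1 - pooled)
    helped += abs(avg - pooled) <= abs(m2 - pooled)
frac = helped / (2 * trials)
print(frac)  # well above 0.5 in this setup
```

In this toy model the averaging move helps far more often than it hurts, which is at least consistent with the conjecture above — though a simulation like this is obviously not the precise statement being asked for.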
That seems like a very valid point. Humans keep all sorts of reference classes and prior experience in the back of their minds. Much of it probably is so far back that making it conscious or even realizing which prior data is informing the conclusions could be quite difficult. At some level when people speak of intuition about things like math problems they are talking about precisely this sort of thing.
One can actually construct situations where they might radically alter their estimates in either direction depending on their priors and estimates.
In which case it wouldn’t be proper to invoke Aumann’s Agreement; the two parties would be working with non-identical priors.
I invoked it in our conversation because you keep insisting that you’ve seen the same materials and generally agree with me on all points of the possibility of longevity augmentation and the nature of physics, chemistry, and biology.
That you and I are not “perfect Bayesians” is irrelevant to the condition of Aumann’s Agreement being applicable to our situation. The distance we have from such ‘perfection’ should be mappable to the distance we have from agreeing with the ‘ideally rational analysis’ of our strongly overlapping set of priors.
The trouble is, we seem to be diametrically opposed in this conversation. Which means, necessarily, that one of us has strayed rather far from that ideal. Which is a needlessly-complicated way of saying exactly what I’d already said.
In which case it wouldn’t be proper to invoke Aumann’s Agreement; the two parties would be working with non-identical priors.
Huh? No. I mean you can construct prior sets that will result in them moving radically in one direction or another. Exercise: For any epsilon>0, there is a set of priors and hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H)< eps, and after updating have P(H)>1-eps.
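One concrete construction along the lines of that exercise (a sketch with toy numbers of my own: two independent bits, each agent privately observing one of them, and H taken to be the conjunction of the bits):

```python
from itertools import product

eps, p = 0.05, 0.01

# Shared prior over two independent bits X1, X2 with P(Xi = 1) = p.
# H is the hypothesis "X1 = 1 and X2 = 1".
prior = {(x1, x2): (p if x1 else 1 - p) * (p if x2 else 1 - p)
         for x1, x2 in product([0, 1], repeat=2)}

def cond_H(worlds):
    """P(H | the true world lies in `worlds`), under the shared prior."""
    z = sum(prior[w] for w in worlds)
    return sum(prior[w] for w in worlds if w == (1, 1)) / z

# Agent 1 privately observes X1 = 1; agent 2 privately observes X2 = 1.
p1 = cond_H([(1, 0), (1, 1)])   # = p = 0.01, below eps
p2 = cond_H([(0, 1), (1, 1)])   # = p = 0.01, below eps

# Announcing P(H) = p (rather than 0) reveals one's own bit is 1, so once
# the estimates are common knowledge both agents condition on world (1, 1).
p1_final = cond_H([(1, 1)])     # = 1.0, above 1 - eps
p2_final = cond_H([(1, 1)])     # = 1.0, above 1 - eps
print(p1, p2, p1_final, p2_final)
```

Both agents start below eps and end above 1 − eps, and nothing here contradicts Aumann: they share a prior and do end up agreeing.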
I invoked it in our conversation because you keep insisting that you’ve seen the same materials and generally agree with me on all points of the possibility of longevity augmentation and the nature of physics, chemistry, and biology.
Well, we agree on the major things that pretty much everyone LW would agree on. We seem to agree on slightly more than that. But on other issues, especially meta issues like “how good in general are humans at estimating technological progress” we do seem to disagree. We also apparently disagree on how likely it is for a large number of scientists and engineers to focus on an area and not succeed in that area. The fusion power example I gave earlier seems relevant in that context.
That you and I are not “perfect Bayesians” is irrelevant to the condition of Aumann’s Agreement being applicable to our situation. The distance we have from such ‘perfection’ should be mappable to the distance we have from agreeing with the ‘ideally rational analysis’ of our strongly overlapping set of priors.
This is not obvious to me. I think it is probably true for near-perfect Bayesians by some reasonable metric, but I’m not aware of a metric to measure how Bayesian non-Bayesians are. Moreover, there are many theorems in many areas where if one weakens the premises a small amount the results of the theorem don’t just fail but fail catastrophically. Without a better understanding of the underlying material I don’t think I can judge whether this is correct. My impression (very much not my area of math) is that very little has been done trying to look at how imperfect Bayesians should/will behave.
The trouble is, we seem to be diametrically opposed in this conversation.
That seems like an extreme phrasing. The disagreement in question is whether in the next fifty years substantial life-extension is likely or that it is so likely that all conceivable world-lines from here which don’t result in civilization collapsing will result in substantial life extensions in the next fifty years. Diametrically opposed I think would be something like your view v. the view that life-extension is definitely not going to happen in the next fifty years barring something like aliens coming down and giving us advanced technology or some sort of Friendly AGI going foom.
See also lessdazed’s remark about the difficulty humans have of exchanging all relevant information.
In which case it wouldn’t be proper to invoke Aumann’s Agreement; the two parties would be working with non-identical priors.
Huh? No. I mean you can construct prior sets that will result in them moving radically in one direction or another. Exercise: For any epsilon>0, there is a set of priors and hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H)< eps, and after updating have P(H)>1-eps.
If and only if their priors do not match one another. That’s the whole point of Aumann’s Agreement Theorem. For which there are proofs.
But on other issues, especially meta issues like “how good in general are humans at estimating technological progress” we do seem to disagree.
Not significantly. It is my belief that humans are especially poor at this.
We also apparently disagree on how likely it is for a large number of scientists and engineers to focus on an area and not succeed in that area. The fusion power example I gave earlier seems relevant in that context.
I honestly do not recall having seen it, but these threads have gotten rather larger than my available attention span and/or recall capacity in general anyhow. That being said, the fusion power problem is actually a very good example of this. The overwhelming majority of the endeavor in that field has gone into Tokamak-style fusion power generation. There is broad acceptance of the idea that once the materials science has reached a specific state, Tokamak reactors will achieve “power-out” state. There is far less agreement in fusion energy research on just about every other approach, and those approaches have received marginal attention.
Contrastingly, there are a wide array of available solutions to practical antiagapics, and many of them have had basic demonstrations of viability of the underlying ‘dependent’ technologies. (Caloric restriction is documented, we have genetic samples of populations of various longevities to analyze for potential pharmaceuticals, we can already clone tissues and have a tool that allows the assembly of tissues into organs, there is a steady and progressive history of ever-more-successful non-biological implants, etc.) This renders antiagapics a significantly different sort of endeavor than fusion. In fusion research, the “throw everything at the wall and see what sticks” approach simply hasn’t been used. And that is what I have espoused as the source of my confidence in my assertion.
Diametrically opposed I think would be something like your view v. the view that life-extension is definitely not going to happen in the next fifty years barring something like aliens coming down and giving us advanced technology or some sort of Friendly AGI going foom.
We’re in a binary state here. I assert P(X)=~1 is true, you assert this is false. These positions are diametrical / binary opposites.
See also lessdazed’s remark about the difficulty humans have of exchanging all relevant information.
Exchanging relevant information is exceedingly difficult when both parties can relate the same statements of fact and cite the same materials as necessarily resulting in opposing conclusions.
I don’t think this conversation’s going to go anywhere further at this point, by the way, so this is going to be the last comment I make in this particular thread.
No. I mean you can construct prior sets that will result in them moving radically in one direction or another. Exercise: For any epsilon>0, there is a set of priors and hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H)< eps, and after updating have P(H)>1-eps.
If and only if their priors do not match one another. That’s the whole point of Aumann’s Agreement Theorem. For which there are proofs.
I’m not sure what you mean here. For any epsilon > 0 I can start with two Bayesians who share the same priors and have different estimates for different statements (before their probabilities become common knowledge), and a hypothesis H such that before the updating P(H) < eps for both and after the updating P(H) > 1 − eps for both. I’m not at all sure why you think I need two sets of priors to pull this off. Nothing in this contradicts Aumann’s theorem.
Also, you are wrong about Aumann’s theorem. It isn’t an iff; the implication goes only one way. You can start off with different Bayesians who have different priors and who after updating get the same posteriors. Aumann is simply talking about the case where they have the same priors; it says nothing about what happens if they have different priors. In fact, there are weaker theorems about limiting behavior in contexts where the priors disagree: as long as the priors aren’t too pathological, the Bayesians start to agree as the number of observations increases.
A toy example that may help here:
Assume that there is a coin. Alice and Bob have different priors about this coin. Alice assigns a 25% chance that the coin is fair, a 20% chance that it always turns up heads, a 25% chance that it always turns up tails, and a 30% chance that it turns up heads 2/3rds of the time. Bob assigns a 25% chance that the coin is fair, a 25% chance that it always turns up heads, a 20% chance that it always turns up tails, and a 30% chance that it turns up heads 2/3rds of the time. Now, first consider what happens if on the first flip the coin turns up heads. Bob and Alice will now assign zero probability to the possibility that the coin always turns up tails. They now agree on that possibility. Furthermore, assume they keep flipping the coin and observing the results. Then it isn’t hard to see that, as long as the coin actually is one of the four options, in the limit Alice and Bob will agree. And you can explicitly state with what probability you should expect any given degree of disagreement between them.
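A quick sketch of the arithmetic in that toy example (with one adjustment of mine: Bob’s prior on the fair coin is set to 25% so that his probabilities sum to 1):

```python
import random

hyps = [0.5, 1.0, 0.0, 2/3]        # P(heads): fair, always-H, always-T, 2/3-H
alice = [0.25, 0.20, 0.25, 0.30]   # Alice's prior over the four hypotheses
bob   = [0.25, 0.25, 0.20, 0.30]   # Bob's prior (adjusted to sum to 1)

def update(prior, heads):
    """One Bayesian update of a distribution over hyps on a single flip."""
    like = [h if heads else 1 - h for h in hyps]
    post = [pr * l for pr, l in zip(prior, like)]
    z = sum(post)
    return [pr / z for pr in post]

# First flip comes up heads: both now give "always tails" zero probability.
alice_h = update(alice, True)
bob_h = update(bob, True)
print(alice_h[2], bob_h[2])  # 0.0 0.0

# Keep flipping a coin that really is the 2/3-heads coin: both posteriors
# concentrate on the true hypothesis, so Alice and Bob come to agree.
random.seed(1)
a, b = alice_h, bob_h
for _ in range(500):
    flip = random.random() < 2/3
    a, b = update(a, flip), update(b, flip)
print(a[3], b[3])  # both close to 1
```

After one heads the “always tails” hypothesis is dead for both of them, and after a few hundred flips of the (actual) 2/3-heads coin their posteriors are nearly identical despite the different priors.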
I honestly do not recall having seen it
This suggests to me that you may not be paying that much attention to what others (especially I) have written in reply to you. It may therefore make sense to go back and reread the thread when you have time.
That being said, the fusion power problem is actually a very good example of this. The overwhelming majority of the endeavor in that field has gone into Tokamak-style fusion power generation
This seems to be a valid point. The fusion approach does, to some extent, differ from the approach to anti-aging research, in so far as most of the fusion research has focused on a single method. But I’m not convinced that this argument is that strong. There’s also been a fair bit of research into laser confinement fusion, for example. And before it became apparent that they could not be made efficient enough to provide power, Farnsworth-style fusors were also researched heavily. Industry still researches scaling and making Farnsworth fusors more efficient because they make very nice portable neutron generators that one can turn on and off. So while the majority of the research funding has been going to toroidal magnetic confinement, a very large amount of money has been put into other approaches. It is only as a percentage of the total that the amount looks small.
We’re in a binary state here. I assert P(X)=~1 is true, you assert this is false. These positions are diametrical / binary opposites.
By this definition any two people who disagree about a probability estimate are diametrically opposed. This seems like a poor definition if one wants to capture the common intuition of the term. Certainly, by contrast, I don’t think that if you told someone in the general public “this person thinks that life extension is likely in the next fifty years and this other person considers it to be a near certainty,” they would describe this as diametric opposition.
See also lessdazed’s remark about the difficulty humans have of exchanging all relevant information.
Exchanging relevant information is exceedingly difficult when both parties can relate the same statements of fact and cite the same materials as necessarily resulting in opposing conclusions.
Well, yes. That’s part of the problem. Humans have massive amounts of information that they’ve moved into their background processing. I have pretty decent intuition for certain classes of mathematical problems. But that’s from accumulated experience. I can pretty reliably make conjectures about those classes of problems. But I can’t point explicitly to what is causing me to do so. It is possible that we have differing background sets of data that are impacting our processing at a base level.
I don’t know precisely what you mean by rational in this context. Given your invocation of Aumann’s theorem below, I presume you mean something like “close to perfect Bayesians.”
I have no such specially-tailored definition. The generics are sufficient for my context. I strongly dislike using specially-tailored definitions of widely used terms without first openly declaring that this is being done. It’s an “underhanded” sort of thing to do in conversation, and counterproductive.
This is a really bad idea. Humans can try to be more rational, but they are far from rational.
A. Far from ideally rational. “Rational” is not a binary state; it is a relative one. One can be, as you note, “more” or “less” rational.
B. You are badly failing to properly parse my statement. I make the axiomatic assumption of rationality and honesty in those I interact with until such time as they give me reason to believe otherwise of them.
I think you are misinterpreting a number of my comments and have other issues with what I’ve brought up,
I can see why you’d think I’m misinterpreting you. The trouble is, I don’t believe I am, and in each instance where you’ve raised an objection or I have, I have provided the reasoning and motivation for that instance (insofar as I can recall at this instant). On more than one occasion, contrastingly, I have made plain-language statements which you have failed to parse or have taken to mean very strongly different things than they otherwise did. You routinely introduce new and unnecessary assumptions into my statements, thereby altering their meanings, and you have left out elements of my statements, also thereby altering their meanings. One such example of this is my relating to you of my axiomatic assumption of rationality and honesty, which I clearly related as being conditional to contra-evidence. You proceeded to lecture me on how bad a notion it is to not allow for irrationality in others. This can only mean that you did not, in fact, properly parse my statement… despite the clear and plain language I used to make it.
Please don’t tell me what my hypothesis was. The comment you were responding to was me raising the possibility that:
It could turn out that humans already do something to their cells that mimics most of the effects of resveratrol.
Notice that this statement says nothing at all about caloric restriction.
Not directly, but resveratrol itself is mimicking the effects of caloric restriction. Once again, this is something we had both already agreed upon being the case for this dialogue. So yes, I’m telling you what your hypothesis is. And I am right to do so—because I am using your own statements in a manner internally consistent with themselves.
This, by the way, represents another instance of you making an internally inconsistent statement.
Are we operating on the same data sets? Certainly they overlap a lot, but it looks like I’m operating with a much stronger attitude about general problems of overconfidence.
We are. You, however, are introducing the assumption, unnecessary and unwarranted based on the currently available datasets for this dialogue, that overconfidence is yet relevant to the conversation.
We simply haven’t gotten anywhere that justifies the belief or assumption of overconfidence.
If as you claim, you are assuming that I’m rational, have you updated in my direction? If not, why not? If so, by how much?
Unfortunately, I’m being forced to update my beliefs in the direction of the elimination of the assumption. There are too many internally inconsistent statements you continue to make, and you claim points of fact as evidence for positions that the datasets in question directly contradict. (Case in point: your continued use of Stipp’s text as a justification of the belief in mainstream medical acceptance of antiagapics research. Additionally: your notional rejection of the research done by gerontologists on the behaviors of aging as being relevant to the question of how aging occurs.)
What it means for someone to be rational doesn’t really have a good generic. There’s some intuition behind it that seems ok, but even that is simply a bad premise to use: it doesn’t apply to most humans. Assuming even an approximate degree of rationality is probably not a justified assumption. You seem to be making the point that you are willing to update to conclude that someone isn’t that rational. But whether you are willing to update or not isn’t terribly relevant. I could assign a low probability to the sun rising each day and update accordingly every time it does rise. That still wouldn’t be a good approach.
On more than one occasion, contrastingly, I have made plain-language statements which you have failed to parse or have taken to mean very different things than they otherwise did.
This suggests that we have different ideas of what constitutes plain language, or have severely different communication styles. In such contexts it helps to just spell everything out explicitly. There does seem to be a similar problem going both ways. Look for example at how you interpreted my comment about not assuming rationality in others. You seem to think that I parsed your statement as insisting that everyone is rational. In contrast, if you reread what I wrote you’ll see that I understood you perfectly and was objecting to the assumption as a starting assumption, whether or not you would then update given evidence. The fact that, as discussed in the other piece of this thread, you totally missed my remark about fusion power suggests that there are some serious communication gaps in this conversation.
Not directly, but resveratrol itself is mimicking the effects of caloric restriction. Once again, this is something we had both already agreed upon being the case for this dialogue.
I’m not sure where we agreed to this claim. Resveratrol has features that do look very similar to those of caloric restriction, but I don’t see anywhere we agreed to the claim that it is mimicking caloric restriction. Could you point me to where we agreed to that? I’ve just looked over the thread and I don’t see anywhere where I made that statement. Since you’ve stated that you are likely not going to be continuing this conversation, can someone else who is reading this comment say whether they thought I said so, and if so point to where? It is possible that I said something that is being interpreted that way. I’ve tried rereading the conversation to find that point and there’s nothing that seems to do that. But this could very well be due to the illusion of transparency in reading my own words.
Your notional rejection of the research done by gerontologists on the behaviors of aging as being relevant to the question of how aging occurs.
So this seems again to be a failure to communicate between us. I didn’t reject that; their data is very important to how aging occurs. Their data as it exists simply doesn’t rule out the possibility I outlined to you for very old ages. That’s not the same thing at all.
We are. You, however, are introducing the assumption, unnecessary and unwarranted based on the currently available datasets for this dialogue, that overconfidence is yet relevant to the conversation.
We simply haven’t gotten anywhere that justifies the belief or assumption of overconfidence.
Ok. This is actually a major issue: the outside view for human predictions about technology is that they are almost always overconfident when they make predictions. Humans in general have terrible overconfidence problems. Even if one had absolutely no specific issue with a given technology one would want to move any prediction in the direction of being less confident. We don’t need to go anywhere to justify a suspicion of overconfidence. For an individual who is marginally aware of human cognitive biases, that should be the default setting.
The Wikipedia quote is unsourced. (My main source of knowledge about Brown-Sequard’s involvement is Stipp’s book and a few other minor sources none of which mention the derisive elixir claim.)
Is this intended as a personal attack or is there some other intention? This looks a bit like a series of boo lights apparently directed at me. But I’m already convinced that you and I have sufficiently different communication styles that I may be simply misreading this. What do you mean by “axiomatic assumptions of rational honesty” and how do you think what I have said is “internally” and “externally” inconsistent? Trying to interpret this, I’m getting something like “I don’t think an individual who is trying to be rational and have an intellectually honest dialogue could distort sources the way you are doing.” Is that what you mean to say? I can sort of see how you might get that if you thought I was using the Wikipedia article as a source. I just linked to it because although he was a major scientist around 1900, he’s fairly obscure now. I didn’t read it. Does that possibly explain the confusion here?
Right, and I limited mine to mammals since it is a lot easier to look in that context, since the metabolisms are pretty different. Note by the way that your example of an elephant actually shows my point pretty well: elephants are an order of magnitude larger than humans yet humans have maximal lifespans slightly exceeding that of an elephant. The oldest elephants have been a bit over 80 years old. That’s in captivity with good veterinary care. But even without modern medical care humans have exceeded 110 years (albeit very rarely). There are two obvious issues with this (humans pay a lot more attention to old humans than old elephants, and we have a much larger supply of well-documented humans than we do of well-documented elephants), so it wouldn’t be unreasonable to suspect that with a large enough sample the elephants’ maximal age would look closer to the human maximal age. But then there’s the whole order of magnitude larger thing. It does seem like great apes and humans in particular are already doing something else that has extended our maximal lifespan.
Sure. I’d agree with that. So let’s look at engineering examples where lots of resources have gone into things. Fusion power would be a really good example. There have been billions of dollars put into fusion power research in the last fifty years (see e.g. this for data on just one US government funding source that has given billions of dollars to fusion research). Famously, fusion power is always about twenty-five years away. Sometimes problems are just tough. Sometimes the wall isn’t as sticky as you think it is.
Yes. But I still don’t see how this in any way results in a conclusion that the statement in question “is asinine”. So I’m still confused.
Please don’t tell me what my hypothesis was. The comment you were responding to was me raising the possibility that:
Notice that this statement says nothing at all about caloric restriction. The rest of your next paragraph is similarly irrelevant.
I don’t see what’s juvenile about this, and it would be slightly appreciated if you would make arguments that didn’t involve unnecessary personal attacks. Your observation about what is commonly consumed is quite accurate. So, if there’s something common in, say, the Western diet that reduces the effects of resveratrol or some similar compound, we might not even notice until we notice that the anti-aging compound is having much less of an effect than anticipated. And then you’d need to go and test those compounds. That sort of problem falls easily in the “remotely plausible” line of failures or in the “conceivable” world-lines you mentioned earlier.
That’s fine if your claim is “this is likely to work” or even “this is very likely to work.” But you’ve claimed that there’s no conceivable world-line where this isn’t working in fifty years and there hasn’t been societal collapse or something similar preventing the research.
At this point, I have to wonder if we are running into problems of inferential distance. This is why I tried to ask you to make your earlier chain of logic explicit. Reducing it to premises like the non-supernatural nature of biology and then saying etc. is not helpful for bridging inferential distance gaps.
If I say I am confused, then I mean that I am confused.
I mean that I take it as an axiomatic principle that my conversants are honest and rational actors until such time as they demonstrate otherwise.
Internally inconsistent means that the statements contradict themselves. Externally inconsistent means the statements contradict the known facts outside the statements.
You and I have the same datasets available for this conversation. You claim that you have read Stipp’s book, and yet you still claim that there has been strong historical interest in antiagapics research within the mainstream community. Stipp’s book contradicts this claim.
This is an internally inconsistent claim. You then go on to make many externally inconsistent statements, such as claiming that whether humans operate on a calorie-restricted metabolism is yet at question, or the claim that gerontology’s studies of human longevity have sufficiently little to do with determining the maximal human lifespan that you are confused by why I would even bring it up.
These are all points that tend to lead towards the conclusion of dishonesty or irrationality on your part. I’m not trying to claim that I have made that conclusion, just that I am confused as to how it is possible that you are not being dishonest or irrational, because I am continuing to operate on the axiomatic assumption that you are in fact honest and rational.
No, it doesn’t. No pharmaceutical will receive widespread adoption until such time as it has been rigorously studied for how it behaves and what its contraindications and/or interactions are. That includes diets. These are all things that are normally controlled for. There is nothing “remotely plausible” about your proposed scenario: the entire pharmaceutical approvals process would have to be abandoned for it to occur.
There isn’t any one single “this”, is my point. Science and medicine are converging significantly. There is now a concerted effort towards solving this particular problem. The scale of extension is, compared to what is conceivably possible, very minor.
My claim is that there are so many things which can be stated to be very likely to work that the idea of all of them failing would require a total overhaul of several fundamental models that I hold to be true based on their history of providing valid conclusions.
I don’t believe that the problem we have here is one of inferential distance. I am very familiar with what that problem looks like. What we have instead is the fact that somehow we are both operating with the same sets of available data yet reaching different conclusions.
Aumann’s Agreement Theorem has something to say about that, and with that I suppose I am now verging into the territory of claiming dishonesty/irrationality on your part. (I admit it could conceivably be on my part as well, but I have as yet no indications of inconsistency in any of my statements aside from the assumption of rational honesty on your part.)
I don’t know precisely what you mean by rational in this context. Given your invocation of Aumann’s theorem below, I presume you mean something like “close to perfect Bayesians.” This is a really bad idea. Humans can try to be more rational, but they are far from rational. This is not only a bad assumption about people around you, it is a bad assumption about yourself. Even weaker assumptions of rationality for people around you are probably a bad idea. Most humans you talk to will have only some passing desire to be rational, and even those who have more desire for rationality will know very little about things like the illusion of transparency, the Dunning-Kruger effect, the conjunction fallacy, underestimating inferential distance, or a host of other cognitive biases that even smart humans commonly fall prey to. Assuming rationality even in a weak sense is a quick way to have an unproductive conversation. Weak assumptions of rationality might make sense on LW, but even here they need to be very weak. I think you are misinterpreting a number of my comments and have other issues with what I’ve brought up, so the apparent contradiction with your axiom doesn’t exist in this context, but your axiom is probably a pretty bad idea anyways. (I can even point to an example where I explicitly engaged in a wretched conjunction fallacy on another thread here.)
I didn’t do that. In the post you are replying to, I literally pointed out explicitly what I said that you interpreted that way, and how it wasn’t that statement.
I’m not sure what to do at this point other than just repeat the above line but that seems unproductive. So instead I’ll engage in some cheap praeteritio.
Are we operating on the same data sets? Certainly they overlap a lot, but it looks like I’m operating with a much stronger attitude about general problems of overconfidence.
I generally don’t like the invocations of Aumann’s theorem that are so common on LW. They aren’t mathematically justified. The theorem applies to perfect Bayesians after they’ve exchanged all their data. The general idea that somewhat rational people who disagree should update in the direction of each other has valid arguments for it, but Aumann is not a very strong one.
But let’s pretend for a minute that it is a good argument or accept (since we both seem to agree on it) that people trying to be rational should probably update their confidence levels towards each other. If as you claim, you are assuming that I’m rational, have you updated in my direction? If not, why not? If so, by how much?
I thought it was that they should reach the same conclusions. Both might increase or decrease their confidence in a proposition, in which case only one at most would be moving in the direction the other began at.
Right, this is another reason that Aumann is not a good argument for what people use it for here. One can actually construct situations where they might radically alter their estimates in either direction depending on their priors and estimates. Most of the time it is used here, people seem to imply that people should update in each other’s direction. There are good reasons for that that aren’t Aumann.
Related to this, I don’t know of anything in the literature that precisely talks about almost-Bayesians in any useful way, but I strongly suspect that if it can be made more precise, one will be able to show that under reasonable assumptions more often than not the right thing will be to update towards each other. This follows from something like the averaging trick for estimating statistics. But I don’t know how to make this more precise.
I wonder if one could take Bayesian learning systems with different sets of training data or something similar and then see how they should update on the full set. It might be interesting to do this in a sort of Monte Carlo situation to see for reasonable distributions empirically how Bayesians should generally move.
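A minimal Monte Carlo along those lines is easy to sketch. Everything about the setup below is my own illustrative choice rather than anything from the thread: a Beta(1,1) prior on a coin’s bias, random sample sizes, and 10,000 trials. Two Bayesian learners each see their own sample from the same coin, and we count how often the posterior mean on the pooled data falls strictly between their two individual posterior means.

```python
import random

random.seed(0)

def post_mean(heads, tails, a=1.0, b=1.0):
    # Posterior mean of the coin's heads-bias under a Beta(a, b) prior.
    return (a + heads) / (a + b + heads + tails)

between = disagreements = 0
for _ in range(10_000):
    p = random.random()                                 # unknown true bias
    n1, n2 = random.randint(1, 30), random.randint(1, 30)
    h1 = sum(random.random() < p for _ in range(n1))    # learner 1's data
    h2 = sum(random.random() < p for _ in range(n2))    # learner 2's data
    m1 = post_mean(h1, n1 - h1)
    m2 = post_mean(h2, n2 - h2)
    if m1 == m2:
        continue                                        # no disagreement to resolve
    disagreements += 1
    # Posterior mean after updating on the union of both samples.
    pooled = post_mean(h1 + h2, (n1 - h1) + (n2 - h2))
    between += min(m1, m2) < pooled < max(m1, m2)

frac = between / disagreements
print(f"pooled estimate fell between the individual estimates in {frac:.1%} of cases")
```

With very small samples the pooled estimate can occasionally land outside the interval between the two individual estimates, which fits the earlier point that the right move is not always strictly “towards each other”; in the bulk of trials, though, it lands between them.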
In my opinion, that humans aren’t Bayesians is much less of a problem than that humans can’t share all their information.
That seems like a very valid point. Humans keep all sorts of reference classes and prior experience in the back of their minds. Much of it probably is so far back that making it conscious or even realizing which prior data is informing the conclusions could be quite difficult. At some level when people speak of intuition about things like math problems they are talking about precisely this sort of thing.
In which case it wouldn’t be proper to invoke Aumann’s Agreement; the two parties would be working with non-identical priors.
I invoked it in our conversation because you keep insisting that you’ve seen the same materials and generally agree with me on all points of the possibility of longevity augmentation and the nature of physics, chemistry, and biology.
That you and I are not “perfect Bayesians” is irrelevant to the condition of Aumann’s Agreement being applicable to our situation. The distance we have from such ‘perfection’ should be mappable to the distance we have from agreeing with the ‘ideally rational analysis’ of our strongly overlapping set of priors.
The trouble is, we seem to be diametrically opposed in this conversation. Which means, necessarily, that one of us has strayed rather far from that ideal. Which is a needlessly-complicated way of saying exactly what I’d already said.
Huh? No. I mean you can construct prior sets that will result in them moving radically in one direction or another. Exercise: for any epsilon > 0, there is a set of priors and a hypothesis space containing a hypothesis H such that one can construct two Bayesians who start off with P(H) < epsilon and, after updating, have P(H) > 1 - epsilon.
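For concreteness, here is one instance of that exercise; the signal model is my own choice, picked only because it makes the arithmetic transparent. Two agents share a prior under which two private signals are i.i.d. Bernoulli(p) with p < epsilon, and H is “both signals are 1”. Before updating, both assign P(H) = p² < epsilon; after each privately sees a 1, each assigns P(H) = p, still below epsilon; once the posteriors become common knowledge, each learns the other’s signal was also 1, and both jump to P(H) = 1.

```python
from fractions import Fraction

eps = Fraction(1, 100)
p = Fraction(1, 1000)        # signal probability; any p < eps works

# H = "both private signals are 1"; signals are i.i.d. Bernoulli(p).
prior_H = p * p              # shared prior, before any observation
assert prior_H < eps

# Each agent observes her own signal, which happens to be 1:
# P(H | s_i = 1) = P(s_j = 1) = p.
after_private_signal = p
assert after_private_signal < eps

# Posteriors become common knowledge. Announcing P(H) = p (rather than 0)
# reveals that the announcer's signal was 1, so each agent now knows
# both signals are 1.
after_exchange = Fraction(1, 1)
assert after_exchange > 1 - eps
```

Note that nothing here contradicts Aumann’s theorem: the agents agree once their estimates are common knowledge; the theorem says nothing against both of them moving radically upward.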
Well, we agree on the major things that pretty much everyone LW would agree on. We seem to agree on slightly more than that. But on other issues, especially meta issues like “how good in general are humans at estimating technological progress” we do seem to disagree. We also apparently disagree on how likely it is for a large number of scientists and engineers to focus on an area and not succeed in that area. The fusion power example I gave earlier seems relevant in that context.
This is not obvious to me. I think it is probably true for near-perfect Bayesians by some reasonable metric, but I’m not aware of a metric to measure how Bayesian non-Bayesians are. Moreover, there are many theorems in many areas where if one weakens the premises a small amount the results of the theorem don’t just fail but fail catastrophically. Without a better understanding of the underlying material I don’t think I can judge whether this is correct. My impression (very much not my area of math) is that very little has been done trying to look at how imperfect Bayesians should/will behave.
That seems like an extreme phrasing. The disagreement in question is whether in the next fifty years substantial life-extension is likely or that it is so likely that all conceivable world-lines from here which don’t result in civilization collapsing will result in substantial life extensions in the next fifty years. Diametrically opposed I think would be something like your view v. the view that life-extension is definitely not going to happen in the next fifty years barring something like aliens coming down and giving us advanced technology or some sort of Friendly AGI going foom.
See also lessdazed’s remark about the difficulty humans have of exchanging all relevant information.
If and only if their priors do not match one another. That’s the whole point of Aumann’s Agreement Theorem. For which there are proofs.
Not significantly. It is my belief that humans are especially poor at this.
I honestly do not recall having seen it, but these threads have gotten rather larger than my available attention span and/or recall capacity in general anyhow. That being said, the fusion power problem is actually a very good example of this. The overwhelming majority of the endeavor in that field has gone into Tokamak-style fusion power generation. There is broad acceptance of the idea that once the materials science has reached a specific state, Tokamak reactors will achieve “power-out” state. There is far less agreement in fusion energy research on just about every other approach, and those approaches have received marginal attention.
Contrastingly, there is a wide array of available solutions to practical antiagapics, and many of them have had basic demonstrations of viability of the underlying ‘dependent’ technologies. (Caloric restriction is documented, we have genetic samples of populations of various longevities to analyze for potential pharmaceuticals, we can already clone tissues and have a tool that allows the assembly of tissues into organs, there is a steady and progressive history of ever-more-successful non-biological implants, etc.) This renders antiagapics into a significantly different sort of endeavor than fusion. In fusion research, the “throw everything on the wall and see what sticks” approach simply hasn’t been used. And that is what I have espoused as the source of my confidence in my assertion.
We’re in a binary state here. I assert that P(X) ≈ 1 is true; you assert that this is false. These positions are diametrical, binary opposites.
Exchanging relevant information is exceedingly difficult when both parties can relate the same statements of fact and cite the same materials as necessarily resulting in opposing conclusions.
I don’t think this conversation’s going to go anywhere further at this point, by the way, so this is going to be the last comment I make in this particular thread.
I’m not sure what you mean here. For any epsilon > 0 I can start with two Bayesians who share the same priors and have different estimates for different statements (before their probabilities become common knowledge), and a hypothesis H such that before the updating P(H) < eps for both, and after the updating P(H) > 1 - eps for both. I’m not at all sure why you think I need two sets of priors to pull this off. Nothing in this contradicts Aumann’s theorem.
Also, you are wrong about Aumann’s theorem. It isn’t an iff; the implication goes only one way. You can start off with different Bayesians who have different priors and who after updating get the same posteriors. Aumann is simply talking about the case where they have the same priors. It says nothing about what happens if they have different priors. In fact, there are weaker theorems about limiting behavior in certain contexts where the priors disagree: as long as the priors aren’t too pathological, then as the number of observations increases the Bayesians start to agree.
A toy example that may help here:
Assume that there is a coin. Alice and Bob have different priors about this coin. Alice assigns a 25% chance that the coin is fair, a 20% chance that the coin always turns up heads, a 25% chance that the coin always turns up tails, and a 30% chance that the coin turns up heads 2/3rds of the time. Bob assigns a 20% chance that the coin is fair, a 25% chance that the coin always turns up heads, a 20% chance that the coin always turns up tails, and a 35% chance that the coin turns up heads 2/3rds of the time. Now, first consider what happens if on the first flip the coin turns up heads. Bob and Alice will now assign zero probability to the possibility that the coin always turns up tails. They now agree on that possibility. Furthermore, assume they keep flipping the coin and observing the results. Then it isn’t that hard to see that, as long as the coin actually is one of the four options, in the limiting situation Alice and Bob will agree. And you can explicitly state with what probability you should expect any given degree of disagreement between them.
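The toy example runs directly as code. In the sketch below, the 500-flip horizon, the random seed, and the choice of the 2/3-heads coin as the true coin are my own illustrative assumptions (and the stated prior weights are normalized as a safety step in case they don’t sum exactly to 1):

```python
import random

random.seed(0)

# Hypotheses: the coin's probability of landing heads.
HYPS = [0.5, 1.0, 0.0, 2/3]   # fair, always heads, always tails, 2/3 heads

def normalize(ws):
    s = sum(ws)
    return [w / s for w in ws]

alice = normalize([0.25, 0.20, 0.25, 0.30])
bob = normalize([0.20, 0.25, 0.20, 0.35])

def update(belief, heads):
    # Bayes: weight each hypothesis by the likelihood of the flip, renormalize.
    return normalize([b * (h if heads else 1.0 - h) for b, h in zip(belief, HYPS)])

# First flip comes up heads: both now assign zero to "always tails".
alice, bob = update(alice, True), update(bob, True)
assert alice[2] == 0.0 and bob[2] == 0.0

# Keep flipping a coin whose true bias is 2/3 heads.
for _ in range(500):
    flip = random.random() < 2/3
    alice, bob = update(alice, flip), update(bob, flip)

# Both concentrate on the true hypothesis, and so come to agree.
print(f"Alice P(2/3 coin) = {alice[3]:.4f}, Bob P(2/3 coin) = {bob[3]:.4f}")
```

Since both see the same flips, their posterior odds differ only by their prior odds ratio, which is swamped by 500 observations; that is the limiting agreement described above.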
This suggests to me that you may not be paying that much attention to what others (especially I) have written in reply to you. It may therefore make sense to go back and reread the thread when you have time.
This seems to be a valid point. To some extent the fusion effort doesn’t look like the approach to anti-aging research, insofar as most of the fusion research has focused on a single method. But I’m not convinced that this argument is that strong. There’s also been a fair bit of research into laser-confinement fusion, for example. And before it became apparent that they could not be efficient enough to provide power, Farnsworth-style fusors were also researched heavily. Industry still researches scaling and making Farnsworth fusors more efficient, because they make very nice portable neutron generators that one can turn on and off. So while the majority of the research funding has been going to toroidal magnetic confinement, there’s been a very large amount of money put into other types. It is only as a percentage of the total that the amount put in looks small.
By this definition any two people who disagree about a probability estimate are diametrically opposed. This does not seem like a good definition if one wants to capture the common intuition behind the terms. Certainly, in contrast, I don’t think that if you told someone in the general public that “this person thinks that life extension is likely in the next fifty years and this other person considers it to be a near certainty” they would describe this as diametric opposition.
Well, yes. That’s part of the problem. Humans have massive amounts of information that they’ve moved into their background processing. I have pretty decent intuition for certain classes of mathematical problems. But that’s from accumulated experience. I can pretty reliably make conjectures about those classes of problems. But I can’t point explicitly to what is causing me to do so. It is possible that we have differing background sets of data that are impacting our processing at a base level.
I have no such specially-tailored definition. The generics are sufficient for my context. I strongly dislike using specially-tailored definitions of widely used terms without first openly declaring that this is being done. It’s an “underhanded” sort of thing to do in conversation, and counterproductive.
A. Far from ideally rational. “Rational” is not a binary state; it is a relative one. One can be, as you note, “more” or “less” rational.
B. You are badly failing to properly parse my statement. I make the axiomatic assumption of rationality and honesty in those I interact with until such time as they give me reason to believe otherwise of them.