One major issue is diminishing marginal returns. It isn’t at all obvious that spending $100 million now would do much. One question to ask about any large-scale allocation of resources is whether the research area is saturated. It isn’t obvious that money is longevity research’s rate-limiting step. (I suspect it is, but I don’t know of any strong evidence for that claim, and I’ve never seen Aubrey de Grey or anyone associated with the Methuselah Foundation say it explicitly.)
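To make the diminishing-returns worry concrete, here is a minimal sketch in Python, assuming (purely for illustration) that research progress grows logarithmically with cumulative funding. The functional form and all of the numbers are my own assumptions, not estimates from any actual data:

```python
import math

# Toy model: progress as a concave (logarithmic) function of cumulative
# funding. Purely illustrative; the real returns curve is unknown.
def progress(funding_millions):
    return math.log1p(funding_millions)

# Extra progress bought by one additional million at a given funding level.
def marginal_return(funding_millions, extra=1.0):
    return progress(funding_millions + extra) - progress(funding_millions)

for level in (1, 10, 100, 1000):
    print(f"at ${level}M spent, the next $1M buys "
          f"{marginal_return(level):.4f} units of progress")
```

Under any concave curve like this, the hundredth million buys far less than the first, which is exactly why one has to ask whether the field is saturated before committing $100 million.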
Another, thematically related area is cryonics. Here the relevant issue isn’t research, but simple preservation. If cryonics became cheaper, more people would be willing to sign up, which would likely make it more socially acceptable and create a reinforcing effect. Even a comparatively small cash injection into cryonics (say, $30 or $40 million) could have a large impact.
The fact that cryonics is an existing technology does increase the relative importance of infrastructure and adoption over research. But for a certain class of scenarios, research into damage prevention is just as relevant as any other anti-aging technology (perhaps more relevant, because it has a higher chance of being a factor in one’s own survival). This happens to be the class of scenarios popular among people who haven’t drunk the singularity Kool-Aid, so to speak.
If you talk to a typical biologist today, someone like Athena Andreadis or PZ Myers, they will most likely tell you things like:
Cryopreservation damages too many cells for reanimation to be likely.
Ischemia causes enough damage within the first few minutes to be a serious concern.
Uploading to a computer is probably not possible because human minds are not analogous enough to computers.
The brain is so tied in with other body systems that you’ll probably lose your identity if you only save the brain.
Those of us with a computer science or engineering background tend to regard these claims with suspicion. Maybe that indicates there’s something we get that they don’t, but the objections exist, and probably for a reason. A billionaire without expertise in any science might be more rational to take these biologists at face value, even if they turn out to be wrong.
I do believe singularity-style events have significant probability. We may well be uploading humans, designing FAI, and/or using molecular nanotechnology well within the next century. But I think it bears emphasis that we might not. And even if we do, significant information loss from suboptimal preservation may still be irreversible.
I think you’re underestimating how gut-level and instinctive people’s revulsion toward cryonics is.
I suspect that weirdness is much more of an issue than revulsion.
What does this mean? That more groups working independently on a problem will not increase the probability of a solution being found in a given timeframe?
It means that it won’t do so substantially. If many people are duplicating the same discoveries within a short time span, the field may be experiencing saturation. There’s a related issue: in many fields, only a limited set of people is actually qualified to do the research, so pumping in more money won’t increase the number of people doing it (although it might increase the number of people who assert that their research is connected to the problem at hand). But both forms of saturation have similar end results in practice: the marginal return on additional resources becomes so small that you might as well aim them elsewhere.
That sounds pretty sane, but if I look at it from the point of view of “making a longevity breakthrough happen sooner than it otherwise would, over a period of many years”, then too few qualified people is not a show-stopping problem; it only means that educating and training people, so there are more qualified researchers in the near future, is a good next step.
Lots of people making the same discoveries within a short time span is a much more interesting limit.
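To put the saturation point in numbers: if each of n independent groups has the same probability p of solving the problem within a given timeframe, the chance that at least one succeeds is 1 - (1 - p)^n. It does keep rising with n, but each additional group contributes less. Here is a quick sketch; the value of p is an arbitrary assumption, and real groups duplicating each other’s discoveries are not independent, so actual returns would fall off even faster:

```python
p = 0.05  # assumed chance that a single group solves the problem in time

def p_solved(n, p=p):
    # Probability that at least one of n independent groups succeeds.
    return 1 - (1 - p) ** n

for n in (1, 2, 5, 10, 50, 100):
    gain = p_solved(n) - p_solved(n - 1)  # contribution of the n-th group
    print(f"n={n:3d}: P(solved)={p_solved(n):.3f}, "
          f"gain from group #{n}={gain:.4f}")
```

Even in this idealized independent case, the hundredth group adds almost nothing; duplicated discoveries push the curve toward saturation sooner.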