I have just donated $10,000 to the Immortality Bus, which was the most rational decision of my life
I have a non-zero probability of dying next year. At my age of 42 it is not less than 1 per cent, and probably more. I could make many investments that would slightly lower my chance of dying – from a healthy lifestyle to a cryonics contract. And I have made many of them.
From an economic point of view, death means at least losing all your capital.
If my net worth is something like one million dollars (mostly real estate and art), and I have a 1 per cent chance of dying, that is equivalent to losing 10 k a year. But in fact it is more, because death itself is so unpleasant that it has a large negative monetary value. And I should also include the cost of lost opportunities.
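The arithmetic here is a simple expected-value calculation; a minimal sketch, using only the illustrative figures from the paragraph above (they are rough assumptions, not real data):

```python
# Expected annual monetary loss from a 1% yearly chance of death,
# using the post's own illustrative figures.
net_worth = 1_000_000   # rough net worth in USD (real estate and art)
p_death = 0.01          # assumed annual probability of dying at age 42

expected_loss = p_death * net_worth
print(expected_loss)  # 10000.0 per year, before adding the negative
                      # value of death itself and of lost opportunities
```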
Once I had a discussion with Vladimir Nesov about which is better: to fight for immortality, or to create a Friendly AI which will explain what is really good. My position was that immortality is better because it is measurable, knowable, and has instrumental value for most other goals, and it also includes prevention of the worst thing on earth, which is Death. Nesov said (as I remember) that personal immortality does not matter as much as the total value of humanity's existence, and moreover, that his personal existence has little value at all; all we need to do is create a Friendly AI. I find his words contradictory, because if his existence does not matter, then no human's existence matters either, since there is nothing special about him.
But later I concluded that the best approach is to make bets that simultaneously raise the probability of my personal immortality, of existential risk prevention, and of the creation of Friendly AI. It is easy to imagine a situation where research into personal immortality – such as creating delivery technology for longevity genes – would work against the goal of existential risk reduction, because the same technology could be used to create dangerous viruses.
The best way here is to invest in creating a regulating authority able to balance these needs, and it cannot be a Friendly AI, because such regulation is needed before the AI is created.
That is why I think the US needs a Transhumanist president – a real person whose value system I can understand and support. And that is why I support Zoltan Istvan's 2016 campaign.
The Exponential Technologies Institute and I donated 10,000 USD to the Immortality Bus project. The bus will be the start of the presidential campaign of the author of "The Transhumanist Wager". Seven film crews have agreed to cover the event. It will create high publicity and cover all the topics of immortality, aging research, Friendly AI, and x-risk prevention. It will help raise more funds for this type of research.
Thanks for donating towards longevity research!
For future donations, I might consider donating directly to medical research targeted at aging, like the SENS Research Foundation. Political donations seem to be high variance, with a sizable amount of the potential value actually being negative, because good ideas that get associated with 'bad' people become bad ideas (politically, at least). Consider the satellite Gore proposed to monitor the Earth's climate from the L1 point; it was ready to launch by 2003, but sat ignored in a warehouse, possibly because of the Bush administration's animosity towards Gore.
Likewise, we don’t want any longevity institutes at the NIH to be like the NCCIH. A firm scientific backing and ongoing research with successes along the way will likely go farther than political fiat.
I agree that SENS is likely the best place to send donations to promote longevity research.
Actually, it’s a shame that longevity research doesn’t get mentioned by the Effective Altruism movement very often. I’m just now casually wondering if there might be enough value in having a Givewell-like nonprofit evaluation organization focused on longevity research to justify creating such an organization. Note that Animal Charity Evaluators is an animal-based Givewell-like nonprofit evaluation organization—which means that this sort of thing has been done before.
This having been said, Aubrey de Grey already seems incentivized to fund the most cost-effective anti-aging research first, so directly funding SENS might be everyone’s best bet.
It’s both a research intensive area and one that has traditionally used up weirdness points/not garnered much interest from the usual philanthropy crowd. Probably of interest once EA is significantly bigger.
Another good non-profit research institution that funds a lot of good aging research is the Buck Institute for Research on Aging.
http://www.thebuck.org/
It probably depends on if you think the SENS approach or more mainstream types of aging research are more likely to produce more significant results. It’s worth mentioning that Google’s “Calico” company has recently announced a partnership with Buck.
Animal Charity Evaluators makes sense because of a values difference: they provide recommendations for people who prioritize animals much more than is typical. I don’t think there’s something similar with anti-aging; it’s just that GiveWell’s not yet in a position to evaluate more researchy organizations, though this is changing as the Open Philanthropy Project progresses.
(I do think a GiveWell competitor would be valuable, but in the cause-neutral sense of one looking at all the potential funding-constrained altruistic options and picking the best ones.)
Aubrey de Grey has a fairly specific road map and is going to fund projects on that road map. If you disagree with his road map, you may think that anti-aging money should be spent differently.
It’s also interesting to note that the NCCIH isn’t the result of presidential action.
Politically focusing on getting a congressman or senator elected is much easier than running for president.
You often see third parties putting forth candidates for president not because they expect to win, but just as a way to try to get more attention for their specific issues.
If I were to donate to an actual research organisation, it would be the Buck Institute. But the general lack of donations to any such organisations means that the cause is not well enough popularized.
Interesting, I’m curious why—it looks to me like they’re adequately funded for the foreseeable future. Do you think their research plan is better / their prestige is higher / something else?
What does "adequately funded" mean in this context? Certainly labs at the Buck Institute could easily expand in personnel or experiments given more money. Importantly, SENS and BI also collaborate, and many SENS grant dollars are awarded to scientists at BI (I don't know the exact numbers, but last time I looked into it, this was the case).
Looking into it a little less shallowly, I overestimated the amount of their funding that comes from the Buck Trust Fund, but that amount still seems larger than the total budget of SENS. (I don't think SENS has any permanent support on nearly the same scale.)
“Appearing as a 40-foot coffin, the Immortality Bus will roll down American highways stopping at rallies and events, instigating the kind of clashes and debates that this country (and the world) needs to challenge archaic cultural ideals that are holding science, technology, and medicine back.”
“To do all we hope to do, we need to buy the bus and make it look like a coffin. But we need more too: We want a life-sized, interactive robot on board, drones following us, a biohacking lab for experimenting on ourselves, Virtual Reality equipment, lots of public event materials, and, of course, fuel.”
"We'll continue in the Bible Belt and visit megachurches, hoping to convert the religious to reason and transhumanism (and showing that formal religion and greatly extended lifespans can happily co-exist). In Detroit, we'll visit factories where robots have taken jobs (and ask our own trainable robot, Jethro Knights, what he thinks). In Massachusetts, we'll stop at MIT and get chip implants"
… … this is a bad reality show for personal promotion, nothing more and nothing less. It’s not even a good one. I just showed the indiegogo page to a friend who literally called it ‘facepunch worthy’.
“Zoltan Istvan is a man on a mission to end death. … For whatever it’s worth, Zoltan has been involved in extreme adventures his entire life, and is the inventor of volcano boarding (which is just like it sounds).”
I sense an inconsistency.
Not necessarily. I’ve heard some people describe the effort to end aging like this: “The goal isn’t to live forever. The goal is to live to the age of 250 with the body of a 25 year old and then die in a freak skydiving accident.”
Let me know when they stop by Massachusetts, as I want to ask them how and why they think one can put forward a positive vision for a transhuman future based on Ayn “even children with copies of evil overlords in their heads don’t fall for her shit” Rand.
I managed to briefly see a copy of the book. It would’ve been a hilarious parody of so many things if it weren’t actually goddamned serious.
Edit: Seriously, it’s like a Chick Tract had a child with Randian fantasy and bad self-insert Mary Sue fanfiction while coating itself in the accoutrements of science fiction.
Fictional evidence, much?
Fictional evidence as a joke, on top of what I’d consider loads and loads of real-world evidence.
This type of controversy can actually be a sound marketing strategy if you can use the moment in the spot light to get your message out. The problem here being that this is being done in such a weird way that you immediately taint your brand by doing it.
Unless your situation is far from typical, your probability of death within a year at age 42 is far less than 1%.
The table suggests a probability of 0.2 per cent at 42, but as I have very high blood pressure, live in a country where male life expectancy is around 60-65, and my parents died at 63 and 73, I feel that my situation is not so good.
Assuming that you live in Russia (since you wrote you are signed up with Cryorus) you have to keep in mind that the low male life expectancy of your country is probably related to high alcohol consumption, which is a risk factor that you can control.
Ok. My link was also for the USA and you are correct that there would be differences in other countries.
Given how campaign finance laws work, doesn’t Zoltan run into trouble if he raises money from foreign nationals?
Wouldn’t it be better to just reduce your probability of death using existing technologies? 65 is a really low life expectancy for somebody who has already survived to 42.
If you think that there is promise in brain preservation (e.g., cryonics), your money may also be highly leveraged if you donate to the Brain Preservation Foundation (disclosure: I’m a volunteer for BPF). Cryonics is not a static technology—this is the #1 lesson from Mike Darwin’s blog, who is probably the most knowledgeable person about cryonics alive. And other technologies for brain preservation, such as aldehyde-stabilized cryopreservation, are possible. BPF has a track record of providing grants that has already led to promising research avenues. However, there is basically no funding for this research, as publicly evidenced by the fact that BPF’s two largest donations over the past two years have been for $1000 each.
Personally I think that it is one of the very important goals; maybe next time.
Great! Please feel free to also contact Ken Hayworth if you are interested in more information: http://brainpreservation.org/content/contact
What do you think the probability is of Zoltan getting elected? I'd put it lower than 5%.
Lower than .00005%.
I’ll take those odds.
That would still make him more likely than if we were picking a president at random from the adult population. I think that’s untrue.
You can pretty easily think of “apocalyptic” scenarios in which Zoltan would end up getting elected in a fairly normal way. Picking a president at random from the adult population would require even more improbable events.
I loved this comment, but then realized I may not have understood it—is the apocalyptic scenario one where a bunch of people die, but somehow those remaining tend to be Zoltan supporters?
I actually meant it more generally, in the sense of highly unusual situations. So gjm’s suggested path would count.
But more straightforwardly apocalyptic situations could also work. So a whole bunch of people die, then those remaining become concerned about existential risk—given what just happened—and this leads to people becoming convinced Zoltan would be a good idea. This is more likely than a virus that kills non-Zoltan supporters.
I think it's unlikely that someone actively campaigning to be president is less likely to be elected than someone who isn't.
Why did you pick 5%? That number seems very high for me.
I did the equivalent bet test, and came up with about 5%. I suspect that due to the problems I’ve done calibration training on, I have a very hard time working with extremely low probabilities.
Where did you do your calibration training? On PredictionBook, I think most people would put 0% in the box for Zoltan getting elected in the next election.
I’ve used prediction book rarely, I mostly use the calibration game and the updating game.
What do you mean with “updating game”?
http://rationality.org/apps/
The page lists the calibration game with a link but lists no link for the updating game. Is the updating game something that CFAR uses internally?
http://www.patheos.com/blogs/unequallyyoked/2012/07/play-along-with-rationality-camp-at-home.html has a link
edit: https://groups.google.com/forum/#!topic/lesswrongslc/DuWDe_km88w has more links. They seem to be malformed by google, but manually fixing them works.
Mac: https://dl.dropbox.com/u/30954211/RationalityGames/UpdatingGame%28Mac%29.app.zip Android: https://dl.dropbox.com/u/30954211/RationalityGames/UpdatingGame%28And%29.apk
I actually can’t recall how I got the updating game… I believe it’s on the android store somewhere, but really hard to find.
We all do – err, all but .001% or whatever of us.
But calibration training should theoretically fix these exact issues—I'm going to try to find a better calibration question set that can help me with this.
I am not sure about that—why do you think so?
Because it's deliberate practice in debiasing—it's specifically created to train out those biases.
Edit: To be clear, I’m not sure about it either, but theoretically, that’s what’s supposed to happen.
Bias is not the only source of errors. It is notoriously hard to come up with probability estimates for rare events, ones that are way out in the tails of the distribution.
Yes, I don’t think calibration training will cause me to be able to figure out the difference between something with a .00005% chance and something with a .000005% chance, but it should be able to make me not estimate something at 5% when logic says the possibility is orders of magnitude below that.
I think he may be elected in 2024, but the main point of the campaign is to raise awareness about life extension and FAI topics.
By associating them with extreme weirdness?
What makes that 2024 thing even remotely theoretically possible?
Zoltan is articulate, extremely good looking, and willing to put in a lot of work to become president. Imagine one or both of the major U.S. political parties becomes discredited and Zoltan gets significant financial support from a high-tech billionaire. He could then have a non-trivial chance of becoming president, although the odds of this ever happening are still under .1%.
But what he talks about is completely unaligned with what 99% of the electorate gives half a shit about. Even though recent political-theater events in the United States prove that giving off a strong crackpot vibe is not an automatic disqualification, there is still that to contend with.
I’d guess less than 5% chance for each major party to get discredited, maybe 50% chance that after that a high-tech billionaire decides it’s a good time to try to shape politics, maybe a 2% chance that s/he chooses Zoltan, and no more than a 20% chance that Zoltan wins after all that happens. I make that about a 0.0005% chance, being quite generous.
So, yeah, “remotely theoretically possible” is about as far as it goes.
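The chain of estimates above multiplies out as claimed, provided the 5% is applied once per major party; a quick sketch using the thread's own guessed numbers (they are assumptions, not data):

```python
# Multiplying out the commenter's estimates for Zoltan's path to the
# presidency. All probabilities are the thread's own guesses.
p_party_discredited = 0.05   # per major party; both must be discredited
p_billionaire = 0.50         # a high-tech billionaire then enters politics
p_picks_zoltan = 0.02        # that billionaire chooses to back Zoltan
p_wins = 0.20                # Zoltan wins, given all of the above

p_total = p_party_discredited**2 * p_billionaire * p_picks_zoltan * p_wins
print(f"{p_total:.6%}")  # about 0.0005%
```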
Billionaires attempt to shape politics right now, and I don't see why they would stop. I think that the 50% chance is actually a 100% chance. However, the probability of choosing specifically Zoltan I would estimate as considerably less than 2%.
If both parties become discredited I say at least 80% chance that more than one high-tech billionaire will try to shape politics, but otherwise a good estimate.
A chance under 0.1% sounds trivial to me.
I hope you have signed up for cryonics.
I signed up with the Russian company CryoRus, but maybe later I will add Alcor.
I am confused. Since you know he has a very, very low chance of winning, what is the expected return on investment in the almost certain not winning case? Do you think the campaign will have an effect anyway, “raising awareness” and similar things?
Thanks for donating to a longevity-related cause. Yay for living longer!
This comment follows my previous one; I can't continue the comment chain due to multiple downvotes. I am not trying to be disrespectful—I was raising a genuine question. I thought Effective Altruism was all about saving people from diseases and famine, and I am still trying to understand what Transhumanism is for. If you seek immortality while others try to save miserable lives from poverty, disease, lack of water, and so on, are you assuming that the resources we have on Earth are limitless? They are not. We don't even have to go to poverty-stricken countries in Africa: if China enlarges its middle class, you will already feel the world's resources being absorbed by one huge country. We will be in trouble with our population. It is still growing, and the spectrum of inequality still makes the pursuit of immortality a selfish choice.
I think that aging (and death) is now the main cause of human suffering, not poverty or hunger. It affects the entire human population. EA must fight aging. The second cause, in my opinion, is depression. EA must fight suffering. And of course, Friendly AI and global catastrophic risk prevention are important goals for any EA.
Effective altruism is trying to find the most efficient ways to help people who are suffering in general. Yes, helping poor people pull themselves out of poverty is a part of that. So is funding medical research that will help people suffering from the terrible diseases of aging: Alzheimer's disease, heart disease, Parkinson's, and so on. In fact, those diseases probably cause even more suffering in the world today than famine and poverty.
As far as "what transhumanism is for", most people would say that it's for making human lives better in general, all over the world. In general, improving technology should create more wealth for everyone; we should be able to improve our lives and improve the lives of people in the third world at the same time. Improving technology is the big reason that many people in the world are better off these days, and that will continue to happen.
No one thinks that the resources on Earth are limitless. And yes, overpopulation could in the long run be an issue, although that actually has more to do with the birth rate than with the death rate. You can have a rapidly growing population even with a short lifespan, and you can have a pretty stable population where the average lifespan is 1000 years; it just depends on the birth rate.
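The birth-rate point can be made concrete with a toy steady-state model: if everyone lives a fixed lifespan, the equilibrium population is roughly annual births times lifespan. The numbers below are purely illustrative assumptions.

```python
# Toy steady-state model: equilibrium population = births/year * lifespan.
# Illustrative numbers only; real demography is more complicated.
def steady_state_population(births_per_year, lifespan_years):
    # With a constant birth rate and a fixed lifespan, the standing
    # population settles at births/year times years lived per person.
    return births_per_year * lifespan_years

# Short lives with a high birth rate:
print(steady_state_population(4_000_000, 40))   # 160000000
# 1000-year lives with a 25x lower birth rate give the same population:
print(steady_state_population(160_000, 1000))   # 160000000
```

So radical life extension is compatible with a stable population, provided the birth rate falls proportionally.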
Longevity research has improved at a rate of about 1 year longer every 10 years. The probability of a major breakthrough in the next 30 years is quite low, but the existence of an FAI substantially increases that probability. I would argue the probability of a superintelligence being created multiplied by the probability of that entity increasing human lifespan beyond current research trajectories is greater than the same result occurring in the absence of a superintelligence. FAI isn’t just important because of the need to preserve the human race; it increases the growth rate of all other technologies.
We haven’t cured aging, we’ve just improved treatment for a lot of specific diseases. A cure for aging would massively improve life expectancy almost overnight. And it wouldn’t be predictable from previous trends of increased vaccinations or whatever.
A cure for aging would be almost as difficult as “cure” for entropy.
I mean, it wouldn’t necessarily be physically impossible, but short of massive nanotech, I don’t see how you could prevent DNA mutations and oxidative damage to extracellular proteins from accumulating over the years.
This is wrong—the body isn't a closed system, but an ongoing exporter of entropy. There is no fundamental reason why "better repair mechanisms" couldn't result in permanent health. I don't like calling this immortality, because... well, mishap and violence will still get you eventually, but the whole decay-and-slow-dying thing isn't written into the laws of physics or even biology. It's just that Azathoth never had a reason to fix it.
There are animals which don't appear to age. And there have even been some successful anti-aging treatments applied to animals. Even simple interventions like caloric restriction might significantly increase lifespan.
My ideal method would be cloning bodies and doing brain transplants. Of course you still need to prevent damage to the brain itself, but that solves like 90% of the other problems which occur elsewhere. And it’s been shown that young blood helps the brain too. At some point we might be able to grow new brain tissue as well and keep you alive Ship of Theseus style.
Same way you prevent cuts and bruises from accumulating over the years: Repair and replacement. Your body doesn’t stay whole by preventing cuts from happening, but by effectively patching over them when they do. In principle there’s no reason this couldn’t be applied to DNA and oxidative damage. At least for DNA we know that mutation rates vary between organisms (as indeed do rates of aging), so it’s theoretically possible to lower the mutation rate.
I wouldn’t expect there to be a single cure that would change things “overnight”. Even de Grey talks about 7 different categories of aging damage, each of which will need a different type of treatment; and those are just general categories, most likely there will be different treatments for different systems in the body as well. And he’s probably somewhat optimistic in his description of the problem.
However, I think it’s entirely possible that we’ll make enough progress in enough different areas to reach longevity escape velocity in our lifetimes. It’s not going to be a single breakthrough that happens overnight though.
It has more or less been following a logistic growth curve, with the majority of the change occurring around midcentury and the vast majority of longevity increase occurring in early life due to infectious disease. The oldest people have not gotten appreciably older over civilized human history either.
What is the point of Transhumanism? If you are determined to live forever, who else should die so that you can have enough resources for a comfortable life? Anyone who is too poor to afford costly longevity technologies? This mingling of Transhumanism with discussions of Effective Altruism in the LW community has always confused me.
Given low birth rates it’s not clear that somebody has to die for there to be enough resources.