Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial
We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.
There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI-researchers have signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk’s donation aims to support precisely this type of research: “Here are all these leading AI researchers saying that AI safety is important”, says Elon Musk. “I agree with them, so I’m today committing $10M to support research aimed at keeping AI beneficial for humanity.”
[...] The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI-researchers Stuart Russell and Francesca Rossi. [...]
The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy (a detailed list of examples can be found here [PDF]). “Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere”, says FLI co-founder Viktoriya Krakovna.
[...] Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories.
Elon Musk donates $10M to keep AI beneficial, Future of Life Institute, Thursday January 15, 2015
Excellent news. Considered together with the announcement of AI scientists endorsing a statement in favour of researching how to make AI beneficial, this is the best week for AI safety that I can remember.
Taken together with the publication of Superintelligence, the founding of FLI and CSER, and the transition of SI into the research organisation MIRI, it’s becoming clearer that the last few years have started to usher in a new chapter in AI safety.
I know that machine learning capabilities are also increasing, but let’s celebrate successes like these!
Now we can say that we were into AI risk before it was cool.
If you’d asked me two years ago I would have put today’s situation in the most optimistic 10% of outcomes. It’s nice to be wrong in that direction :)
Damn. Good point. Woo!
Is it excellent news? Ignoring the good that will come from the money, shouldn’t the fact that Musk is donating the funds increase our estimate that AI is indeed an existential threat? Imagine you have a condition that a very few people think will probably kill you, but most think is harmless. Then a really smart doctor examines you, says you should be worried, and pays for part of your treatment. Although this doctor has helped you, he has also lowered your estimate of how long you are going to live.
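To make the direction of the update concrete, here is a minimal sketch of the Bayesian arithmetic behind the doctor analogy. All of the numbers are hypothetical, chosen only to illustrate which way the estimate moves:

```python
# Minimal sketch of the Bayesian update in the doctor analogy.
# All probabilities below are made up for illustration only.

prior = 0.05                  # prior probability the condition is actually dangerous
p_worry_if_dangerous = 0.8    # chance a smart doctor says "worry", given real danger
p_worry_if_harmless = 0.2     # chance the doctor says "worry" anyway, given no danger

# Bayes' rule: P(dangerous | doctor worries)
evidence = (p_worry_if_dangerous * prior
            + p_worry_if_harmless * (1 - prior))
posterior = p_worry_if_dangerous * prior / evidence

print(f"prior: {prior:.3f}, posterior: {posterior:.3f}")
# prior: 0.050, posterior: ~0.174
```

The doctor’s help may improve your prospects, but conditioning on his concern still moves the probability estimate upward, which is the asymmetry the comment is pointing at.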
Musk’s position on AI risk is useful because he is contributing his social status and money to the cause.
However, other than being smart, he has no special qualifications in the subject—he got his ideas from other people.
So, his opinion should not update our beliefs very much.
Should not update our beliefs much. Musk is a smart guy, he has access to roughly the same information as we do, and his interpretation of that information is that the danger is enough to justify him in spending millions on it. But not, e.g., enough for him to drop everything else and dedicate his life to trying to solve it.
I think most of us should adjust our beliefs about the danger either up or down a little in response to that.
Disagree. Meet a lot of the Less Wrong-style people in real life and a totally different respectable elite emerges than what you see on the forums, and some people’s reputations collapse. Musk is far more trustworthy. Less Wrong people overestimate themselves.
Will you elaborate?
Uh. I don’t know; you see many more dimensions, which causes you to harshly devalue a significant number of individuals while finding you missed out on many good people. Less Wrong people are incredibly hit or miss, and many are “effective narcissists” who have highly acute issues that they use their high verbal intelligence to argue away.
Also, there is a tendency to speak in extreme declarative statements and to use meta-navigation in conversations as a crutch for a lack of fundamental social skills. Furthermore, I have met many quasi-famous LW people who are unethical in a straightforward fashion.
A large chunk of the Less Wrong people you meet, including named individuals, turn out to be not so great, or great in ways other than intelligence that you can appreciate them for. The great people you do meet, however, significantly make up for and surpass the losses.
When people talk about “smart LW people” they often judge via forum posts or something, which turns out to be only a moderately useful metric. If you ever meet the extended community I’m sure you will agree. It’s hard for me to explain.
tl;dr Musk is just more trustworthy and competent overall unless you are restricting yourself to a strict subset of Less Wrong people. Also LW people tend to overestimate how advanced they are compared to other epistemic blocs that are as elite, or are more elite.
http://lesswrong.com/user/pengvado/ <---- is someone I would trust. Not every other LW regular.
The halo effect is when your brain tricks you into collapsing all of a person’s varied attributes and abilities into a single dimension of how much you respect them. Dan Quayle’s success in politics provides limited evidence that he’s a good speller. Satoshi Nakamoto’s high status as the inventor of Bitcoin provides limited evidence that he is good looking. Justin Bieber’s success as a pop star provides limited evidence that he’s good at math. Etc.
Elon Musk is famous for being extremely accomplished in the hi-tech world. This provides strong evidence that Musk is “competent”. “Trustworthy” I’m not as sure about.
Less Wrong users can be highly rational and make accurate predictions worth listening to while lacking “fundamental social skills”.
An individual Less Wronger who has lots of great ideas and writes well online might be too socially anxious to be a good conversationalist.
I was being very generous in my post. Less Wrong has many people who have megalomaniac tendencies. This would be almost impossible to argue against. I gave a wide margin and said there are many great people, but to pretend that there aren’t illegitimate people who hide and are many in number is something else entirely.
Elon Musk is certainly trustworthy. You can gauge trustworthiness via the amount of accumulated capital, because of the effect trust has on such serial accumulation.
You have referenced relatively elementary mistakes that do not apply in this situation. Your examples are extremely off base.
Dan Quayle as a good speller is irrelevant and unbelievable; that is not even in the same ballpark as what I was saying.
Satoshi Nakamoto being good looking because of his invention is ridiculous.
The Justin Bieber example is worse.
Social approximation is far more robust than people’s assumed identities and online personas. I am not foolishly judging people randomly. Most of the “effective altruists” you meet are complete pushovers, and if they wish to justify their megalomaniac tendencies by constantly going on and on about how effective they are, they should be able to put their foot down, stop bad people, and set boundaries, or cut the effective act until they are able to.
They use their “effective altruism” to make up for the huge ethical opportunity costs they incur through what they DO NOT do. They then engage in extremely obscurantist arguments as cover.
As an example, see the lack of ethics in mathematics that many people, e.g. Grothendieck or Perelman, have complained about.
Mathematicians trend towards passivity and probably are “good people”, but they do not stop their peers from engaging in unethical behavior, and thus that is the sordid state of mathematics. Stopping bad people is primary; doing good things is secondary. Effective altruism is incomplete until its adherents admit that it is not about doing good things but about stopping bad things, and you need a robust personality structure to do so.
Your comment is enlightening, thanks for sharing your thoughts.
What about determining how much money investors will spend attempting to develop an AI, or how difficult it would be to coordinate the activities of programmers to create an AI, or how much to trust the people who claim that unfriendly AI is a threat?
I was already pretty convinced it was a problem, but was very pessimistic about the chances of anyone taking it seriously, so the effect on the latter greatly outweighs the effect on the former for me.
Map, territory.
Sorry, general, the map you have been using is wrong; the correct one shows that the enemy is about to destroy us. This would be horrible news.
Would it be better to remain ignorant? It’s a false choice if you think the comparison is between being told the enemy is about to destroy us vs the enemy being where we thought they were. The enemy is about to destroy us, whether we know about it or not. The real alternative is remaining ignorant until the very last moment. It is better to be told the truth, no matter how much you hope for reality to be otherwise.
No, and I think Musk is doing a great thing, but the fact that he thinks it needs to be done is not “excellent news”. I think we are talking past each other.
I think that this is almost as much money as has gone into AI existential risk research to all organizations ever.
Yep. Check out the MIRI top donors list to put the amount in perspective.
The survey indicates that LW has nontrivial experience with academia: 7% of LW has a PhD and 9.9% do academic computer science. I wonder if it’d be useful to create an “awarding effective grants” repository type thread on LW, to pool thoughts on how grant money can be promoted and awarded to effectively achieve research goals. For example, my understanding is that there is a skill called “grantwriting” that is not the same as research ability that makes it easier to be awarded grants; I assume one would want to control for grantwriting ability if one wanted to hand out grants with maximum effectiveness. I don’t have much practical experience with academia though… maybe someone who does could frame the problem better and go ahead and create the thread? (Or alternatively tell me why this thread is a bad idea. For example, maybe grantwriting skill consists mostly of knowing what the institutions that typically hand out grants like to see, and FLI is an atypical institution.)
An example of the kind of question we could discuss in such a thread: would it be a good idea for grant proposals to be posted for public commentary on FLI’s website, to help them better evaluate grants and spur idea sharing on AI risk reduction in general?
Edit: Here’s the thread I created.
Do we know why he chose to donate in this way: donating to FLI (rather than FHI, MIRI, CSER, some university, or a new organization), and setting up a grant fund (rather than directly to researchers or other grantees)?
An FLI person would be best placed to answer. However, I believe the proposal came from Max Tegmark and/or his team, and I fully support it as an excellent way of making progress on AI safety.
(i) All of the above organisations are now in a position to develop specific relevant research plans, and apply to get them funded—rather than it going to one organisation over another. (ii) Given the number of “non-risk” AI researchers at the conference, and many more signing the letter, this is a wonderful opportunity to follow up with that by encouraging them to get involved with safety research and apply. This seems like something that really needs to happen at this stage.
There will be a lot more excellent projects submitted for this than the funding will cover, and this will be a great way to demonstrate that there are a lot of tractable problems and immediately undertake-able work to be done in this area—this should hopefully both attract more AI researchers to the field, and additional funders who see how timely and worthy of funding this work is.
Consider it seed funding for the whole field of AI safety!
Sean (CSER)
Seconded (as an FLI person)
Vika, thank you and all at FLI so much for all you’ve done recently. Three amazing announcements from FLI on each others’ heels, each a gigantic contribution to increasing the chances that we’ll all see a better future. Really extraordinary work.
Thanks Paul! We are super excited about how everything is working out (except the alarmist media coverage full of Terminators, but that was likely unavoidable).
My guesses: he chose to donate to FLI because their star-studded advisory board makes them a good public face of the AI safety movement. Yes, they are a relatively young organization, but it looks like they did a good job putting the research priorities letter together (I’m counting 3685 signatures, which is quite impressive… does anyone know how they promoted it?) Also, since they will only be distributing grants, not spending the money themselves, organizational track record is a bit less important. (And they may rely heavily on folks from MIRI/FHI/etc. to figure out how to award the money anyway.) The money will be distributed as grants because grant money is the main thing that motivates researchers, and Musk wants to change the priorities of the AI research community in general, not just add a few new AI safety researchers on the margin. And holding a competition for grants means you can gather more proposals from a wider variety of people. (In particular, people who currently hold prestigious academic jobs and don’t want to leave them for a fledgling new institute.)
Most of the signatures came in after Elon Musk tweeted about the open letter.
This is awesome!!!
Upvoted for phaticness
Is FLI using this money to fund research proposals? Where would one send such a proposal for consideration?
Yes. The link with guidelines, grant portal, should be on FLI website within the coming week or so.
Interesting. Will MIRI be applying for grants?
Using this picture http://futureoflife.org/images/conference150104.jpg as evidence I imagine they will (and should).
wow everyone is so squinty
It was so bright out! The photo has my eyes completely closed, unfortunately. :)
Next time take lots of pictures and release a composite :)
This is good news. In general, since all forms of existential risk seem underfunded as a whole, funding more to any one of them is a good thing. But a donation of this size for AI specifically makes me now start to wonder if people should identify other existential risks which are now more underfunded. In general, it takes a very large amount of money to change what has the highest marginal return, but this is a pretty large donation.
This will depend on how many other funders are “swayed” towards the area by this funding and the research that starts coming out of it. This is a great bit of progress, but alone is nowhere near the amount needed to make optimal progress on AI. It’s important people don’t get the impression that this funding has “solved” the AI problem (I know you’re not saying this yourself).
Consider that Xrisk research in e.g. biology draws usefully on technical and domain-specific work in biosafety and biosecurity being done more widely. Until now AI safety research hasn’t had that body to draw on in the same way, and has instead focused on fundamental issues in the development of general AI, as well as outlining the challenges that will be faced. Given that much of this funding will go towards technical work by AI researchers, this will hopefully get this side of things going in a big way, and help build a body of support and involvement from the non-risk AI/CS community, which is essential at this moment in time.
But there’s a tremendous amount of work that will need to be done—and funded—in the technical, fundamental, and broader (policy, etc.) areas. Even if FHI/CSER are successful in applying, the funds likely to be allocated from this pot to the work we’re doing are not going to be near what we would need for our respective AI research programmes (I can’t speak for MIRI, but I presume this to be the case also). But it will certainly help!
GiveWell is on the case, and has said it is looking at bio threats (as well as nukes, solar storms, interruptions of agriculture). See their blog post on global catastrophic risks potential focus areas.
The open letter is an indication that GiveWell should take AI risk more seriously, while the Musk donation is an indication that near-term room for more funding will be lower. That could go either way.
On the room for more funding question, it’s worth noting that GiveWell and Good Ventures are now moving tens of millions of dollars per year, and have been talking about moving quite a bit more than Musk’s donation to the areas the Open Philanthropy Project winds up prioritizing.
However, even if the amount of money does not exhaust the field, there may be limits on how fast it can be digested, and an efficient growth path that favors gradually increasing activity.
Did you mean to write January instead of February?
Damn those time travelers, always forgetting the current date. >.>
Elon Musk donates $10M and it only takes a month from that point to invent an AI capable of time-travel. Truly, money makes the world go round.
It was a typo on the FLI website, which has now been corrected to January.
I’m sure I cut and pasted the date from the FLI announcement, so I can only assume that mistake was present there at one point!
This seems like pretty big news. Anyone think this post should be moved to Main?
[pollid:810]
ETA: Is anyone in favor of Nightspacer’s idea below?
[pollid:811]
My only problem with moving this to Main is that fewer people check Main as often (and not promoting it would be worst of all). But I could see a case that after a week it could be moved there, as it would be a big piece of news that could stay at the top for a while.
Is the proliferation of a policy of not checking main a problem? Shouldn’t we do something about it? Something like posting extremely relevant articles to main?
Seconded. I don’t check Main anymore. Maybe once a month.
Ah, good point.
That sounds like it could work. I’ll add it to the poll.
Dale Carrico weighs in:
Futurology’s Shortsighted Foresight on AI
http://www.wfs.org/blogs/dale-carrico/futurologys-shortsighted-foresight-ai
Carrico puts quite a bit of work into some of his posts on his blog, but I wonder why he bothers, given how few comments his posts receive, compared with bloggers who have significant followings like Megan McArdle, Vox Day, Roosh Valizadeh, Heartiste or Steve Sailer.