If our goal were to have current SI staff make lots of money, there would be much better ways to do that than monetizing research on, say, nuclear risk.
This isn’t an either-or proposition, though. Sure, nuclear risk and formal philosophy may not be huge money-makers, but what about info-computationalism, connectomics, mathematical psychology, decision analysis, human motivation, algorithmic information theory, the neuroscience of concept representation, not to mention the Riemann hypothesis? Algorithmic information theory alone should have massive practical applications, assuming I understand the term correctly.
Plus, there’s still all that fame to consider. If you made significant progress on something like the Riemann hypothesis or mathematical psychology, you would, at the very least, make a lot of smart people look up and notice you in a very positive light. You could then attract their talents toward SIAI… At which point (now that I think about it) the ability to offer them a nice salary would come in pretty handy.
As it happens, we have recently been seriously discussing doing one of the things you mention, but I shall not reveal which. And it wouldn’t be about the money per se, but about improving our ability to recruit the FAI team members we need. That’s our major bottleneck. The monetary cost of running an FAI team is trivial in terms of world GDP — something like $5–$20 million per year. But there are a fixed number of young John Conways in the world who can also be persuaded that Friendly AI is the most important thing they can do with their life, and that number looks to be frighteningly close to 0. Besides, if we solve the recruiting problem, I don’t think we’ll have trouble getting a few billionaires to fund 9 young John Conways doing Friendly AI research. We just need enough money to find those young John Conways and become the kind of organization that can productively use them.
...for the FAI team part of SI’s plans, that is. Of course we also engage in movement-building, academic outreach, etc.
In theory it doesn’t seem like you’d have to persuade them that FAI was the most important thing they could do with their life. Presumably there are a few young John Conways at Google, but I doubt any see Google as the most important thing they could do with their life. In other words, you might just need salary, visibility, and prestige comparable to current young John Conway employers.
For instance, what if there was an FAI research team affiliated with some prestigious university that was getting a moderate amount of positive press coverage?
Why not?
As John_Maxwell_IV points out above, this is a problem you can solve with money.
More specifically, young John Conways would consider donating their talents to an organization for two primary reasons:
1) It’s really, really awesome, or
2) It’s really, really lucrative.
By “awesome”, I don’t mean something like “you get to shoot nerf guns at work!”, but rather something like, “you get to solve interesting problems at the forefront of human knowledge”, or “you get to improve the world in a significant way”.
Approach #1 won’t work for you, because so far the SIAI has not accomplished anything truly world-changing (or even discipline-changing); nor are you planning on accomplishing anything like that in the near future (at least, not publicly), preferring to focus instead on academic outreach, etc. Sure, you have plans to work on such things eventually, but you need to attract that John Conway now. Ideally, he might want to join you simply because he believes in the cause, but, as you said, the number of such people in the world may be 0.
So, you’re left with option #2: money. Look at it this way: you’re doing all that applied research already, so why let it go to waste when you can use it to bootstrap your entire pipeline in record time?
When #2 happens at SI, it doesn’t look like SI making money. It looks more like Michael Vassar stepping down as President at Singularity Institute and hiring lots of rationalists to start a potentially very disruptive company.
If Personalized Medicine succeeds and becomes a multi-billion-dollar company, it would almost certainly fund an FAI team. That would be great, and it’s not at all impossible or even unlikely, but it’s not going to happen in record time.
Eh, there’s too much for me to explain here. (Which is not to say you’re wrong.)
I welcome you to pick up this thread again when I get around to discussing building an FAI team in this series.