But “produc[ing] formidable rationalists” sounds like it’s meant to make the world better in a generalized way, by producing people who can shine the light of rationality into every dark corner, et cetera.
Precisely. The Singularity Institute was founded on Eliezer’s belief that trying to build FAI was the best strategy for making the world a better place. Making the world a better place is the goal; FAI is just a sub-goal. There is still consensus that FAI is the most promising route, but it does not seem wise to put all of our eggs in one basket. We can’t do all of the work that needs to be done within one organization, and we don’t plan to try.
Through programs like Rationality Boot Camp, we expect to identify people who really care about improving the world and to radically increase their chances of coming to correct conclusions about what needs to be done and then actually doing it. Not only will more highly motivated, rational people improve the world at a much faster rate, but they will also serve as checks on our sanity. I don’t expect that we are sufficiently sane at the moment to reliably solve the world’s problems, and we’re really going to need to step up our game if we hope to solve FAI. This program is just the beginning. The initial investment is relatively small, and if we can actually do what we think we can, the program should pay for itself in the future. We’d have to be crazy not to try this. It may well be too confusing from a PR perspective to run future versions of the program within SingInst, but if so, we can just spin it off into its own organization.
If you have concrete proposals for valuable projects that you think we’re neglecting and would like to help out with, I would be happy to have a Skype chat and then put you in contact with Michael Vassar.
Yet, as frequently discussed, the instrumental rationality techniques advocated here have not yet been shown to generate significantly more successful people, in research or any other area.
I am all in favor of attempting the impossible, but do you want to attempt one impossible task (generating significantly more rational/successful people in a way never before done) as a prerequisite to another impossible task (FAI)?