Thank you so much for doing this. It makes a very big difference.
Some comments:
Strategy #1, Point 2e seems to cover things that belong in either point 3 or point 4. Also, points 3 and 4 seem to bleed into each other.
If the rationality training is being spun off to allow SingInst to focus on FAI, why isn’t the same done with the Singularity Summit? The slightly bad-faith interpretation of the missing explanation would be that retaining the training arm faced internal opposition while retaining the Summit did not. If that is not an inference you want readers to draw, the document should address it.
The level 2 plan includes “Offer large financial prizes for solving important problems related to our core mission”. I remember cousin_it mentioning that he’s had very good success asking for answers in communities like MathOverflow, but that the main cost was in formalizing the problems. It seems intuitive that geeks are not strongly motivated by cash, but are very much motivated by a delicious open problem (and the status that solving it brings). Before resorting to ‘large financial prizes’, shouldn’t level 1 include ‘formalize open problems and publicise them’?
Thank you again for publishing a document so that this discussion can be had.
If the rationality training is being spun off to allow SingInst to focus on FAI, why isn’t the same done with the Singularity Summit? The slightly bad-faith interpretation of the missing explanation would be that retaining the training arm faced internal opposition while retaining the Summit did not. If that is not an inference you want readers to draw, the document should address it.
Just throwing it out there: it’s the SIAI, not the RIAI.
Right now one could legitimately be confused, given that Eliezer is working on rationality books and some of SIAI’s more visible programs are rationality training.
This spin-off makes sense: the SIAI’s goal is not improving human rationality. The SIAI’s goal is to try to make sure that if a Singularity occurs, it is one that doesn’t destroy humanity or change us into something completely counter to what we want.
This is not the same thing as improving human rationality. The vast majority of humans will do absolutely nothing connected to AI research. Improving their rationality is a great goal, and probably has a high pay-off, but it is not the goal of the SIAI. When people give money to the SIAI they expect that money to go towards AI research and related issues, including the Summits. Moreover, many people who are favorable to rational thinking don’t necessarily see a Singularity-type event as at all likely. Many even on the saner end of the internet (e.g. the atheist and skeptic movements) consider it to be one more fringe belief; associating it with careful rational thinking is more likely to bring down LW-style rationality’s status than to raise the status of Singularity beliefs.
From my own perspective, as someone who agrees with much of the rationality material, considers a fast hard takeoff of AI unlikely, but thinks it likely enough that someone should be paying attention to it, this seems like a good strategy.
If the rationality training is being spun off to allow SingInst to focus on FAI, why isn’t the same done with the Singularity Summit? The slightly bad-faith interpretation of the missing explanation would be that retaining the training arm faced internal opposition while retaining the Summit did not. If that is not an inference you want readers to draw, the document should address it.
Just speculation here, but the rationality training seems to have very different scalability properties from the rest of SingInst; in the best case, there could end up being a self-supporting rationality training program in every major city. That would be awesome, but it could also dominate SingInst’s attention at the expense of all the other work if it weren’t partitioned off.
Thanks for your comments. It may be the case that the Singularity Summit is spun off at some point, but the higher priority is to spin off rationality training. Also see jimrandomh’s comment. People within SI generally seem to agree that rationality training should be spun off, but we’re still working out how best to do that.
Before resorting to ‘large financial prizes’, shouldn’t level 1 include ‘formalize open problems and publicise them’?
Yes. I’m working (with others, including Eliezer) on that project right now, and am quite excited about it. That project falls under strategy 1.1.
It appears that all the responses to my comment read me as recommending that the Summit be spun off. I am not saying anything like that. I am commenting on the document and raising what I think is a reasonable question in the mind of a reader. So the point is not to convince me that keeping the Summit is a good idea; the point is to revise the document so that this question does not arise. Explaining how the Summit fits into the re-focused mission but the rationality training does not would do the trick.
I’m particularly happy that you are working on formalizing the problems. Does this represent a change (or compromise) in E’s stance on doing research in the open?
I don’t think it was ever Eliezer’s position that all research had to be done in secret. There is a lot of Friendliness research that can be done in the open, and the ‘FAI Open Problems’ document will outline what that work is.
Before resorting to ‘large financial prizes’, shouldn’t level 1 include ‘formalize open problems and publicise them’?
The trouble is, ‘formalizing open problems’ seems like by far the toughest part here, and it would thus be nice if we could employ collaborative problem-solving to somehow crack this part of the problem… by formalizing how to formalize various confusing FAI-related subproblems and throwing them on MathOverflow? :) Actually, I think LW is a more appropriate environment for at least attempting this endeavor, since it is, after all, what a large part of Eliezer’s sequences tried to prepare us for...