At this time, my opinion stands with Jasen and SIAI and not with Mitchell. (But I haven’t downvoted this post; it’s worth discussing and is well-written.)
I think JoshuaFox hit several of the right points. Here are my main reasons for thinking the rationality boot camp is a good idea:
1. This is training to win at life.
Most people who go through the summer program will not end up working on AI directly. But this kind of boot camp is training for winning at life, like a boot camp on public speaking or social networking or dating skills. The boot camp Jasen is putting together will likely be more useful to more people than what SIAI has done in the past, or than a summer program all about AI.
2. This boot camp recruits the right people.
If you can’t handle this kind of rigorous and specific rationality training, it’s less likely you will be able to make useful, long-term contributions to the project of Friendly AI. If you’re only up to the level of Traditional Rationality, you are not cut out to work on the single most important and difficult problem humanity has to face.
3. FAI doesn’t allow experimentation. You have to be optimally rational.
SIAI must take steps to ensure that its people are about as rational as humans are capable of being. One little bias could fuck the whole planet. There is no do-over after your experiment fails.
All of these seem to me to ignore Mitchell’s claim that:
as described, there’s no indication that graduates of the Boot Camp will then go on to tackle conceptual problems of AI design or tactics for the Singularity