I can see reasons for proceeding indirectly. Eliezer is 30. He thinks his powers may decline after age 40. It’s said that it takes 10 years to become expert in a subject. So if solving the problems of FAI requires modes of thought which do not come naturally, writing his book on rationality now is his one chance to find and train people appropriately.
It is also possible that he is making mistakes. Eliezer and SIAI are, and always have been, inadequately supported, and people do make mistakes under such conditions. If you want to see how seriously mainstream AI takes the problem of Friendliness, just search MIT's recent announcements about their renewed AI research effort for the part where they talk about safety issues.
I have a suggestion: Offer to donate to SIAI if Eliezer can give you a satisfactory answer. (The terms of such a deal may need to be negotiated first.)
Do you have a link? I can’t find anything seemingly relevant with a little searching.
Neither can I—that’s the point.
No, I mean anything about the renewed AI research effort.
It’s the MIT Mind Machine Project.