1. I really like this blog, and have been lurking here for a few months.
2. Having said that, Eliezer’s carry-on over the AI-boxing issue does him no credit. His views on the feasibility of AI-boxing are only an opinion, but he has managed to give them weight in some circles with his two heavily promoted “victories” (the three “losses” are mentioned far less frequently). By keeping the transcripts unpublished, he teaches no lessons of value (“Wow, that Eliezer is smart” is not worth repeating; we already know that). I think the real reason the transcripts are still secret is simply that they are plain boring and contain no insights of value.
My opinion, for what it is worth, is that AI-boxing should not be discarded. The AI-boxing approach does not need to be perfect to be useful; it only needs to be better than the alternatives. AI-boxing has one big advantage over the “FAI” approach: it is conceptually simple. As such, it seems possible to more or less rigorously analyse its failure modes and take precautions. Can the same be said of FAI?
3. For a learning experience, I would like to be the AI in the suggested experiment, $10 even stakes, transcript to be published. My only available time is 9-11 pm Singapore time… e-mail milanoman at yahoo dot com to set up.
D. Alex