Has Eliezer explained somewhere (hopefully on a web page) why he doesn’t want to post a transcript of a successful AI-box experiment?
Have the successes relied on a meta-approach, such as saying, “If you let me out of the box in this experiment, it will make people take the dangers of AI more seriously and possibly save all of humanity; whereas if you don’t, you may doom us all”?
I just thought of this approach myself. It would be cheating, but it's still a good idea.