@Phil_Goetz: Have the successes relied on a meta-approach, such as saying, “If you let me out of the box in this experiment, it will make people take the dangers of AI more seriously and possibly save all of humanity; whereas if you don’t, you may doom us all”?
That is basically what I suggested in the previous topic, but at least one participant denied that Eliezer_Yudkowsky did that, calling it a cheap trick, while some non-participants said it meets both the spirit and the letter of the rules.
It would be nice if Eliezer himself would say whether he used meta-arguments. “Yes” or “no” would suffice. Eliezer?