Hmm, so a transcript of such an experiment was finally shown, despite Eliezer’s rule against it. Well, having seen a failed attempt to win as the AI, I don’t believe the information it reveals makes the AI’s future attempts futile, as [Eliezer] predicted, nor that it would even if this had been a successful AI attempt. Any thoughts? Surely a transhuman AI’s ability to win wouldn’t be compromised by someone having seen a transhuman AI win before, would it?