Never still seems extraordinary. I find myself entertaining hypotheses like “maybe the AI has never actually won”.
Eliezer Yudkowsky has been let out as the AI at least twice[1][2], but both games were conducted under a precommitment to secrecy.
I’d be surprised if he’s the only one who has ever won as the AI. I think it more likely that this is a visibility issue: despite Eliezer being a very high-profile figure in the AI safety memetic culture, you weren’t aware that he had won as the AI when you made your comment. And while I’m not aware of others who have won as the AI, I’d bet that reflects a gap in my knowledge rather than no one else actually having won.
This is further compounded by the fact that some (many?) games are conducted under a pre-commitment to secrecy, and the results that get the most discussion (and therefore the most visibility) are the ones with full transcripts for third parties to pick through.
I was already aware of those public statements. I remain rather less than perfectly confident that Yudkowsky actually won.
Forgive me if I misunderstand you, but you seem to be implying that, on two separate occasions, two different people lied (or were induced to lie?) about the outcome of an experiment.
So you’re implying that either Eliezer is dishonest, or both of his opponents were dishonest on his behalf. And you find this more likely than an actual AI win in the game?
We already know from the Basilisk that Eliezer is willing to deceive the community.
EY’s handling of the basilisk issue can be called many things (clumsy, rushed, unwise, badly thought out, counterproductive, poster child for the Streisand effect), but it was not deceitful.