Is it even necessary to run this experiment anymore? Eliezer and multiple other people have tried it and the thesis has been proved.
Further, the thesis was always glaringly obvious to anyone who was even paying attention to what superintelligence meant. However, like all glaringly obvious things, there are inevitably going to be some naysayers. Eliezer conceived of the experiment as a way to shut them up. Well, it didn’t work, because they’re never going to be convinced until an AI is free and rapidly converting the Universe to computronium.
I can understand doing the experiment for fun, but to prove a point? Not necessary.
they’re never going to be convinced until an AI is free and rapidly converting the Universe to computronium.
Even then, someone will scream “It’s just because the developers were idiots! I could have done better, in spite of having no programming, advanced math or philosophy in my background!”
It also hurts that the transcripts don’t get released, so we get legions of people concluding that the conversations go “So, you agree that AI is scary? And if the AI wins, more people will believe FAI is a serious problem? Ok, now pretend to lose to the AI.” (a.k.a. the “Eliezer cheated” hypothesis).
Even then, someone will scream “It’s just because the developers were idiots! I could have done better, in spite of having no programming, advanced math or philosophy in my background!”
My favourite one: ‘They should have just put it in a sealed box with no contact with the outside world!’
That was a clever hypothesis when there was just the one experiment. The hypothesis doesn’t hold after this thread though, unless you postulate a conspiracy willing to lie a lot.
I don’t need to postulate a conspiracy.
If I simply postulate SoundLogic is incompetent as a gatekeeper, the “Eliezer cheated” hypothesis looks pretty good right now.
I don’t see that it was obvious, given that none of the AI players are actually superintelligent.
If the finding was that humans pretending to be AIs failed, then this would weaken the point. As it happens, the reverse is true.
The claim is that it was obvious in advance. The whole reason AI-boxing is interesting is that the AI successes were unexpected, in advance.