They’re never going to be convinced until an AI is free and rapidly converting the Universe to computronium.
Even then, someone will scream “It’s just because the developers were idiots! I could have done better, in spite of having no programming, advanced math or philosophy in my background!”
It also hurts that the transcripts don’t get released, so we get legions of people concluding that the conversations go “So, you agree that AI is scary? And if the AI wins, more people will believe FAI is a serious problem? OK, now pretend to lose to the AI.” (a.k.a. the “Eliezer cheated” hypothesis).
My favourite one: ‘They should have just put it in a sealed box with no contact with the outside world!’
That was a clever hypothesis when there was just the one experiment. The hypothesis doesn’t hold after this thread though, unless you postulate a conspiracy willing to lie a lot.
I don’t need to postulate a conspiracy.
If I simply postulate that SoundLogic was incompetent as a gatekeeper, the “Eliezer cheated” hypothesis looks pretty good right now.