Maybe the underlying reason we are interpreting the evidence in different ways is that we are holding OpenAI to different standards:
Compared to a standard company, having a feedback button is evidence of competence. Quickly incorporating new training data is also a positive update, as is having an explicit visual indication of illegitimate questions.
I am comparing OpenAI to the extremely high standard of “Being able to solve the alignment problem”. Against this standard, having a feedback button is absolutely expected, and even things like Eliezer’s suggestion (publishing hashes of your gambits) should be obvious to companies competent enough to have a chance of solving the alignment problem.
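To make that suggestion concrete: publishing hashes is a commitment scheme, where you publicly commit to a prediction without revealing it, then later reveal the text and let anyone verify it was written in advance. A minimal sketch in Python, assuming a salted SHA-256 commitment (the prediction text here is purely hypothetical, not anything OpenAI has published):

```python
import hashlib
import secrets

def commit(prediction: str) -> tuple[str, str]:
    """Publish the returned digest now; keep the salt and prediction private."""
    # A random salt prevents brute-forcing short or guessable predictions.
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + prediction).encode()).hexdigest()
    return digest, salt

def verify(digest: str, salt: str, prediction: str) -> bool:
    """Anyone can check a revealed prediction against the published digest."""
    return hashlib.sha256((salt + prediction).encode()).hexdigest() == digest

# Hypothetical example: commit before launch, reveal after.
digest, salt = commit("We expect under 1% of users to find jailbreaks in week one.")
# ...later, publish (salt, prediction) and anyone can verify the commitment:
assert verify(digest, salt, "We expect under 1% of users to find jailbreaks in week one.")
```

The point is that this costs a competent organization almost nothing, which is why it serves as a reasonable bar.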
It’s important to be able to distinguish factual questions from questions about judgments. “Did the OpenAI release happen the way OpenAI expected?” is a factual question that has nothing to do with the question of what standards we should have for OpenAI.
If you get the factual questions wrong, it becomes very easy for people within OpenAI to dismiss your arguments.
I fully agree that it is a factual question, and OpenAI could easily shed light on the circumstances around the launch if they chose to do so.