This works without OpenAI’s cooperation! No need for a change of heart.
Based on this sentence, I believe you misunderstood the point of the post. (So all of the follow-up discussion seems off-topic to me.)
Therefore, an attempt to clarify: the primary goal of the post is to have a test such that, if the “AI risk is a big deal” hypothesis turns out to be correct, we get a “risk awareness moment”. (In this case, a single moment where a large part of the public and decision-makers simultaneously realise that AI risk is likely a big deal.)
As a result, this doesn’t really work without OpenAI’s cooperation (or without OpenAI being, e.g., pressured into it by the government or the public). I suggest that discussing that