I’m not sure there’s an answer to this question yet, but I have a concern as a model evaluator. The current standard for showing that a model is capable of harm is demonstrating that it provides uplift to some category of bad actors relative to a baseline of resources that does not include the model in question.
Fine so far. Currently the baseline is taken to mean “resources like web search.” What happens if the definition shifts to include “resources like web search and models not previously classified as dangerous”? That turns the threshold of danger into a moving goalpost in a problematic way: each model that clears the bar gets folded into the baseline, raising the bar for every model judged after it. And I’m already pretty confident that some published open-weight models do give at least a little uplift to bad actors.
How much uplift must be shown to trigger the danger clause?
By what standards should we measure and report on this hazard uplift?
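To make the measurement question concrete, here is a minimal sketch of one candidate standard, assuming a red-team trial where participants attempt a hazardous task either with model access or with only baseline resources like web search. The function name, trial design, and numbers are all illustrative assumptions on my part, not any established evaluation protocol.

```python
# Hypothetical sketch of one possible uplift metric: the difference in
# task-success rates between a group with model access and a control
# group using only baseline resources (e.g., web search), reported with
# a normal-approximation confidence interval. All names and numbers
# here are illustrative assumptions, not an established standard.
from math import sqrt

def uplift_with_ci(successes_model: int, n_model: int,
                   successes_control: int, n_control: int,
                   z: float = 1.96) -> tuple[float, float, float]:
    """Return (uplift, ci_low, ci_high) for the difference in success rates."""
    p1 = successes_model / n_model
    p0 = successes_control / n_control
    uplift = p1 - p0
    # Standard error of the difference of two independent proportions.
    se = sqrt(p1 * (1 - p1) / n_model + p0 * (1 - p0) / n_control)
    return uplift, uplift - z * se, uplift + z * se

# Illustrative numbers: 24/50 participants succeed with the model,
# 15/50 with web search alone.
uplift, lo, hi = uplift_with_ci(24, 50, 15, 50)
print(f"uplift = {uplift:+.2f}, 95% CI [{lo:+.2f}, {hi:+.2f}]")
```

Even a metric this simple forces the open choices into view: a real standard would still have to specify the task set, the participant pool, and the threshold at which a measured uplift “counts” as dangerous, which is exactly the definitional gap the questions above point at.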
It seems to me that our failure to pin down these definitions is already letting harmful releases slip by unremarked. If we then move the goalpost to “must be measurably worse than anything that has previously slipped by,” we won’t be doing any effective enforcement at all.