I can imagine this growing into the default reference that people use when talking about whether labs are behaving responsibly.
I hope that this resource is used as a measure of relative responsibleness, and that this doesn’t get mixed up with absolute responsibleness. My understanding is that the resource essentially says, “here are some things that would be good– let’s see how the labs compare on each dimension.” The resource is not saying “if a lab gets a score above X% on each metric, then we are quite confident that the lab will not cause an existential catastrophe.”
Moreover, my understanding is that the resource is not taking a position on whether or not it is “responsible”– in some absolute sense– for a lab to be scaling toward AGI in our current world. I see the resource as saying “conditional on a lab scaling toward AGI, is it doing so in a way that is relatively more/less responsible compared to the others that are scaling toward AGI?”
This might be a pedantic point, but I think it’s an important one to emphasize– a lab can score in 1st place and still present a risk to humanity that reasonable people would deem unacceptable and irresponsible (or put differently, a lab can score in 1st place and still produce a catastrophe).