It seems like there are a lot of negative comments about this letter. Even if it does not go through, it seems very net positive, simply because it makes explicit an expert position against large language model development due to safety concerns. This has several major effects: it enables scientists, lobbyists, politicians, and journalists to refer to the petition to validate their work on the risks of AI; it provides a concrete action step towards limiting AGI development; and it incentivizes others to think in the same vein about concrete solutions.
I’ve tried to formulate a few responses to the criticisms raised:
“6 months isn’t enough to develop the safety techniques they detail”: Besides the fact that the pause would be at least 6 months, the proposals seem relatively reasonable for something as far-sighted as this letter. Shoot for the moon and you might hit the sky, and in this case the sky is already happening: work on many of their proposals is underway. See e.g. the EU AI Act, funding for AI research, and concrete auditing and safety-evaluation work on models. Several organizations are also working on certification, and the scientific work towards watermarking is arguably mostly done already. There are also strong arguments for locking this in, since right now we are at the whim of OpenAI management on the safety front.
“It feels rushed”: It might have benefited from a few reformulations, but it does seem alright overall.
“OpenAI needs to be at the forefront”: Besides others already clearly lagging behind, what we need are assurances that these systems go well, rather than leaving that at the behest of one person. There is also a lot of trust placed in OpenAI management, and however warranted that is, it still amounts to a fully controlled monopoly on our future. Relying on that without ensuring safety just seems too optimistic (see also the differences between public-interview, for-profit sama and online sama).
“It has a negative impact on capabilities researchers”: This seems to be an issue mostly confined to pre-2020 attitudes and parts of European academia. If public figures like Yoshua cannot change the conversation, then who can? Should we just lean back and hope that they all realize it by themselves? Additionally, the industry researchers from DM and OpenAI I’ve talked with generally seem to agree that alignment is very important, especially as their management is clearly taking the side of safety.
“The letter signatures are not validated properly”: Yeah, this seems like a miss, though as long as the top 40 names are validated, the negative impacts should be relatively controlled.
All in good faith of course; it’s a contentious issue, but this letter seems generally positive to me.
Just repeating from my other comments: my main issue is the broad proposal of “let’s get governments involved”, which appears not to be aware of all the horrible and corrupt things governments do by default when they get involved (cf. Covid), and does not propose any way to avoid that dysfunction.
Just compare how collaboratively governments have handled global warming versus how the Gates Foundation has handled polio.
The top 40 names were definitely not validated at the time of posting (or even now?).
Has anyone representing the organization stated on the record that the remaining signatories are legitimate?