This letter seems underhanded and deliberately vague in the worst case, or merely confused in the best case.
The one concrete, non-regulatory call to action is, as far as I can tell, “Stop training GPT-5.” (I don’t know anyone at all who is training a system with more compute than GPT-4, other than OpenAI.) Why stop training GPT-5? It literally doesn’t say. Instead, it offers a long, suggestive string of rhetorical questions about bad things an AI could cause, without actually accusing GPT-5 of any of them.
Which of them would GPT-5 actually break? Is it “Should we let machines flood our information channels with propaganda and untruth?” Probably not—that’s mostly an excuse to keep people with consumer GPUs from getting LLMs, given how consistent OpenAI has been about safeguards of all sorts, and how easily states could do this without LLMs anyway.
Is it “Should we automate away all the jobs, including the fulfilling ones?” That’s pretty fucking weird, because GPT-5 is still not gonna take trucking jobs.
Is it “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” That’s a hell of a prediction about GPT-5’s capabilities. My timelines aren’t nearly that short, and if everyone who signed this letter meant that, it would be very interesting, but… I think they almost certainly don’t. And if that one were within the purview of GPT-5, they probably shouldn’t train it after six months either!
But most of the “shoulds” are just… not likely! What is the actual crisis? Dunno! The letter neither states nor implies a clear model of that. Just that it is Urgent.
If you look at their citations, they cite both the existential-risk literature and the stochastic parrots paper. The latter is about how models are dumb; the former is about the risks of models that are smart. Those concerns pull in opposite directions.
Then there’s a call for a big-ass regulatory presence—with barely any concrete, nonprocedural goals, or any look at tradeoffs, or anything! It looks like “The regulators should regulate everything and do good stuff, and keep bad people from doing bad stuff.” If you don’t have a model of the risk, you could just throw a soup of lobbyists and regulators at the problem and hope it works out, but how likely is that?
(And if you wanted to, you could call for literally all of the above without stopping GPT-5. They’re just fucking unconnected. Why are they tied together in this omnibus petition? Probably so it feels like a Crisis that justifies stepping in.)
Eeesh, just so bad.
(I don’t know anyone at all who is training a system with more compute than GPT-4, other than OpenAI.)
I don’t think we heard anything about e.g. PaLM, PaLM-E, Chinchilla, or Gopher before the respective papers came out? (PaLM-E might already be “more powerful” than GPT-4, depending on how you define “more powerful”, since it can act as a multimodal language model like GPT-4 and control a robot on top.) Probably several organizations are working on something better than GPT-4; they just have no reason to talk about their progress before they have something that’s ready to show.
Is this really the reason why?