Cross-posted from the EA Forum
The New York Times: Sundar Pichai, CEO of Alphabet and Google, is trying to speed up the release of AI technology by taking on more risk.
Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times.
The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.
The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.
This change is a response to OpenAI’s public release of ChatGPT, and it is evidence that the race between Google/DeepMind and Microsoft/OpenAI is eroding ethics and safety practices.
Demis Hassabis, CEO of DeepMind, urged caution in his recent interview in Time:
He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.
“When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says.
“Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.”
Worse still, Hassabis points out, we are the guinea pigs.
Alphabet/Google is trying to accelerate a technology that its own subsidiary says is powerful and dangerous.
Just quoting from the NYT article:
The way I’m reading this, Google is behind on RLHF and worried about getting blasted by EU fines. Honestly, those aren’t humanity-dooming concerns, and it’s not a huge deal if they brush them off. However, you’re right that this is exactly the race dynamic AI safety has warned about for years. It would be good if the labs could reach some kind of agreement on exactly what requirements have to be met before we cross the “actually dangerous, do not rush past” line. Something like OpenAI’s Charter.
Maybe there ought to be a push for a multilateral agreement of this sort sooner rather than later? It would be good to do so before trust starts breaking down.
It’s somewhat surprising to me how this is shaking out. I would have expected DeepMind’s and OpenAI’s AGI research to be competing with one another*. But here it looks like Google is the engine of competition, motivated less by any future-focused ideas about AGI than by the fact that its core search/ad business model appears to be threatened by OpenAI’s AGI research.
*And hopefully cooperating with one another too.
(Cross-posted this comment from the EA Forum)