Google’s DeepMind has 4 pages of blog posts about their fast-moving research to build artificial intelligence that can solve problems on its own. In contrast, they have only 2 posts total about the ethics and safeguards for doing so. We can’t necessarily rely on the top AI labs in the world to think of everything that could go wrong with their increasingly powerful systems. New forms of oversight, nimbler than government regulation or IRBs, need to be invented to keep this powerful technology aligned with human goals. (Policymakers)
I believe pioneering responsibly should be a priority for anyone working in tech. But I also recognise that it’s especially important when it comes to powerful, widespread technologies like artificial intelligence. AI is arguably the most impactful technology being developed today. It has the potential to benefit humanity in innumerable ways – from combating climate change to preventing and treating disease. But it’s essential that we account for both its positive and negative downstream impacts. For example, we need to design AI systems carefully and thoughtfully to avoid amplifying human biases, such as in the contexts of hiring and policing.
Helpfully, DeepMind’s chief operating officer, Lila Ibrahim (“a passionate advocate for social impact in her work and her personal life”), who would be intimately involved in any funding of safety research, in overseeing large-scale deployment, and in reacting to problems, has a blog post all about what she thinks AI safety is about and what concerns her in doing AI research responsibly: “Building a culture of pioneering responsibly: How to ensure we benefit society with the most impactful technology being developed today”
She has also written enthusiastically about DeepMind’s funding for “racial justice efforts”.