I believe pioneering responsibly should be a priority for anyone working in tech. But I also recognise that it’s especially important when it comes to powerful, widespread technologies like artificial intelligence. AI is arguably the most impactful technology being developed today. It has the potential to benefit humanity in innumerable ways – from combating climate change to preventing and treating disease. But it’s essential that we account for both its positive and negative downstream impacts. For example, we need to design AI systems carefully and thoughtfully to avoid amplifying human biases, such as in the contexts of hiring and policing.
Helpfully, DeepMind’s chief operating officer, Lila Ibrahim (“a passionate advocate for social impact in her work and her personal life”), who would be intimately involved in funding safety research, overseeing large-scale deployment, and responding to problems, has written a blog post about what she thinks AI safety involves and what concerns her about doing AI research responsibly: “Building a culture of pioneering responsibly: How to ensure we benefit society with the most impactful technology being developed today”
She has also written enthusiastically about DeepMind’s funding for “racial justice efforts”.