My main problem with OpenAI is this: it’s one thing for them not to be focused on AI alignment, but are they even focused on AI “safety” in the loose sense of the word? Most of their published research consists of tweaks and improvements to deep learning techniques that enhance performance but do little to advance our theoretical understanding of them (which makes them pretty much the same as Google Brain, FAIR, and DeepMind in that regard). It even turned out that Ian Goodfellow, the inventor of GANs and a leading researcher on adversarial attacks against deep learning systems, left OpenAI and went back to Google because Google researchers were more interested than OpenAI in working on deep learning security issues...
On the $30 million grant from Open Philanthropy: I’ve seen it discussed on Hacker News and Reddit but not much here, and there seems to be plenty of confusion about what’s going on. It is quite a large amount, yet OpenAI already seems well funded, so the obvious question people have is: is this a ploy for the AI risk people to gain more control over OpenAI’s research direction? One thing I’m worried about is that there could be plenty of push-back on that, because it was such a bold move and the reasons Open Philanthropy gave for the grant don’t suggest anything of the sort. And there seems to be quite a lot of hostility towards AI safety research in general.
The linked quote from Ian Goodfellow:

Yes, I left OpenAI at the end of February and returned to Google Brain.
I enjoyed my time at OpenAI and am proud of the work my OpenAI colleagues and I accomplished. I returned to Google Brain because as time went on I found that my research focus on adversarial examples and related technologies like differential privacy saw me collaborate predominantly with colleagues at Google.
AI alignment isn’t really OpenAI’s primary mission. They’re seeking to democratize access to AI technology by developing AI technologies in the open (on GitHub, etc.) with permissive licenses. AI alignment is sort of a side research area that they commit a small amount of time and resources to.
It says right on OpenAI’s About page:
That, as stated, looks like AI alignment to me, although I agree with you that in practice they are doing exactly what you said.
They’re buying Holden a seat on the board in order to exercise unspecified influence over OpenAI. This is pretty clear from their grant writeup. I plan to write a bit about this soon.
Um. I talked with him in person and asked him about this. He said that OpenAI has a lot of RL researchers and not many GAN researchers. The GAN researchers who he collaborated with on a day-to-day basis via video chat were all at Google Brain, so he went back. It was a logistics decision, not a philosophical one.