I have been using the same images from Tim’s post for years (literally since it first came out) to explain the basics of AI alignment to the uninitiated. It has worked wonders. On the other hand, I have shared the entire post many times and no one has ever read it.
I would imagine that a collaboration between Eliezer and Tim explaining the basics of alignment would strike a chord with many people out there. People are generally more open to discussing this kind of graphical explanation than to spending two hours reading a long post.
Let’s make some assumptions about Mark Zuckerberg:
Zuckerberg has above-average intelligence.
He has a deep interest in new technologies.
He is invested in a positive future for humanity.
He has some understanding of the risks associated with the development of superintelligent AI systems.
Given these assumptions, it’s reasonable to expect Zuckerberg to be concerned about AI safety and its potential impact on society.
Now, the question that has been bugging me for some weeks, ever since reading LeCun’s arguments:
Could it be that Zuckerberg is simply not informed about his subordinate’s views?
If so, someone should really apply pressure to get him informed, and perhaps even to replace LeCun as Chief Scientist at Meta AI.