Let’s make some assumptions about Mark Zuckerberg:
1. Zuckerberg has above-average intelligence.
2. He has a deep interest in new technologies.
3. He is invested in a positive future for humanity.
4. He has some understanding of the risks associated with the development of superintelligent AI systems.
Given these assumptions, it’s reasonable to expect Zuckerberg to be concerned about AI safety and its potential impact on society.
Now, the question that has been bugging me for some weeks, ever since reading LeCun’s arguments:
Could it be that Zuckerberg is not informed about his subordinate’s views?
If so, someone should really push for him to be informed, and perhaps even for LeCun to be replaced as Chief Scientist at Meta AI.
Assumption 4 seems unreasonable. From Zuck’s perspective, his chief AI scientist, whom he trusts, holds milder views on AI risk than some unknown people with no credentials do (again, from his perspective).