One of his main steps was founding OpenAI, which looks like a questionable decision now from an AI Safety standpoint (they push the capabilities of language models and reinforcement learning forward while driving their original safety team away), and which looked fishy to me even at the time, simply because more initiatives make coordination harder.
I agree that Musk takes AI risk seriously, and I understand the “try something” mentality. But I suspect he founded OpenAI because he didn’t trust a safety project he didn’t have his hands on himself; then later he realized OpenAI wasn’t working as he hoped, so he drifted away to focus on Neuralink.