Elon Musk announces xAI

Link post

Some quotes & a few personal opinions:

FT reports:

Musk is also in discussions with a number of investors in SpaceX and Tesla about putting money into his new venture, said a person with direct knowledge of the talks. “A bunch of people are investing in it . . . it’s real and they are excited about it,” the person said. ... Musk recently changed the name of Twitter to X Corp in company filings, as part of his plans to create an “everything app” under the brand “X”. For the new project, Musk has secured thousands of high-powered GPU processors from Nvidia, said people with knowledge of the move. … During a Twitter Spaces interview this week, Musk was asked about a Business Insider report that Twitter had bought as many as 10,000 Nvidia GPUs. “It seems like everyone and their dog is buying GPUs at this point,” Musk said. “Twitter and Tesla are certainly buying GPUs.” People familiar with Musk’s thinking say his new AI venture is separate from his other companies, though it could use Twitter content as data to train its language model and tap Tesla for computing resources.
According to the xAI website, the initial team is composed of:
Elon Musk
Igor Babuschkin
Manuel Kroiss
Yuhuai (Tony) Wu
Christian Szegedy
Jimmy Ba
Toby Pohlen
Ross Nordeen
Kyle Kosic
Greg Yang
Guodong Zhang
Zihang Dai
and they are “advised by Dan Hendrycks, who currently serves as the director of the Center for AI Safety.”
According to reports, xAI will seek to create a “maximally curious” AI, and this also seems to be the main new idea for how to solve safety, with Musk explaining: “If it tried to understand the true nature of the universe, that’s actually the best thing that I can come up with from an AI safety standpoint,” … “I think it is going to be pro-humanity from the standpoint that humanity is just much more interesting than not-humanity.”
My personal comments:
Sorry, but at face value, this just does not seem like a great plan from a safety perspective. It is similar to Elon Musk’s previous big bet on how to make us safe: making AI open-source and widely distributed (“giving everyone access to new ideas”).
Sorry, but given the Center for AI Safety’s moves to position itself as a kind of central, publicly representative voice of AI safety (including the name choice and organizing the widely reported Statement on AI Risk), publicly associating their brand with xAI seems a strange choice.