(I think he frames it as though he himself is the main person steering the tech, rather than an organisation or humanity steering it; at least that’s how it feels to me, from the way he behaves.)
They released a big LLM, the “Grok”. With their crew of stars I hoped for a more interesting direction, but an LLM as a start is not unreasonable (one does need a performant LLM as a component).
I think he frames it as though he himself is the main person steering the tech
Yeah… Safety-wise, I thought he deferred to Ilya and to the new “superalignment” team Ilya has been co-leading...
But perhaps he was not doing that consistently enough...
They released a big LLM, the “Grok”.
I haven’t played around with Grok so I’m not sure how capable or safe it is. But I hope Elon and his team of experts get the safety problem right, as he has built companies with extraordinary achievements. At the very least, Elon has demonstrated his aspiration to better humanity in other fields (internet satellites, space exploration, and EVs), and I hope that translates to xAI and Twitter.
the new “superalignment” team Ilya has been co-leading
I felt differently about Ilya co-leading; to me it suggested that something was happening inside OpenAI. That Ilya needed to co-lead the new safety direction felt like a sign that something was off internally. So maybe today’s announcement is related to that too.
Pretty sure there will be new info from OpenAI in the next week or two. Hoping it favors more safety-oriented directions, long term.
I haven’t played around with Grok so I’m not sure how capable or safe it is.
I expect the safety of that to be at zero (they don’t think GPT-3.5-level LLMs are a problem in this sense; besides, they market it almost as an “anything goes, anti-censorship LLM”).
But that’s not really the issue; when a system starts being capable of writing code reasonably well, then one starts getting a problem… I hope that when they come to that, to approaching AIs which can create better AIs, they’ll start taking safety seriously… Otherwise, we’ll be in trouble...
Ilya co-leading
I thought he was the appropriately competent person (he was probably the #1 AI scientist in the world). The right person for the most important task in the world...
And the “superalignment” team at OpenAI was… not very strong. The original official “superalignment” approach was unrealistic and hence not good enough. I made a transcript of some of his thoughts, https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a, and it was obvious that his thinking was different from the previous OpenAI “superalignment” approach and much better (as in, “actually had a chance to succeed”)...
Of course, now, since it looks like the “coup” has mostly been his doing, I am less sure that this is the leadership OpenAI and OpenAI safety need. The way it was done has certainly been too erratic. Safety efforts should not evoke the feel of a “last minute emergency”...
I hope that when they come to that, to approaching AIs which can create better AIs, they’ll start taking safety seriously…
Yeah, let’s see where they will steer Grok.
The original official “superalignment” approach was unrealistic and hence not good enough.
Yeah, I agree with your analysis of the superalignment agenda; I don’t think it’s a good use of the 20% of compute resources that they have. I even think that allocating 20% of resources to AI safety doesn’t go deep enough into the problem, as I think a 100% allocation[1] is necessary.
I haven’t had much time to study Ilya, but I like the way he explains his arguments. I hope they (Ilya, the board, and Mira or a new CEO) will be better at expanding the tech than Sam is. Let’s see.
I’m still figuring out Elon’s xAI.
But with regard to how Sam behaves: if he doesn’t improve his framing[1] of what AI could be for the future of humanity, I expect the same results.
At least Grok refuses to give you instructions for making cocaine.
Well. If nothing else, the sass is refreshing after the sycophancy of all the other LLMs.
That’s good! So, at least a bit of safety fine-tuning is there...
Good to know...
I think the safest AI will be the most profitable technology, as everyone will want to promote it and build on top of it.